aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1904.09757 | 2938908220 | This paper proposes a novel Non-Local Attention Optimized Deep Image Compression (NLAIC) framework, which is built on top of the popular variational auto-encoder (VAE) structure. Our NLAIC framework embeds non-local operations in the encoders and decoders for both image and latent feature probability information (known as hyperprior) to capture both local and global correlations, and apply attention mechanism to generate masks that are used to weigh the features for the image and hyperprior, which implicitly adapt bit allocation for different features based on their importance. Furthermore, both hyperpriors and spatial-channel neighbors of the latent features are used to improve entropy coding. The proposed model outperforms the existing methods on Kodak dataset, including learned (e.g., Balle2019, Balle2018) and conventional (e.g., BPG, JPEG2000, JPEG) image compression methods, for both PSNR and MS-SSIM distortion metrics. | Self Attention. The self-attention mechanism is widely used in deep-learning-based natural language processing (NLP) @cite_26 @cite_14 @cite_8. It can be described as a mapping from a query and a set of key-value pairs to an output. For example, Vaswani et al. @cite_8 proposed the multi-headed attention method that is extensively used for machine translation. For low-level vision tasks @cite_12 @cite_4 @cite_2, the self-attention mechanism produces features with spatially adaptive activations and enables adaptive information allocation that emphasizes more challenging areas (e.g., rich textures, saliency, etc.). | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_4",
"@cite_8",
"@cite_2",
"@cite_12"
],
"mid": [
"2229833550",
"2949335953",
"2951675964",
"2963403868",
"2962891349",
"2950217418"
],
"abstract": [
"We propose multi-way, multilingual neural machine translation. The proposed approach enables a single neural translation model to translate between multiple languages, with a number of parameters that grows only linearly with the number of languages. This is made possible by having a single attention mechanism that is shared across all language pairs. We train the proposed multi-way, multilingual model on ten language pairs from WMT'15 simultaneously and observe clear performance improvements over models trained on only one language pair. In particular, we observe that the proposed model significantly improves the translation quality of low-resource language pairs.",
"An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.",
"Lossy image compression is generally formulated as a joint rate-distortion optimization to learn encoder, quantizer, and decoder. However, the quantizer is non-differentiable, and discrete entropy estimation usually is required for rate control. These make it very challenging to develop a convolutional network (CNN)-based image compression system. In this paper, motivated by that the local information content is spatially variant in an image, we suggest that the bit rate of the different parts of the image should be adapted to local content. And the content aware bit rate is allocated under the guidance of a content-weighted importance map. Thus, the sum of the importance map can serve as a continuous alternative of discrete entropy estimation to control compression rate. And binarizer is adopted to quantize the output of encoder due to the binarization scheme is also directly defined by the importance map. Furthermore, a proxy function is introduced for binary operation in backward propagation to make it differentiable. Therefore, the encoder, decoder, binarizer and importance map can be jointly optimized in an end-to-end manner by using a subset of the ImageNet database. In low bit rate image compression, experiments show that our system significantly outperforms JPEG and JPEG 2000 by structural similarity (SSIM) index, and can produce the much better visual result with sharp edges, rich textures, and fewer artifacts.",
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.",
"Deep Neural Networks trained as image auto-encoders have recently emerged as a promising direction for advancing the state-of-the-art in image compression. The key challenge in learning such networks is twofold: To deal with quantization, and to control the trade-off between reconstruction error (distortion) and entropy (rate) of the latent image representation. In this paper, we focus on the latter challenge and propose a new technique to navigate the rate-distortion trade-off for an image compression auto-encoder. The main idea is to directly model the entropy of the latent representation by using a context model: A 3D-CNN which learns a conditional probability model of the latent distribution of the auto-encoder. During training, the auto-encoder makes use of the context model to estimate the entropy of its representation, and the context model is concurrently updated to learn the dependencies between the symbols in the latent representation. Our experiments show that this approach, when measured in MS-SSIM, yields a state-of-the-art image compression system based on a simple convolutional auto-encoder.",
"In this paper, we propose a residual non-local attention network for high-quality image restoration. Without considering the uneven distribution of information in the corrupted images, previous methods are restricted by local convolutional operation and equal treatment of spatial- and channel-wise features. To address this issue, we design local and non-local attention blocks to extract features that capture the long-range dependencies between pixels and pay more attention to the challenging parts. Specifically, we design trunk branch and (non-)local mask branch in each (non-)local attention block. The trunk branch is used to extract hierarchical features. Local and non-local mask branches aim to adaptively rescale these hierarchical features with mixed attentions. The local mask branch concentrates on more local structures with convolutional operations, while non-local attention considers more about long-range dependencies in the whole feature map. Furthermore, we propose residual local and non-local attention learning to train the very deep network, which further enhance the representation ability of the network. Our proposed method can be generalized for various image restoration applications, such as image denoising, demosaicing, compression artifacts reduction, and super-resolution. Experiments demonstrate that our method obtains comparable or better results compared with recently leading methods quantitatively and visually."
]
} |
1904.09757 | 2938908220 | This paper proposes a novel Non-Local Attention Optimized Deep Image Compression (NLAIC) framework, which is built on top of the popular variational auto-encoder (VAE) structure. Our NLAIC framework embeds non-local operations in the encoders and decoders for both image and latent feature probability information (known as hyperprior) to capture both local and global correlations, and apply attention mechanism to generate masks that are used to weigh the features for the image and hyperprior, which implicitly adapt bit allocation for different features based on their importance. Furthermore, both hyperpriors and spatial-channel neighbors of the latent features are used to improve entropy coding. The proposed model outperforms the existing methods on Kodak dataset, including learned (e.g., Balle2019, Balle2018) and conventional (e.g., BPG, JPEG2000, JPEG) image compression methods, for both PSNR and MS-SSIM distortion metrics. | In image compression, quantized attention masks are commonly used for adaptive bit allocation, e.g., Li et al. @cite_4 use three layers of local convolutions and Mentzer et al. @cite_2 select one of the quantized features. Unfortunately, these methods require extra explicit signaling overhead. Our model adopts an attention mechanism close to @cite_4 @cite_2 but applies multiple layers of non-local as well as convolutional operations to automatically generate attention masks from the input image. The attention masks are applied directly to the temporary latent features to generate the final latent features to be coded. Thus, no extra bits are needed to code the masks. | {
"cite_N": [
"@cite_4",
"@cite_2"
],
"mid": [
"2951675964",
"2962891349"
],
"abstract": [
"Lossy image compression is generally formulated as a joint rate-distortion optimization to learn encoder, quantizer, and decoder. However, the quantizer is non-differentiable, and discrete entropy estimation usually is required for rate control. These make it very challenging to develop a convolutional network (CNN)-based image compression system. In this paper, motivated by that the local information content is spatially variant in an image, we suggest that the bit rate of the different parts of the image should be adapted to local content. And the content aware bit rate is allocated under the guidance of a content-weighted importance map. Thus, the sum of the importance map can serve as a continuous alternative of discrete entropy estimation to control compression rate. And binarizer is adopted to quantize the output of encoder due to the binarization scheme is also directly defined by the importance map. Furthermore, a proxy function is introduced for binary operation in backward propagation to make it differentiable. Therefore, the encoder, decoder, binarizer and importance map can be jointly optimized in an end-to-end manner by using a subset of the ImageNet database. In low bit rate image compression, experiments show that our system significantly outperforms JPEG and JPEG 2000 by structural similarity (SSIM) index, and can produce the much better visual result with sharp edges, rich textures, and fewer artifacts.",
"Deep Neural Networks trained as image auto-encoders have recently emerged as a promising direction for advancing the state-of-the-art in image compression. The key challenge in learning such networks is twofold: To deal with quantization, and to control the trade-off between reconstruction error (distortion) and entropy (rate) of the latent image representation. In this paper, we focus on the latter challenge and propose a new technique to navigate the rate-distortion trade-off for an image compression auto-encoder. The main idea is to directly model the entropy of the latent representation by using a context model: A 3D-CNN which learns a conditional probability model of the latent distribution of the auto-encoder. During training, the auto-encoder makes use of the context model to estimate the entropy of its representation, and the context model is concurrently updated to learn the dependencies between the symbols in the latent representation. Our experiments show that this approach, when measured in MS-SSIM, yields a state-of-the-art image compression system based on a simple convolutional auto-encoder."
]
} |
1904.09757 | 2938908220 | This paper proposes a novel Non-Local Attention Optimized Deep Image Compression (NLAIC) framework, which is built on top of the popular variational auto-encoder (VAE) structure. Our NLAIC framework embeds non-local operations in the encoders and decoders for both image and latent feature probability information (known as hyperprior) to capture both local and global correlations, and apply attention mechanism to generate masks that are used to weigh the features for the image and hyperprior, which implicitly adapt bit allocation for different features based on their importance. Furthermore, both hyperpriors and spatial-channel neighbors of the latent features are used to improve entropy coding. The proposed model outperforms the existing methods on Kodak dataset, including learned (e.g., Balle2019, Balle2018) and conventional (e.g., BPG, JPEG2000, JPEG) image compression methods, for both PSNR and MS-SSIM distortion metrics. | Image Compression Architectures. DNN-based image compression generally relies on well-known autoencoders, whose back-propagation scheme requires all steps to be differentiable for end-to-end training. Several methods (e.g., adding uniform noise @cite_11, replacing the direct derivative with the derivative of the expectation @cite_21, and soft-to-hard quantization @cite_17) have been developed to approximate the non-differentiable quantization process. On the other hand, entropy-rate modeling of the quantized latent features is another critical issue for learned image compression. PixelCNNs @cite_1 and VAEs are commonly used for entropy estimation following Bayesian generative rules. Recently, conditioning probability estimates jointly on autoregressive neighbors of the latent feature maps and on hyperpriors has shown significant improvement in entropy coding. | {
"cite_N": [
"@cite_21",
"@cite_1",
"@cite_17",
"@cite_11"
],
"mid": [
"2516038988",
"2953318193",
"2964164354",
"2552465432"
],
"abstract": [
"This paper presents a set of full-resolution lossy image compression methods based on neural networks. Each of the architectures we describe can provide variable compression rates during deployment without requiring retraining of the network: each network need only be trained once. All of our architectures consist of a recurrent neural network (RNN)-based encoder and decoder, a binarizer, and a neural network for entropy coding. We compare RNN types (LSTM, associative LSTM) and introduce a new hybrid of GRU and ResNet. We also study \"one-shot\" versus additive reconstruction architectures and introduce a new scaled-additive framework. We compare to previous work, showing improvements of 4.3 -8.8 AUC (area under the rate-distortion curve), depending on the perceptual metric used. As far as we know, this is the first neural network architecture that is able to outperform JPEG at image compression across most bitrates on the rate-distortion curve on the Kodak dataset images, with and without the aid of entropy coding.",
"Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.",
"We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both.",
"We describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer. Under certain conditions, the relaxed loss function may be interpreted as the log likelihood of a generative model, as implemented by a variational autoencoder. Unlike these models, however, the compression model must operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods. More importantly, we observe a dramatic improvement in visual quality for all images at all bit rates, which is supported by objective quality estimates using MS-SSIM."
]
} |
1904.09763 | 2951417413 | In this paper, we propose a novel algorithm to rectify illumination of the digitized documents by eliminating shading artifacts. Firstly, a topographic surface of an input digitized document is created using luminance value of each pixel. Then the shading artifact on the document is estimated by simulating an immersion process. The simulation of the immersion process is modeled using a novel diffusion equation with an iterative update rule. After estimating the shading artifacts, the digitized document is reconstructed using the Lambertian surface model. In order to evaluate the performance of the proposed algorithm, we conduct rigorous experiments on a set of digitized documents which is generated using smartphones under challenging lighting conditions. According to the experimental results, it is found that the proposed method produces promising illumination correction results and outperforms the results of the state-of-the-art methods. | Illumination distortion in camera-captured digitized documents with uneven surfaces is addressed in the literature using 3D reconstruction models. Lu et al. @cite_25 @cite_5 proposed a method for removing shading artifacts on camera-captured digitized documents using 3D shape reconstruction of the documents. In their method, the 3D shape of the document is reconstructed by fitting the illumination values to a polynomial surface. Meng et al. @cite_19 proposed another 3D-shape-reconstruction-based method using isometric mesh construction, assuming that the page shape is a general cylindrical surface (GCS). The GCS assumption has proven to work well for modeling the 3D shape of real camera-captured document images @cite_4. Tian et al. @cite_6 proposed another 3D-reconstruction-based method that uses the text regions of digitized documents. In their method, the perspective distortion of the text region is estimated using the text orientation and horizontal text lines. Although these methods are effective at correctly identifying illumination distortions caused by the uneven surface of the target document, they are limited in rectifying shading artifacts on digitized documents in general (e.g., shadows cast by light-occluding objects). | {
"cite_N": [
"@cite_4",
"@cite_6",
"@cite_19",
"@cite_5",
"@cite_25"
],
"mid": [
"236100371",
"2033628967",
"2170332640",
"2122929638",
"2156605212"
],
"abstract": [
"This paper presents a new document image dewarping method that removes geometric distortions in camera-captured document images. The proposed method does not directly use the text-line which has been the most widely used feature for the document dewarping. Instead, we use the discrete representation of text-lines and text-blocks which are the sets of connected components. Also, we model the geometric distortions caused by page curl and perspective view as the generalized cylindrical surfaces and camera rotation respectively. With these distortion models and the discrete representation of the features, we design a cost function whose minimization yields the parameters of the distortion model. In the cost function, we encode the properties of the pages such as text-block alignment, line-spacing, and the straightness of text-lines. By describing the text features using the sets of discrete points, the cost function can be easily defined and efficiently solved by Levenberg-Marquadt algorithm. Experiments show that the proposed method works well for the various layouts and curved surfaces, and compares favorably with the conventional methods on the standard dataset. HighlightsDocument dewarping is an important problem in camera-based OCR.We formulate the dewarping as an optimization problem.The proposed method performs dewarping in a fully automatic manner.The proposed method can handle various layouts of documents.Our method yields the improved OCR performances compared with other methods.",
"Distortions in images of documents, such as the pages of books, adversely affect the performance of optical character recognition (OCR) systems. Removing such distortions requires the 3D deformation of the document that is often measured using special and precisely calibrated hardware (stereo, laser range scanning or structured light). In this paper, we introduce a new approach that automatically reconstructs the 3D shape and rectifies a deformed text document from a single image. We first estimate the 2D distortion grid in an image by exploiting the line structure and stroke statistics in text documents. This approach does not rely on more noise-sensitive operations such as image binarization and character segmentation. The regularity in the text pattern is used to constrain the 2D distortion grid to be a perspective projection of a 3D parallelogram mesh. Based on this constraint, we present a new shape-from-texture method that computes the 3D deformation up to a scale factor using SVD. Unlike previous work, this formulation imposes no restrictions on the shape (e.g., a developable surface). The estimated shape is then used to remove both geometric distortions and photometric (shading) effects in the image. We demonstrate our techniques on documents containing a variety of languages, fonts and sizes.",
"In this paper, we propose a metric rectification method to restore an image from a single camera-captured document image. The core idea is to construct an isometric image mesh by exploiting the geometry of page surface and camera. Our method uses a general cylindrical surface (GCS) to model the curved page shape. Under a few proper assumptions, the printed horizontal text lines are shown to be line convergent symmetric. This property is then used to constrain the estimation of various model parameters under perspective projection. We also introduce a paraperspective projection to approximate the nonlinear perspective projection. A set of close-form formulas is thus derived for the estimate of GCS directrix and document aspect ratio. Our method provides a straightforward framework for image metric rectification. It is insensitive to camera positions, viewing angles, and the shapes of document pages. To evaluate the proposed method, we implemented comprehensive experiments on both synthetic and real-captured images. The results demonstrate the efficiency of our method. We also carried out a comparative experiment on the public CBDAR2007 data set. The experimental results show that our method outperforms the state-of-the-art methods in terms of OCR accuracy and rectification errors.",
"This paper presents a document image binarization technique that segments text from badly illuminated document images. Based on the observations that text documents normally lie over a planar or smoothly curved surface and have a uniformly colored background, badly illuminated document images are binarized by using a smoothing polynomial surface, which estimates the shading variation and compensates the shading degradation based on the estimated shading variation. Badly illuminated document images are accordingly binarized through the global thresholding of the compensated document images. Compared with the reported methods, the proposed technique is tolerant to the variations in text size and document contrast. At the same time, it is much faster and able to produce a binary text image with little background noise.",
"Document images often suffer from different types of degradation that renders the document image binarization a challenging task. This paper presents a document image binarization technique that segments the text from badly degraded document images accurately. The proposed technique is based on the observations that the text documents usually have a document background of the uniform color and texture and the document text within it has a different intensity level compared with the surrounding document background. Given a document image, the proposed technique first estimates a document background surface through an iterative polynomial smoothing procedure. Different types of document degradation are then compensated by using the estimated document background surface. The text stroke edge is further detected from the compensated document image by using L1-norm image gradient. Finally, the document text is segmented by a local threshold that is estimated based on the detected text stroke edges. The proposed technique was submitted to the recent document image binarization contest (DIBCO) held under the framework of ICDAR 2009 and has achieved the top performance among 43 algorithms that are submitted from 35 international research groups."
]
} |
1904.09763 | 2951417413 | In this paper, we propose a novel algorithm to rectify illumination of the digitized documents by eliminating shading artifacts. Firstly, a topographic surface of an input digitized document is created using luminance value of each pixel. Then the shading artifact on the document is estimated by simulating an immersion process. The simulation of the immersion process is modeled using a novel diffusion equation with an iterative update rule. After estimating the shading artifacts, the digitized document is reconstructed using the Lambertian surface model. In order to evaluate the performance of the proposed algorithm, we conduct rigorous experiments on a set of digitized documents which is generated using smartphones under challenging lighting conditions. According to the experimental results, it is found that the proposed method produces promising illumination correction results and outperforms the results of the state-of-the-art methods. | Digitized documents generated by scanners suffer relatively fewer shading artifacts because scanners use a single light source with a uniform light direction. For such digitized documents, the document surface is rendered smooth and constant. Under various assumptions, these documents can be considered to have a parametric surface @cite_29 @cite_0 @cite_34. Such assumptions lead to a straightforward reconstruction of the 3D shape of scanner-captured digitized documents. However, these assumption-based illumination correction techniques are not applicable to general digitized documents. | {
"cite_N": [
"@cite_0",
"@cite_29",
"@cite_34"
],
"mid": [
"2170265032",
"2166169751",
"2169767329"
],
"abstract": [
"Scanning a document page from a thick bound volume often results in two kinds of distortions in the scanned image, i.e., shade along the \"spine\" of the book and warping in the shade area. In this paper, we propose an efficient restoration method based on the discovery of the 3D shape of a book surface from the shading information in a scanned document image. From a technical point of view, this shape from shading (SFS) problem in real-world environments is characterized by 1) a proximal and moving light source, 2) Lambertian reflection, 3) nonuniform albedo distribution, and 4) document skew. Taking all these factors into account, we first build practical models (consisting of a 3D geometric model and a 3D optical model) for the practical scanning conditions to reconstruct the 3D shape of the book surface. We next restore the scanned document image using this shape based on deshading and dewarping models. Finally, we evaluate the restoration results by comparing our estimated surface shape with the real shape as well as the OCR performance on original and restored document images. The results show that the geometric and photometric distortions are mostly removed and the OCR results are improved markedly.",
"A scanned image of an opened book page often suffers from various scanning artifacts known as scanning shading and dark borders noises. These artifacts will degrade the qualities of the scanned images and cause many problems to the subsequent process of document image analysis. In this paper, we propose an effective method to rectify these scanning artifacts. Our method comes from two observations: that the shading surface of most scanned book pages is quasi-concave and that the document contents are usually printed on a sheet of plain and bright paper. Based on these observations, a shading image can be accurately extracted via convex hulls-based image reconstruction. The proposed method proves to be surprisingly effective for image shading correction and dark borders removal. It can restore a desired shading-free image and meanwhile yield an illumination surface of high quality. More importantly, the proposed method is nonparametric and thus does not involve any user interactions or parameter fine-tuning. This would make it very appealing to nonexpert users in applications. Extensive experiments based on synthetic and real-scanned document images demonstrate the efficiency of the proposed method.",
"When one scans a document page from a thick bound volume, the curvature of the page to be scanned results in two kinds of distortion in the scanned document images: i) shade along the 'spine' of the book; and ii) warping in the shade area. In this paper, we propose an efficient restoration method based on the discovery of the 3D shape of a book surface from the shading information in a scanned document image. We first build practical models namely a 3D geometric model and a 3D optical model for the practical scanning conditions to reconstruct the 3D shape of book surface. We next restore the scanned document image using this shape based on de-shading and de-warping models. Finally, we evaluate the restoration results by comparing the OCR (optical character recognition) performance on the original and restored document images. The experiments show that the geometric and photometric distortions are mostly removed and the OCR results are improved markedly."
]
} |
1904.09605 | 2939807111 | Sparse reward is one of the biggest challenges in reinforcement learning (RL). In this paper, we propose a novel method called Generative Exploration and Exploitation (GENE) to overcome sparse reward. GENE dynamically changes the start state of agent to the generated novel state to encourage the agent to explore the environment or to the generated rewarding state to boost the agent to exploit the received reward signal. GENE relies on no prior knowledge about the environment and can be combined with any RL algorithm, no matter on-policy or off-policy, single-agent or multi-agent. Empirically, we demonstrate that GENE significantly outperforms existing methods in four challenging tasks with only binary rewards indicating whether or not the task is completed, including Maze, Goal Ant, Pushing, and Cooperative Navigation. The ablation studies verify that GENE can adaptively tradeoff between exploration and exploitation as the learning progresses by automatically adjusting the proportion between generated novel states and rewarding states, which is the key for GENE to solving these challenging tasks effectively and efficiently. | The methods of experience replay focus on which experiences to store and how to use them to speed up the training. Prioritized experience replay @cite_1 measures the priority of experiences by the magnitude of their temporal-difference errors and replays transitions with high priority more frequently. Andrychowicz et al. (2017) proposed hindsight experience replay (HER) for goal-based tasks. HER is inspired by the insight that one can learn almost as much from achieving an undesired outcome as from the desired one. After experiencing some episodes, HER arbitrarily selects a set of additional goals and uses them to replace the original goals of the transitions in the replay buffer. 
However, learning additional goals slows down the learning process, and with random exploration the agent rarely sees a real reward signal. Moreover, HER only works with off-policy methods. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2201581102"
],
"abstract": [
"Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games."
]
} |
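The prioritized experience replay scheme summarized in the related-work passage above — replaying transitions in proportion to the magnitude of their temporal-difference errors — can be sketched as follows. The function name and the defaults α = 0.6, β = 0.4 are our illustrative choices, not values taken from this passage:

```python
import random

def sample_prioritized(td_errors, k, alpha=0.6, beta=0.4, rng=random):
    """Proportional prioritization: P(i) ~ (|td_error_i| + eps)^alpha,
    where alpha = 0 recovers uniform replay. Returns sampled indices
    together with normalized importance-sampling weights that correct
    the bias introduced by non-uniform sampling."""
    n = len(td_errors)
    priorities = [(abs(e) + 1e-6) ** alpha for e in td_errors]
    total = sum(priorities)
    probs = [p / total for p in priorities]
    weights = [(n * p) ** (-beta) for p in probs]
    w_max = max(weights)
    weights = [w / w_max for w in weights]  # scale so every weight <= 1
    idx = rng.choices(range(n), weights=probs, k=k)
    return idx, [weights[i] for i in idx]
```

In a full agent, β would be annealed toward 1 over training, priorities would be refreshed after each replay, and a sum-tree would replace the linear scan for efficiency.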
1904.09540 | 2950216376 | During the past few decades, knowledge bases (KBs) have experienced rapid growth. Nevertheless, most KBs still suffer from serious incompletion. Researchers proposed many tasks such as knowledge base completion and relation prediction to help build the representation of KBs. However, there are some issues unsettled towards enriching the KBs. Knowledge base completion and relation prediction assume that we know two elements of the fact triples and we are going to predict the missing one. This assumption is too restricted in practice and prevents it from discovering new facts directly. To address this issue, we propose a new task, namely, fact discovery from knowledge base. This task only requires that we know the head entity and the goal is to discover facts associated with the head entity. To tackle this new problem, we propose a novel framework that decomposes the discovery problem into several facet discovery components. We also propose a novel auto-encoder based facet component to estimate some facets of the fact. Besides, we propose a feedback learning component to share the information between each facet. We evaluate our framework using a benchmark dataset and the experimental results show that our framework achieves promising results. We also conduct extensive analysis of our framework in discovering different kinds of facts. The source code of this paper can be obtained from this https URL. | In recent years, many tasks @cite_27 have been proposed to help represent and enrich KBs. Tasks such as knowledge base completion (KBC) @cite_31 @cite_9 @cite_5 @cite_29 @cite_27 and relation prediction (RP) @cite_8 @cite_16 @cite_2 are widely studied and many models are proposed to improve the performance on these tasks. However, the intention of these tasks is to test the performance of models in representing KBs and thus they cannot be used directly to discover new facts of KBs. 
Moreover, our FDKB task is not a simple combination of the KBC and RP tasks, since both of them require knowing two elements of the triple, whereas we assume only the head entity is known. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_29",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_16"
],
"mid": [
"2107598941",
"2283196293",
"2433281745",
"2759136286",
"2499696929",
"2250342289",
"",
"1426956448"
],
"abstract": [
"Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora. We investigate an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACE-style algorithms, and allowing the use of corpora of any size. Our experiments use Freebase, a large semantic database of several thousand relations, to provide distant supervision. For each pair of entities that appears in some Freebase relation, we find all sentences containing those entities in a large unlabeled corpus and extract textual features to train a relation classifier. Our algorithm combines the advantages of supervised IE (combining 400,000 noisy pattern features in a probabilistic classifier) and unsupervised IE (extracting large numbers of relations from large corpora of any domain). Our model is able to extract 10,000 instances of 102 relations at a precision of 67.6%. We also analyze feature performance, showing that syntactic parse features are particularly helpful for relations that are ambiguous or lexically distant in their expression.",
"We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.",
"We model knowledge graphs for their completion by encoding each entity and relation into a numerical space. All previous work including Trans(E, H, R, and D) ignore the heterogeneity (some relations link many entity pairs and others do not) and the imbalance (the number of head entities and that of tail entities in a relation could be different) of knowledge graphs. In this paper, we propose a novel approach TranSparse to deal with the two issues. In TranSparse, transfer matrices are replaced by adaptive sparse matrices, whose sparse degrees are determined by the number of entities (or entity pairs) linked by relations. In experiments, we design structured and unstructured sparse patterns for transfer matrices and analyze their advantages and disadvantages. We evaluate our approach on triplet classification and link prediction tasks. Experimental results show that TranSparse outperforms Trans(E, H, R, and D) significantly, and achieves state-of-the-art performance.",
"Knowledge graph (KG) embedding is to embed components of a KG including entities and relations into continuous vector spaces, so as to simplify the manipulation while preserving the inherent structure of the KG. It can benefit a variety of downstream tasks such as KG completion and relation extraction, and hence has quickly gained massive attention. In this article, we provide a systematic review of existing techniques, including not only the state-of-the-arts but also those with latest trends. Particularly, we make the review based on the type of information used in the embedding task. Techniques that conduct embedding using only facts observed in the KG are first introduced. We describe the overall framework, specific model design, typical training procedures, as well as pros and cons of such techniques. After that, we discuss techniques that further incorporate additional information besides facts. We focus specifically on the use of entity types, relation paths, textual descriptions, and logical rules. Finally, we briefly introduce how KG embedding can be applied to and benefit a wide variety of downstream tasks such as KG completion, relation extraction, question answering, and so forth.",
"Representation learning (RL) of knowledge graphs aims to project both entities and relations into a continuous low-dimensional space. Most methods concentrate on learning representations with knowledge triples indicating relations between entities. In fact, in most knowledge graphs there are usually concise descriptions for entities, which cannot be well utilized by existing methods. In this paper, we propose a novel RL method for knowledge graphs taking advantages of entity descriptions. More specifically, we explore two encoders, including continuous bag-of-words and deep convolutional neural models to encode semantics of entity descriptions. We further learn knowledge representations with both triples and descriptions. We evaluate our method on two tasks, including knowledge graph completion and entity classification. Experimental results on real-world datasets show that, our method outperforms other baselines on the two tasks, especially under the zero-shot setting, which indicates that our method is capable of building representations for novel entities according to their descriptions. The source code of this paper can be obtained from https: github.com xrb92 DKRL.",
"Knowledge graphs are useful resources for numerous AI applications, but they are far from completeness. Previous work such as TransE, TransH and TransR CTransR regard a relation as translation from head entity to tail entity and the CTransR achieves state-of-the-art performance. In this paper, we propose a more fine-grained model named TransD, which is an improvement of TransR CTransR. In TransD, we use two vectors to represent a named symbol object (entity and relation). The first one represents the meaning of a(n) entity (relation), the other one is used to construct mapping matrix dynamically. Compared with TransR CTransR, TransD not only considers the diversity of relations, but also entities. TransD has less parameters and has no matrix-vector multiplication operations, which makes it can be applied on large scale graphs. In Experiments, we evaluate our model on two typical tasks including triplets classification and link prediction. Evaluation results show that our approach outperforms stateof-the-art methods.",
"",
"Representation learning of knowledge bases aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text. The source code of this paper can be obtained from https: github.com mrlyk423 relation_extraction."
]
} |
1904.09540 | 2950216376 | During the past few decades, knowledge bases (KBs) have experienced rapid growth. Nevertheless, most KBs still suffer from serious incompletion. Researchers proposed many tasks such as knowledge base completion and relation prediction to help build the representation of KBs. However, there are some issues unsettled towards enriching the KBs. Knowledge base completion and relation prediction assume that we know two elements of the fact triples and we are going to predict the missing one. This assumption is too restricted in practice and prevents it from discovering new facts directly. To address this issue, we propose a new task, namely, fact discovery from knowledge base. This task only requires that we know the head entity and the goal is to discover facts associated with the head entity. To tackle this new problem, we propose a novel framework that decomposes the discovery problem into several facet discovery components. We also propose a novel auto-encoder based facet component to estimate some facets of the fact. Besides, we propose a feedback learning component to share the information between each facet. We evaluate our framework using a benchmark dataset and the experimental results show that our framework achieves promising results. We also conduct extensive analysis of our framework in discovering different kinds of facts. The source code of this paper can be obtained from this https URL. | A common approach to solving these tasks is to build a knowledge base representation (KBR) model with different kinds of representations. Typically, one element of the triple is unknown; every entity is substituted into the unknown slot, and the scores of all resulting candidate triples are calculated and sorted. Many works focusing on KBR attempt to encode both entities and relations into a low-dimensional semantic space. 
KBR models can be divided into two major categories, namely translation-based models and semantic matching models @cite_27. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2759136286"
],
"abstract": [
"Knowledge graph (KG) embedding is to embed components of a KG including entities and relations into continuous vector spaces, so as to simplify the manipulation while preserving the inherent structure of the KG. It can benefit a variety of downstream tasks such as KG completion and relation extraction, and hence has quickly gained massive attention. In this article, we provide a systematic review of existing techniques, including not only the state-of-the-arts but also those with latest trends. Particularly, we make the review based on the type of information used in the embedding task. Techniques that conduct embedding using only facts observed in the KG are first introduced. We describe the overall framework, specific model design, typical training procedures, as well as pros and cons of such techniques. After that, we discuss techniques that further incorporate additional information besides facts. We focus specifically on the use of entity types, relation paths, textual descriptions, and logical rules. Finally, we briefly introduce how KG embedding can be applied to and benefit a wide variety of downstream tasks such as KG completion, relation extraction, question answering, and so forth."
]
} |
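The evaluation protocol described in the related-work passage above — substitute every entity into the unknown slot, score each candidate triple, then sort — can be sketched generically. Here `score` stands in for any KBR model's scoring function, and all names are ours:

```python
def rank_tail_candidates(head, relation, entities, score):
    """Substitute every entity into the unknown tail slot, score each
    candidate triple with the given KBR model, and return the entities
    sorted best-first, as in standard link-prediction evaluation."""
    scored = [(score(head, relation, e), e) for e in entities]
    scored.sort(key=lambda se: se[0], reverse=True)  # higher score = more plausible
    return [e for _, e in scored]

def hits_at_k(ranked, true_tail, k):
    """Hits@k metric: 1 if the correct tail is ranked in the top k, else 0."""
    return int(true_tail in ranked[:k])
```

Ranking heads (or relations) works the same way with the other slot left open; metrics such as mean rank and MRR are computed from the position of the true entity in this sorted list.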
1904.09540 | 2950216376 | During the past few decades, knowledge bases (KBs) have experienced rapid growth. Nevertheless, most KBs still suffer from serious incompletion. Researchers proposed many tasks such as knowledge base completion and relation prediction to help build the representation of KBs. However, there are some issues unsettled towards enriching the KBs. Knowledge base completion and relation prediction assume that we know two elements of the fact triples and we are going to predict the missing one. This assumption is too restricted in practice and prevents it from discovering new facts directly. To address this issue, we propose a new task, namely, fact discovery from knowledge base. This task only requires that we know the head entity and the goal is to discover facts associated with the head entity. To tackle this new problem, we propose a novel framework that decomposes the discovery problem into several facet discovery components. We also propose a novel auto-encoder based facet component to estimate some facets of the fact. Besides, we propose a feedback learning component to share the information between each facet. We evaluate our framework using a benchmark dataset and the experimental results show that our framework achieves promising results. We also conduct extensive analysis of our framework in discovering different kinds of facts. The source code of this paper can be obtained from this https URL. | Translation-based models such as TransE @cite_31 achieve promising performance in KBC with good computational efficiency. TransE regards the relation in a triple as a translation between the embeddings of the head and tail entities; that is, it obtains entity and relation embeddings by enforcing that the head entity vector plus the relation vector approximate the tail entity vector. However, TransE suffers from problems when dealing with 1-to-N, N-to-1 and N-to-N relations. 
To address this issue, TransH @cite_9 enables an entity to have distinct embeddings when involved in different relations. TransR @cite_28 models entities in entity space and uses projection matrices to map them into different relation spaces when they are involved in different relations; translations are then performed in the relation spaces. In addition, many other KBR models have been proposed to deal with various characteristics of KBs, such as TransD @cite_5, KG2E @cite_19, PTransE @cite_16, and TranSparse @cite_29. | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_29",
"@cite_19",
"@cite_5",
"@cite_31",
"@cite_16"
],
"mid": [
"2184957013",
"2283196293",
"2433281745",
"2073587810",
"2250342289",
"",
"1426956448"
],
"abstract": [
"Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: github.com mrlyk423 relation_extraction.",
"We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.",
"We model knowledge graphs for their completion by encoding each entity and relation into a numerical space. All previous work including Trans(E, H, R, and D) ignore the heterogeneity (some relations link many entity pairs and others do not) and the imbalance (the number of head entities and that of tail entities in a relation could be different) of knowledge graphs. In this paper, we propose a novel approach TranSparse to deal with the two issues. In TranSparse, transfer matrices are replaced by adaptive sparse matrices, whose sparse degrees are determined by the number of entities (or entity pairs) linked by relations. In experiments, we design structured and unstructured sparse patterns for transfer matrices and analyze their advantages and disadvantages. We evaluate our approach on triplet classification and link prediction tasks. Experimental results show that TranSparse outperforms Trans(E, H, R, and D) significantly, and achieves state-of-the-art performance.",
"The representation of a knowledge graph (KG) in a latent space recently has attracted more and more attention. To this end, some proposed models (e.g., TransE) embed entities and relations of a KG into a \"point\" vector space by optimizing a global loss function which ensures the scores of positive triplets are higher than negative ones. We notice that these models always regard all entities and relations in a same manner and ignore their (un)certainties. In fact, different entities and relations may contain different certainties, which makes identical certainty insufficient for modeling. Therefore, this paper switches to density-based embedding and propose KG2E for explicitly modeling the certainty of entities and relations, which learn the representations of KGs in the space of multi-dimensional Gaussian distributions. Each entity relation is represented by a Gaussian distribution, where the mean denotes its position and the covariance (currently with diagonal covariance) can properly represent its certainty. In addition, compared with the symmetric measures used in point-based methods, we employ the KL-divergence for scoring triplets, which is a natural asymmetry function for effectively modeling multiple types of relations. We have conducted extensive experiments on link prediction and triplet classification with multiple benchmark datasets (WordNet and Freebase). Our experimental results demonstrate that our method can effectively model the (un)certainties of entities and relations in a KG, and it significantly outperforms state-of-the-art methods (including TransH and TransR).",
"Knowledge graphs are useful resources for numerous AI applications, but they are far from completeness. Previous work such as TransE, TransH and TransR CTransR regard a relation as translation from head entity to tail entity and the CTransR achieves state-of-the-art performance. In this paper, we propose a more fine-grained model named TransD, which is an improvement of TransR CTransR. In TransD, we use two vectors to represent a named symbol object (entity and relation). The first one represents the meaning of a(n) entity (relation), the other one is used to construct mapping matrix dynamically. Compared with TransR CTransR, TransD not only considers the diversity of relations, but also entities. TransD has less parameters and has no matrix-vector multiplication operations, which makes it can be applied on large scale graphs. In Experiments, we evaluate our model on two typical tasks including triplets classification and link prediction. Evaluation results show that our approach outperforms stateof-the-art methods.",
"",
"Representation learning of knowledge bases aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text. The source code of this paper can be obtained from https: github.com mrlyk423 relation_extraction."
]
} |
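The translation intuition described above — the head entity vector plus the relation vector should land near the tail entity vector, with TransH first projecting entities onto a relation-specific hyperplane — can be written out directly. This is a minimal sketch; real models learn these vectors by minimizing a margin-based ranking loss:

```python
import math

def transe_score(h, r, t):
    """TransE plausibility: negative L2 norm of h + r - t, so a true
    triple (where h + r lands near t) scores close to 0."""
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

def transh_project(e, w):
    """TransH: project entity e onto the hyperplane with unit normal w,
    giving the relation-specific entity embedding used before translation."""
    dot = sum(ei * wi for ei, wi in zip(e, w))
    return [ei - dot * wi for ei, wi in zip(e, w)]
```

Because the projection in `transh_project` depends on the relation's normal vector, the same entity can behave differently under different relations, which is how TransH copes with 1-to-N, N-to-1 and N-to-N patterns that defeat plain TransE.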
1904.09540 | 2950216376 | During the past few decades, knowledge bases (KBs) have experienced rapid growth. Nevertheless, most KBs still suffer from serious incompletion. Researchers proposed many tasks such as knowledge base completion and relation prediction to help build the representation of KBs. However, there are some issues unsettled towards enriching the KBs. Knowledge base completion and relation prediction assume that we know two elements of the fact triples and we are going to predict the missing one. This assumption is too restricted in practice and prevents it from discovering new facts directly. To address this issue, we propose a new task, namely, fact discovery from knowledge base. This task only requires that we know the head entity and the goal is to discover facts associated with the head entity. To tackle this new problem, we propose a novel framework that decomposes the discovery problem into several facet discovery components. We also propose a novel auto-encoder based facet component to estimate some facets of the fact. Besides, we propose a feedback learning component to share the information between each facet. We evaluate our framework using a benchmark dataset and the experimental results show that our framework achieves promising results. We also conduct extensive analysis of our framework in discovering different kinds of facts. The source code of this paper can be obtained from this https URL. | Semantic matching models such as RESCAL @cite_20, DistMult @cite_18, ComplEx @cite_3, HolE @cite_26 and ANALOGY @cite_23 score triples by semantic similarity. RESCAL simply models the score as a bilinear product of the head and tail entity embeddings, with the bilinear form defined by a matrix for each relation. However, the huge number of parameters makes the model prone to overfitting. To alleviate this issue, DistMult was proposed to restrict the relation matrix to be diagonal. 
However, DistMult cannot handle asymmetric relations. To tackle this problem, ComplEx was proposed, assuming that the embeddings of entities and relations lie in the space of complex numbers; this makes asymmetric relations expressible. Later, ANALOGY was proposed, which imposes restrictions on the relation matrices rather than constructing them from vectors, and it achieves state-of-the-art performance. Besides, @cite_15 @cite_14 @cite_30 @cite_1 @cite_17 @cite_7 conduct semantic matching with neural networks, where an energy function is used to jointly embed relations and entities. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_26",
"@cite_14",
"@cite_7",
"@cite_1",
"@cite_3",
"@cite_23",
"@cite_15",
"@cite_20",
"@cite_17"
],
"mid": [
"2964130576",
"2951077644",
"2145544171",
"2127426251",
"2303427901",
"68132019",
"2963432357",
"2964140943",
"2156954687",
"205829674",
"2016753842"
],
"abstract": [
"Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledgebase. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 75.8%.",
"We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (Socher et al., 2013) and TransE (Bordes et al., 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and/or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2% vs. 54.7% by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning.",
"Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HOLE) to learn compositional vector space representations of entire knowledge graphs. The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator, HOLE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets. Experimentally, we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction on knowledge graphs and relational learning benchmark datasets.",
"Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2 and 90.0 , respectively.",
"In this paper, we propose a new deep learning approach, called neural association model (NAM), for probabilistic reasoning in artificial intelligence. We propose to use neural networks to model association between any two events in a domain. Neural networks take one event as input and compute a conditional probability of the other event to model how likely these two events are to be associated. The actual meaning of the conditional probabilities varies between applications and depends on how the models are trained. In this work, as two case studies, we have investigated two NAM structures, namely deep neural networks (DNN) and relation-modulated neural nets (RMNN), on several probabilistic reasoning tasks in AI, including recognizing textual entailment, triple classification in multi-relational knowledge bases and commonsense reasoning. Experimental results on several popular datasets derived from WordNet, FreeBase and ConceptNet have all demonstrated that both DNNs and RMNNs perform equally well and they can significantly outperform the conventional methods available for these reasoning tasks. Moreover, compared with DNNs, RMNNs are superior in knowledge transfer, where a pre-trained model can be quickly extended to an unseen relation after observing only a few training samples. To further prove the effectiveness of the proposed models, in this work, we have applied NAMs to solving challenging Winograd Schema (WS) problems. Experiments conducted on a set of WS problems prove that the proposed models have the potential for commonsense reasoning.",
"Large-scale relational learning becomes crucial for handling the huge amounts of structured data generated daily in many application domains ranging from computational biology or information retrieval, to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced. The network is trained to encode the semantics of these graphs in order to assign high probabilities to plausible components. We empirically show that it reaches competitive performance in link prediction on standard datasets from the literature as well as on data from a real-world knowledge base (WordNet). In addition, we present how our method can be applied to perform word-sense disambiguation in a context of open-text semantic parsing, where the goal is to learn to assign a structured meaning representation to almost any sentence of free text, demonstrating that it can scale up to tens of thousands of nodes and thousands of types of relation.",
"In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.",
"",
"",
"Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.",
"Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods."
]
} |
1904.09720 | 2939564021 | We present two new datasets and a novel attention mechanism for Natural Language Inference (NLI). Existing neural NLI models, even when trained on existing large datasets, do not capture the notion of entity and role well and often end up making mistakes such as inferring "Peter signed a deal" from "John signed a deal". The two datasets have been developed to mitigate such issues and make the systems better at understanding the notions of "entities" and "roles". After training the existing architectures on the new datasets, we observe that the existing architectures do not perform well on one of the new benchmarks. We then propose a modification to the "word-to-word" attention function which has been uniformly reused across several popular NLI architectures. The resulting architectures perform as well as their unmodified counterparts on the existing benchmarks and perform significantly better on the new benchmarks for "roles" and "entities". | Many large labelled NLI datasets have been released so far. develop the first large labelled NLI dataset containing @math premise-hypothesis pairs. They show sample image captions to crowd-workers along with a label (entailment, contradiction or neutral) and ask the workers to write down a hypothesis for each of those three scenarios. As a result they obtain a high-agreement entailment dataset known as Stanford Natural Language Inference (SNLI). Since premises in SNLI contain only image captions, the dataset might contain sentences of limited genres. MultiNLI @cite_9 has been developed to address this issue. Unlike SNLI and MultiNLI, @cite_15 and @cite_21 consider multiple-choice question-answering as an NLI task to create the SciTail @cite_15 and QNLI @cite_21 datasets respectively. Recent datasets like PAWS @cite_16 , a paraphrase identification dataset, also help to advance the field of NLI. 
creates an NLI test set which shows the inability of current state-of-the-art systems to accurately perform inference requiring lexical and world knowledge. | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_16",
"@cite_15"
],
"mid": [
"2607892599",
"2891308403",
"2954508062",
"2788496822"
],
"abstract": [
"This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.",
"Existing datasets for natural language inference (NLI) have propelled research on language understanding. We propose a new method for automatically deriving NLI datasets from the growing abundance of large-scale question answering datasets. Our approach hinges on learning a sentence transformation model which converts question-answer pairs into their declarative forms. Despite being primarily trained on a single QA dataset, we show that it can be successfully applied to a variety of other QA resources. Using this system, we automatically derive a new freely available dataset of over 500k NLI examples (QA-NLI), and show that it exhibits a wide range of inference phenomena rarely seen in previous NLI datasets.",
"Existing paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases. Models trained on such data fail to distinguish pairs like flights from New York to Florida and flights from Florida to New York. This paper introduces PAWS (Paraphrase Adversaries from Word Scrambling), a new dataset with 108,463 well-formed paraphrase and non-paraphrase pairs with high lexical overlap. Challenging pairs are generated by controlled word swapping and back translation, followed by fluency and paraphrase judgments by human raters. State-of-the-art models trained on existing datasets have dismal performance on PAWS (<40 accuracy); however, including PAWS training data for these models improves their accuracy to 85 while maintaining performance on existing tasks. In contrast, models that do not capture non-local contextual information fail even with PAWS training examples. As such, PAWS provides an effective instrument for driving further progress on models that better exploit structure, context, and pairwise comparisons.",
""
]
} |
1904.09720 | 2939564021 | We present two new datasets and a novel attention mechanism for Natural Language Inference (NLI). Existing neural NLI models, even when trained on existing large datasets, do not capture the notion of entity and role well and often end up making mistakes such as inferring "Peter signed a deal" from "John signed a deal". The two datasets have been developed to mitigate such issues and make the systems better at understanding the notions of "entities" and "roles". After training the existing architectures on the new datasets, we observe that the existing architectures do not perform well on one of the new benchmarks. We then propose a modification to the "word-to-word" attention function which has been uniformly reused across several popular NLI architectures. The resulting architectures perform as well as their unmodified counterparts on the existing benchmarks and perform significantly better on the new benchmarks for "roles" and "entities". | Since the release of such large datasets, many advanced deep learning architectures have been developed @cite_7 @cite_3 @cite_12 @cite_0 @cite_1 @cite_4 @cite_18 @cite_17 @cite_13 @cite_14 @cite_20 @cite_6 @cite_8 @cite_15 @cite_19 @cite_10 . Although many of these deep learning models achieve close to human-level performance on the SNLI and MultiNLI datasets, these models can be easily deceived by simple adversarial examples. shows how simple linguistic variations such as negation or re-ordering of words deceive the DecAtt Model. goes on to show that this failure is attributed to the bias created as a result of crowd sourcing. They observe that crowd sourcing generates hypotheses that contain certain patterns that could help a classifier learn without the need to observe the premise at all. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_10",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_15",
"@cite_20",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2221711388",
"2496570145",
"2118463056",
"2308720496",
"2523467643",
"2914526845",
"2741420051",
"2172888184",
"2790415926",
"2415204069",
"2963341956",
"2788496822",
"2576562514",
"2413794162",
"2275485090",
"2267186426"
],
"abstract": [
"Natural language inference (NLI) is a fundamentally important task in natural language processing that has many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate learning-centered methods such as deep neural networks for natural language inference (NLI). In this paper, we propose a special long short-term memory (LSTM) architecture for NLI. Our model builds on top of a recently proposed neural attention model for NLI but is based on a significantly different idea. Instead of deriving sentence embeddings for the premise and the hypothesis to be used for classification, our solution uses a match-LSTM to perform word-by-word matching of the hypothesis with the premise. This LSTM is able to place more emphasis on important word-level matching results. In particular, we observe that this LSTM remembers important mismatches that are critical for predicting the contradiction or the neutral relationship label. On the SNLI corpus, our model achieves an accuracy of 86.1 , outperforming the state of the art.",
"Neural networks with recurrent or recursive architecture have shown promising results on various natural language processing (NLP) tasks. The recurrent and recursive architectures have their own strength and limitations. The recurrent networks process input text sequentially and model the conditional transition between word tokens. In contrast, the recursive networks explicitly model the compositionality and the recursive structure of natural language. Current recursive architecture is based on syntactic tree, thus limiting its practical applicability in different NLP applications. In this paper, we introduce a class of tree structured model, Neural Tree Indexers (NTI) that provides a middle ground between the sequential RNNs and the syntactic tree-based recursive models. NTI constructs a full n-ary tree by processing the input text with its node function in a bottom-up fashion. Attention mechanism can then be applied to both structure and different forms of node function. We demonstrated the effectiveness and the flexibility of a binary-tree model of NTI, showing the model achieved the state-of-the-art performance on three different NLP tasks: natural language inference, answer sentence selection, and sentence classification.",
"While most approaches to automatically recognizing entailment relations have used classifiers employing hand engineered features derived from complex natural language processing pipelines, in practice their performance has been only slightly better than bag-of-word pair classifiers using only lexical similarity. The only attempt so far to build an end-to-end differentiable neural network for entailment failed to outperform such a simple similarity classifier. In this paper, we propose a neural model that reads two sentences to determine entailment using long short-term memory units. We extend this model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases. Furthermore, we present a qualitative analysis of attention weights produced by this model, demonstrating such reasoning capabilities. On a large entailment dataset this model outperforms the previous best neural model and a classifier with engineered features by a substantial margin. It is the first generic end-to-end differentiable system that achieves state-of-the-art accuracy on a textual entailment dataset.",
"Tree-structured neural networks exploit valuable syntactic parse information as they interpret the meanings of sentences. However, they suer from two key technical problems that make them slow and unwieldyforlarge-scaleNLPtasks: theyusually operate on parsed sentences and they do not directly support batched computation. We address these issues by introducingtheStack-augmentedParser-Interpreter NeuralNetwork(SPINN),whichcombines parsing and interpretation within a single tree-sequence hybrid model by integrating tree-structured sentence interpretation into the linear sequential structure of a shiftreduceparser. Ourmodelsupportsbatched computation for a speedup of up to 25◊ over other tree-structured models, and its integrated parser can operate on unparsed data with little loss in accuracy. We evaluate it on the Stanford NLI entailment task and show that it significantly outperforms other sentence-encoding models.",
"Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is notoriously challenging but is fundamental to natural language understanding and many applications. With the availability of large annotated data, neural network models have recently advanced the field significantly. In this paper, we present a new state-of-the-art result, achieving the accuracy of 88.3 on the standard benchmark, the Stanford Natural Language Inference dataset. This result is achieved first through our enhanced sequential encoding model, which outperforms the previous best model that employs more complicated network architectures, suggesting that the potential of sequential LSTM-based models have not been fully explored yet in previous work. We further show that by explicitly considering recursive architectures, we achieve additional improvement. Particularly, incorporating syntactic parse information contributes to our best result; it improves the performance even when the parse information is added to an already very strong system.",
"In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks. MT-DNN not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations in order to adapt to new tasks and domains. MT-DNN extends the model proposed in (2015) by incorporating a pre-trained bidirectional transformer language model, known as BERT (, 2018). MT-DNN obtains new state-of-the-art results on ten NLU tasks, including SNLI, SciTail, and eight out of nine GLUE tasks, pushing the GLUE benchmark to 82.7 (2.2 absolute improvement). We also demonstrate using the SNLI and SciTail datasets that the representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than the pre-trained BERT representations. The code and pre-trained models are publicly available at this https URL.",
"",
"Hypernymy, textual entailment, and image captioning can be seen as special cases of a single visual-semantic hierarchy over words, sentences, and images. In this paper we advocate for explicitly modeling the partial order structure of this hierarchy. Towards this goal, we introduce a general method for learning ordered representations, and show how it can be applied to a variety of tasks involving images and language. We show that the resulting representations improve performance over current approaches for hypernym prediction and image-caption retrieval.",
"Large-scale datasets for natural language inference are created by presenting crowd workers with a sentence (premise), and asking them to generate three new sentences (hypotheses) that it entails, contradicts, or is logically neutral with respect to. We show that, in a significant portion of such data, this protocol leaves clues that make it possible to identify the label by looking only at the hypothesis, without observing the premise. Specifically, we show that a simple text categorization model can correctly classify the hypothesis alone in about 67 of SNLI (Bowman et. al, 2015) and 53 of MultiNLI (Williams et. al, 2017). Our analysis reveals that specific linguistic phenomena such as negation and vagueness are highly correlated with certain inference classes. Our findings suggest that the success of natural language inference models to date has been overestimated, and that the task remains a hard open problem.",
"In this paper, we proposed a sentence encoding-based model for recognizing text entailment. In our approach, the encoding of sentence is a two-stage process. Firstly, average pooling was used over word-level bidirectional LSTM (biLSTM) to generate a first-stage sentence representation. Secondly, attention mechanism was employed to replace average pooling on the same sentence for better representations. Instead of using target sentence to attend words in source sentence, we utilized the sentence's first-stage representation to attend words appeared in itself, which is called \"Inner-Attention\" in our paper . Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus has proved the effectiveness of \"Inner-Attention\" mechanism. With less number of parameters, our model outperformed the existing best sentence encoding-based approach by a large margin.",
"",
"",
"",
"",
"",
""
]
} |
1904.09585 | 2936905986 | The goal of homomorphic encryption is to encrypt data such that another party can operate on it without being explicitly exposed to the content of the original data. We introduce an idea for a privacy-preserving transformation on natural language data, inspired by homomorphic encryption. Our primary tool is obfuscation, relying on the properties of natural language. Specifically, a given text is obfuscated using a neural model that aims to preserve the syntactic relationships of the original sentence so that the obfuscated sentence can be parsed instead of the original one. The model works at the word level, and learns to obfuscate each word separately by changing it into a new word that has a similar syntactic role. The text encrypted by our model leads to better performance on three syntactic parsers (two dependency and one constituency parsers) in comparison to a strong random baseline. The substituted words have similar syntactic properties, but different semantic content, compared to the original words. | Another actively researched field is differential privacy @cite_5 , which also concerns protecting data privacy while the data is processed elsewhere. The main purpose of differential privacy is to enable distribution of the data as a training dataset while at the same time protecting individuals from being identified based on their records in the dataset. There is also recent research that brings differential privacy into natural language processing, such as @cite_6 , which targets the removal of authorship identity using differential privacy and the bag-of-words privacy mechanism. | {
"cite_N": [
"@cite_5",
"@cite_6"
],
"mid": [
"2109426455",
"2901147448"
],
"abstract": [
"Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.",
"We address the problem of how to “obfuscate” texts by removing stylistic clues which can identify authorship, whilst preserving (as much as possible) the content of the text. In this paper we combine ideas from “generalised differential privacy” and machine learning techniques for text processing to model privacy for text documents. We define a privacy mechanism that operates at the level of text documents represented as “bags-of-words”—these representations are typical in machine learning and contain sufficient information to carry out many kinds of classification tasks including topic identification and authorship attribution (of the original documents). We show that our mechanism satisfies privacy with respect to a metric for semantic similarity, thereby providing a balance between utility, defined by the semantic content of texts, with the obfuscation of stylistic clues. We demonstrate our implementation on a “fan fiction” dataset, confirming that it is indeed possible to disguise writing style effectively whilst preserving enough information and variation for accurate content classification tasks. We refer the reader to our complete paper [15] which contains full proofs and further experimentation details."
]
} |
1904.09585 | 2936905986 | The goal of homomorphic encryption is to encrypt data such that another party can operate on it without being explicitly exposed to the content of the original data. We introduce an idea for a privacy-preserving transformation on natural language data, inspired by homomorphic encryption. Our primary tool is obfuscation, relying on the properties of natural language. Specifically, a given text is obfuscated using a neural model that aims to preserve the syntactic relationships of the original sentence so that the obfuscated sentence can be parsed instead of the original one. The model works at the word level, and learns to obfuscate each word separately by changing it into a new word that has a similar syntactic role. The text encrypted by our model leads to better performance on three syntactic parsers (two dependency and one constituency parsers) in comparison to a strong random baseline. The substituted words have similar syntactic properties, but different semantic content, compared to the original words. | Homomorphic encryption has long been an important topic in cryptography, and it has been borrowed into the field of privacy preservation in machine learning, particularly for designing neural networks which enable homomorphic operations over encrypted data @cite_9 @cite_0 . For example, designed a fully homomorphically encrypted convolutional neural network that was able to solve the MNIST classification task with practical efficiency and accuracy. However, since the scheme of direct homomorphic encryption is not perfect, the constraint on multiplication depth makes deep models intractable, and to the best of our knowledge, no prior work has demonstrated that homomorphic encryption can be directly applied to the design of recurrent neural networks or to discrete tokens as input. | {
"cite_N": [
"@cite_0",
"@cite_9"
],
"mid": [
"2794888826",
"2768347741"
],
"abstract": [
"The rise of machine learning as a service multiplies scenarios where one faces a privacy dilemma: either sensitive user data must be revealed to the entity that evaluates the cognitive model (e.g., in the Cloud), or the model itself must be revealed to the user so that the evaluation can take place locally. Fully Homomorphic Encryption (FHE) offers an elegant way to reconcile these conflicting interests in the Cloud-based scenario and also preserve non-interactivity. However, due to the inefficiency of existing FHE schemes, most applications prefer to use Somewhat Homomorphic Encryption (SHE), where the complexity of the computation to be performed has to be known in advance, and the efficiency of the scheme depends on this global complexity.",
"Machine learning algorithms based on deep neural networks have achieved remarkable results and are being extensively used in different domains. However, the machine learning algorithms requires access to raw data which is often privacy sensitive. To address this issue, we develop new techniques to provide solutions for running deep neural networks over encrypted data. In this paper, we develop new techniques to adopt deep neural networks within the practical limitation of current homomorphic encryption schemes. More specifically, we focus on classification of the well-known convolutional neural networks (CNN). First, we design methods for approximation of the activation functions commonly used in CNNs (i.e. ReLU, Sigmoid, and Tanh) with low degree polynomials which is essential for efficient homomorphic encryption schemes. Then, we train convolutional neural networks with the approximation polynomials instead of original activation functions and analyze the performance of the models. Finally, we implement convolutional neural networks over encrypted data and measure performance of the models. Our experimental results validate the soundness of our approach with several convolutional neural networks with varying number of layers and structures. When applied to the MNIST optical character recognition tasks, our approach achieves 99.52 accuracy which significantly outperforms the state-of-the-art solutions and is very close to the accuracy of the best non-private version, 99.77 . Also, it can make close to 164000 predictions per hour. We also applied our approach to CIFAR-10, which is much more complex compared to MNIST, and were able to achieve 91.5 accuracy with approximation polynomials used as activation functions. These results show that CryptoDL provides efficient, accurate and scalable privacy-preserving predictions."
]
} |
1904.09751 | 2938704169 | Despite considerable advancements with deep neural language models, the enigma of neural text degeneration persists when these models are tested as text generators. The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, using likelihood as a decoding objective leads to text that is bland and strangely repetitive. In this paper, we reveal surprising distributional differences between human text and machine text. In addition, we find that decoding strategies alone can dramatically affect the quality of machine text, even when generated from exactly the same neural language model. Our findings motivate Nucleus Sampling, a simple but effective method to draw the best out of neural generation. By sampling text from the dynamic nucleus of the probability distribution, which allows for diversity while effectively truncating the less reliable tail of the distribution, the resulting text better demonstrates the quality of human text, yielding enhanced diversity without sacrificing fluency and coherence. | One of the most prominent recent research directions in open-ended text generation has been using generative adversarial networks [GANs;][] Yu2017SeqGAN,xu2018diversity . A number of metrics (based on BLEU and cross entropy) have been proposed to quantify the diversity and quality of open-ended generations @cite_5 @cite_22 @cite_8 . However, these evaluations were usually performed for sentence generation, while we focused on generating larger coherent text passages. Recent work has shown that when both quality and diversity are considered, GAN-generated text is substantially worse than language model generations @cite_5 @cite_2 @cite_9 . | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_9",
"@cite_2",
"@cite_5"
],
"mid": [
"2963456134",
"2799184518",
"2807747378",
"2963595537",
"2900260828"
],
"abstract": [
"We introduce Texygen, a benchmarking platform to support research on open-domain text generation models. Texygen has not only implemented a majority of text generation models, but also covered a set of metrics that evaluate the diversity, the quality and the consistency of the generated texts. The Texygen platform could help standardize the research on text generation and improve the reproductivity and reliability of future research work in text generation.",
"In this paper, we study recent neural generative models for text generation related to variational autoencoders. Previous works have employed various techniques to control the prior distribution of the latent codes in these models, which is important for sampling performance, but little attention has been paid to reconstruction error. In our study, we follow a rigorous evaluation protocol using a large set of previously used and novel automatic and human evaluation metrics, applied to both generated samples and reconstructions. We hope that it will become the new evaluation standard when comparing neural generative models for text.",
"Generative Adversarial Networks (GANs) are a promising approach to language generation. The latest works introducing novel GAN models for language generation use n-gram based metrics for evaluation and only report single scores of the best run. In this paper, we argue that this often misrepresents the true picture and does not tell the full story, as GAN models can be extremely sensitive to the random initialization and small deviations from the best hyperparameter choice. In particular, we demonstrate that the previously used BLEU score is not sensitive to semantic deterioration of generated texts and propose alternative metrics that better capture the quality and diversity of the generated samples. We also conduct a set of experiments comparing a number of GAN models for text with a conventional Language Model (LM) and find that neither of the considered models performs convincingly better than the LM.",
"",
"Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks. Maximum-Likelihood (MLE) models trained with teacher forcing have constantly been reported as weak baselines, where poor performance is attributed to exposure bias; at inference time, the model is fed its own prediction instead of a ground-truth token, which can lead to accumulating errors and poor samples. This line of reasoning has led to an outbreak of adversarial based approaches for NLG, on the account that GANs do not suffer from exposure bias. In this work, we make several surprising observations with contradict common beliefs. We first revisit the canonical evaluation framework for NLG, and point out fundamental flaws with quality-only evaluation: we show that one can outperform such metrics using a simple, well-known temperature parameter to artificially reduce the entropy of the model's conditional distributions. Second, we leverage the control over the quality diversity tradeoff given by this parameter to evaluate models over the whole quality-diversity spectrum, and find MLE models constantly outperform the proposed GAN variants, over the whole quality-diversity space. Our results have several implications: 1) The impact of exposure bias on sample quality is less severe than previously thought, 2) temperature tuning provides a better quality diversity trade off than adversarial training, while being easier to train, easier to cross-validate, and less computationally expensive."
]
} |
1904.09709 | 2952041458 | Arbitrary attribute editing generally can be tackled by incorporating encoder-decoder and generative adversarial networks. However, the bottleneck layer in encoder-decoder usually gives rise to blurry and low quality editing result. And adding skip connections improves image quality at the cost of weakened attribute manipulation ability. Moreover, existing methods exploit target attribute vector to guide the flexible translation to desired target domain. In this work, we suggest to address these issues from selective transfer perspective. Considering that specific editing task is certainly only related to the changed attributes instead of all target attributes, our model selectively takes the difference between target and source attribute vectors as input. Furthermore, selective transfer units are incorporated with encoder-decoder to adaptively select and modify encoder feature for enhanced attribute editing. Experiments show that our method (i.e., STGAN) simultaneously improves attribute manipulation accuracy as well as perception quality, and performs favorably against state-of-the-arts in arbitrary facial attribute editing and season translation. | In their pioneer work @cite_2 , Hinton and Zemel proposed an autoencoder network, which consists of an encoder to map the input into a latent code and a decoder to recover the input from the code. Subsequently, denoising autoencoders @cite_30 were presented to learn representations robust to partial corruption. Kingma and Welling @cite_25 suggested a Variational Autoencoder (VAE), which validates the feasibility of the encoder-decoder architecture to generate unseen images. Recent studies show that skip connections @cite_31 @cite_16 between encoder and decoder layers usually benefit the training stability and visual quality of generated images. However, as discussed in Sec. 
, skip connections actually improve image quality at the cost of weakened attribute manipulation ability, and should be carefully used in arbitrary attribute editing. | {
"cite_N": [
"@cite_30",
"@cite_2",
"@cite_31",
"@cite_16",
"@cite_25"
],
"mid": [
"2025768430",
"2102409316",
"1901129140",
"2963073614",
""
],
"abstract": [
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.",
"An autoencoder network uses a set of recognition weights to convert an input vector into a code vector. It then uses a set of generative weights to convert the code vector into an approximate reconstruction of the input vector. We derive an objective function for training autoencoders based on the Minimum Description Length (MDL) principle. The aim is to minimize the information required to describe both the code vector and the reconstruction error. We show that this information is minimized by choosing code vectors stochastically according to a Boltzmann distribution, where the generative weights define the energy of each possible code vector given the input vector. Unfortunately, if the code vectors use distributed representations, it is exponentially expensive to compute this Boltzmann distribution because it involves all possible code vectors. We show that the recognition weights of an autoencoder can be used to compute an approximation to the Boltzmann distribution and that this approximation gives an upper bound on the description length. Even when this bound is poor, it can be used as a Lyapunov function for learning both the generative and the recognition weights. We demonstrate that this approach can be used to learn factorial codes.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.",
""
]
} |
1904.09709 | 2952041458 | Arbitrary attribute editing generally can be tackled by incorporating encoder-decoder and generative adversarial networks. However, the bottleneck layer in encoder-decoder usually gives rise to blurry and low quality editing result. And adding skip connections improves image quality at the cost of weakened attribute manipulation ability. Moreover, existing methods exploit target attribute vector to guide the flexible translation to desired target domain. In this work, we suggest to address these issues from selective transfer perspective. Considering that specific editing task is certainly only related to the changed attributes instead of all target attributes, our model selectively takes the difference between target and source attribute vectors as input. Furthermore, selective transfer units are incorporated with encoder-decoder to adaptively select and modify encoder feature for enhanced attribute editing. Experiments show that our method (i.e., STGAN) simultaneously improves attribute manipulation accuracy as well as perception quality, and performs favorably against state-of-the-arts in arbitrary facial attribute editing and season translation. | GAN @cite_9 @cite_38 was originally proposed to generate images from random noise, and generally consists of a generator and a discriminator, which are trained in an adversarial manner and suffer from the mode collapse problem. Recently, enormous efforts have been devoted to improving the stability of learning. In @cite_23 @cite_32 , the Wasserstein-1 distance and gradient penalty are suggested to improve the stability of the optimization process. In @cite_12 , the VAE decoder and GAN generator are collapsed into one model and optimized by both reconstruction and adversarial losses. Conditional GAN (cGAN) @cite_0 @cite_16 takes a conditional variable as input to the generator and discriminator to generate images with desired properties. 
As a result, GAN has become one of the most prominent models for versatile image generation @cite_9 @cite_38 , translation @cite_16 @cite_26 , restoration @cite_19 @cite_27 and editing @cite_28 tasks. | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_28",
"@cite_9",
"@cite_32",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_16",
"@cite_12"
],
"mid": [
"",
"2962793481",
"2963420272",
"",
"2962879692",
"2125389028",
"2963470893",
"2963089432",
"",
"2963073614",
"2964167449"
],
"abstract": [
"",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.",
"",
"Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.",
"This paper studies the problem of blind face restoration from an unconstrained blurry, noisy, low-resolution, or compressed image (i.e., degraded observation). For better recovery of fine facial details, we modify the problem setting by taking both the degraded observation and a high-quality guided image of the same identity as input to our guided face restoration network (GFRNet). However, the degraded observation and guided image generally are different in pose, illumination and expression, thereby making plain CNNs (e.g., U-Net) fail to recover fine and identity-aware facial details. To tackle this issue, our GFRNet model includes both a warping subnetwork (WarpNet) and a reconstruction subnetwork (RecNet). The WarpNet is introduced to predict flow field for warping the guided image to correct pose and expression (i.e., warped guidance), while the RecNet takes the degraded observation and warped guidance as input to produce the restoration result. Due to that the ground-truth flow field is unavailable, landmark loss together with total variation regularization are incorporated to guide the learning of WarpNet. Furthermore, to make the model applicable to blind restoration, our GFRNet is trained on the synthetic data with versatile settings on blur kernel, noise level, downsampling scale factor, and JPEG quality factor. Experiments show that our GFRNet not only performs favorably against the state-of-the-art image and face restoration methods, but also generates visually photo-realistic results on real degraded facial images.",
"",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.",
"We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder (VAE) with a generative adversarial network (GAN) we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic."
]
} |
1904.09709 | 2952041458 | Arbitrary attribute editing generally can be tackled by incorporating encoder-decoder and generative adversarial networks. However, the bottleneck layer in encoder-decoder usually gives rise to blurry and low quality editing result. And adding skip connections improves image quality at the cost of weakened attribute manipulation ability. Moreover, existing methods exploit target attribute vector to guide the flexible translation to desired target domain. In this work, we suggest to address these issues from selective transfer perspective. Considering that specific editing task is certainly only related to the changed attributes instead of all target attributes, our model selectively takes the difference between target and source attribute vectors as input. Furthermore, selective transfer units are incorporated with encoder-decoder to adaptively select and modify encoder feature for enhanced attribute editing. Experiments show that our method (i.e., STGAN) simultaneously improves attribute manipulation accuracy as well as perception quality, and performs favorably against state-of-the-arts in arbitrary facial attribute editing and season translation. | Image-to-image translation aims at learning cross-domain mapping in supervised or unsupervised settings. Isola et al. @cite_16 presented a unified pix2pix framework for learning image-to-image translation from paired data. Improved network architectures, e.g., cascaded refinement networks @cite_18 and pix2pixHD @cite_13 , are then developed to improve the visual quality of synthesized images. As for unpaired image-to-image translation, additional constraints, e.g., cycle consistency @cite_26 and shared latent space @cite_33 , are suggested to alleviate the inherent ill-posedness of the task. Nonetheless, arbitrary attribute editing actually is a multi-domain image-to-image translation problem, and cannot be solved with scalability by aforementioned methods. 
To address this issue, @cite_4 and @cite_15 decouple generators by learning domain-specific encoders/decoders with a shared latent space, but are still limited in scaling to change multiple attributes of an image. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_33",
"@cite_15",
"@cite_16",
"@cite_13"
],
"mid": [
"2963522749",
"2962793481",
"2963344645",
"2962947361",
"2774003717",
"2963073614",
"2963800363"
],
"abstract": [
"We present an approach to synthesizing photographic images conditioned on semantic layouts. Given a semantic label map, our approach produces an image with photographic appearance that conforms to the input layout. The approach thus functions as a rendering engine that takes a two-dimensional semantic specification of the scene and produces a corresponding photographic image. Unlike recent and contemporaneous work, our approach does not rely on adversarial training. We show that photographic images can be synthesized from semantic layouts by a single feedforward network with appropriate structure, trained end-to-end with a direct regression objective. The presented approach scales seamlessly to high resolutions; we demonstrate this by synthesizing photographic images at 2-megapixel resolution, the full resolution of our training data. Extensive perceptual experiments on datasets of outdoor and indoor scenes demonstrate that images synthesized by the presented approach are considerably more realistic than alternative approaches.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"The past year alone has seen unprecedented leaps in the area of learning-based image translation, namely Cycle-GAN, by But experiments so far have been tailored to merely two domains at a time, and scaling them to more would require an quadratic number of models to be trained. And with two-domain models taking days to train on current hardware, the number of domains quickly becomes limited by the time and resources required to process them. In this paper, we propose a multi-component image translation model and training scheme which scales linearly - both in resource consumption and time required - with the number of domains. We demonstrate its capabilities on a dataset of paintings by 14 different artists and on images of the four different seasons in the Alps. Note that 14 data groups would need (14 choose 2) = 91 different CycleGAN models: a total of 182 generator discriminator pairs; whereas our model requires only 14 generator discriminator pairs.",
"Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets. Code and additional results are available in https: github.com mingyuliutw unit.",
"Unsupervised Image-to-Image Translation achieves spectacularly advanced developments nowadays. However, recent approaches mainly focus on one model with two domains, which may face heavy burdens with large cost of @math training time and model parameters, under such a requirement that @math domains are freely transferred to each other in a general setting. To address this problem, we propose a novel and unified framework named Domain-Bank, which consists of a global shared auto-encoder and @math domain-specific encoders decoders, assuming that a universal shared-latent sapce can be projected. Thus, we yield @math complexity in model parameters along with a huge reduction of the time budgets. Besides the high efficiency, we show the comparable (or even better) image translation results over state-of-the-arts on various challenging unsupervised image translation tasks, including face image translation, fashion-clothes translation and painting style translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on digit benchmark datasets. Further, thanks to the explicit representation of the domain-specific decoders as well as the universal shared-latent space, it also enables us to conduct incremental learning to add a new domain encoder decoder. Linear combination of different domains' representations is also obtained by fusing the corresponding decoders.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.",
"We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048 A— 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing."
]
} |
1904.09539 | 2937465597 | Achieving high throughput and reliability in underwater acoustic networks for transmitting distributed and large volume of data is a challenging task due to the bandwidth-limited and unpredictable nature of the acoustic channel. In a multi-node network, such as in the Internet of Underwater Things (IoUT), communication link efficiency varies dynamically: if the channel is not in good condition, e.g., when in deep fade, channel coding techniques may fail to deliver the information even with multiple retransmissions. Hence, an efficient and agile collaborative strategy is required to allocate appropriate resources to the communication links based on their status. The proposed solution adjusts the physical and link-layer parameters collaboratively for a Code Division Multiple Access (CDMA)-based underwater network. An adaptive Hybrid Automatic Repeat Request (HARQ) solution is employed to guarantee reliable communications against errors in poor links. Results were validated using data collected from the LOON testbed-hosted at the NATO STO Centre for Maritime Research and Experimentation (CMRE) in La Spezia, Italy-and from the REP18-Atlantic sea trial conducted in Sept'18 in Portuguese water. | In practical experiments---when the channel is usually error-prone and therefore unreliable---multiple rounds of retransmissions should be performed to deliver the intended data; consequently, a huge amount of time is wasted given the long propagation delay in the underwater channel. Therefore, a proper combination of ARQ and FEC is required in an efficient scheme to overcome the mentioned problems. This combination of ARQ and FEC leads to a hybrid approach, i.e., HARQ, which reduces the number of packet retransmissions and increases the system reliability, especially under poor channel conditions. 
If the data is not decodable, the receiver sends back a Negative Acknowledgement (NACK) to the transmitter and asks for additional duplicated FEC, which eventually increases the probability of successful transmission @cite_18 . However, if the channel is very noisy, even using multiple retransmissions may not work. In truncated ARQ/HARQ, the number of retransmissions is limited. Therefore, the receiver might drop the data, which detrimentally affects the throughput of the network. | {
"cite_N": [
"@cite_18"
],
"mid": [
"80633931"
],
"abstract": [
"In mobile packet data and voice networks, a special coding scheme, known as the incremental redundancy hybrid ARQ (IR HARQ), achieves higher throughput efficiency than ordinary turbo codes by adapting its error correcting code redundancy to fluctuating channel conditions characteristic for this application. An IR HARQ protocol operates as follows. Initially, the information bits are encoded by a “mother” code and a selected number of parity bits are transmitted. If a retransmission is requested, only additional selected parity bits are transmitted. At the receiving end, the additional parity bits are combined with the previously received bits, allowing for an increase in the error correction capacity. This procedure is repeated after each subsequent retransmission request until the entire codeword of the mother code is transmitted. A number of important issues such as error rate performance after each transmission on time varying channels, and rate and power control are difficult to analyze in a network employing a particular HARQ scheme, i.e., a given mother code and given selection of bits for each transmission. By relaxing only the latter constraint, namely, by allowing random selection of the bits for each transmission, we provide very good estimates of error rates allowing us to address to a certain extent the rate and power control problem."
]
} |
1904.09539 | 2937465597 | Achieving high throughput and reliability in underwater acoustic networks for transmitting distributed and large volume of data is a challenging task due to the bandwidth-limited and unpredictable nature of the acoustic channel. In a multi-node network, such as in the Internet of Underwater Things (IoUT), communication link efficiency varies dynamically: if the channel is not in good condition, e.g., when in deep fade, channel coding techniques may fail to deliver the information even with multiple retransmissions. Hence, an efficient and agile collaborative strategy is required to allocate appropriate resources to the communication links based on their status. The proposed solution adjusts the physical and link-layer parameters collaboratively for a Code Division Multiple Access (CDMA)-based underwater network. An adaptive Hybrid Automatic Repeat Request (HARQ) solution is employed to guarantee reliable communications against errors in poor links. Results were validated using data collected from the LOON testbed-hosted at the NATO STO Centre for Maritime Research and Experimentation (CMRE) in La Spezia, Italy-and from the REP18-Atlantic sea trial conducted in Sept'18 in Portuguese water. | A type-I HARQ discards the erroneous received packet after a failed attempt to correct it; the transmitter then repeats the same packet until the error is corrected. This method might be inefficient in the time-varying underwater acoustic channel. When the channel is in good condition, i.e., retransmission is not required, more FEC information is transmitted than necessary, so the throughput drops. On the other hand, if the channel is not in good condition, e.g., when in deep fade, the pre-defined FEC might not be adequate and the throughput drops again because of multiple retransmissions @cite_3 . | {
"cite_N": [
"@cite_3"
],
"mid": [
"2117197970"
],
"abstract": [
"Acoustic modems typically operate in half-duplex, which limits the choice of a data link control protocol to the Stop and Wait (S&W) type. Unfortunately, on channels with poor quality and long propagation delay-such as the majority of acoustic channels-S&W protocol has low throughput efficiency. The basic S&W can be improved by using a modification in which packets are transmitted in groups and acknowledged selectively. Throughput efficiency can now be maximized by selecting the optimal packet size, which is a function of range, rate, and error probability. Quantitative analysis for typical acoustic links shows that modified S&W protocols offer good performance, provided that packet size is chosen close to optimal. In addition, as the group size increases, sensitivity to packet size selection is reduced. To ensure best ARQ performance in mobile acoustic systems where link conditions vary with time, future generation of acoustic modems must focus on adaptive selection of protocol parameters."
]
} |
1904.09539 | 2937465597 | Achieving high throughput and reliability in underwater acoustic networks for transmitting distributed and large volume of data is a challenging task due to the bandwidth-limited and unpredictable nature of the acoustic channel. In a multi-node network, such as in the Internet of Underwater Things (IoUT), communication link efficiency varies dynamically: if the channel is not in good condition, e.g., when in deep fade, channel coding techniques may fail to deliver the information even with multiple retransmissions. Hence, an efficient and agile collaborative strategy is required to allocate appropriate resources to the communication links based on their status. The proposed solution adjusts the physical and link-layer parameters collaboratively for a Code Division Multiple Access (CDMA)-based underwater network. An adaptive Hybrid Automatic Repeat Request (HARQ) solution is employed to guarantee reliable communications against errors in poor links. Results were validated using data collected from the LOON testbed-hosted at the NATO STO Centre for Maritime Research and Experimentation (CMRE) in La Spezia, Italy-and from the REP18-Atlantic sea trial conducted in Sept'18 in Portuguese water. | Using numerical simulations, the authors in @cite_14 used random linear packet coding to control packet loss in a hierarchical definition of packets in the stop-and-wait ARQ protocol for channels with long propagation delays. In @cite_28 , the authors applied fountain codes to HARQ in underwater networks to reduce retransmissions and achieve optimal broadcasting policies. An adaptive coding approach based on IR-HARQ was proposed in @cite_15 to improve the packet error rate in a time-slotted underwater acoustic network. In @cite_27 , we proposed a scheme based on HARQ that exploits the diversity gain offered by independent links of an underwater acoustic Multiple Input Multiple Output (MIMO) channel. 
A large number of papers can be found in the literature that investigate the efficiency of point-to-point HARQ, especially in the terrestrial environment. | {
"cite_N": [
"@cite_28",
"@cite_27",
"@cite_15",
"@cite_14"
],
"mid": [
"2057495177",
"2327828810",
"2092576738",
"2884551761"
],
"abstract": [
"This paper explores hybrid ARQ policies based on fountain codes for the transmission of multicast messages in underwater channels. These rateless codes are considered because of two nice properties, namely, they are computationally lightweight and do not require to know the channel erasure probabilities at the receivers prior to transmission. In this paper, these codes are used together with a Stop and Wait ARQ to enhance the performance of broadcast communications. First, we present a dynamic programming model for the characterization of optimal broadcasting policies. The derived broadcasting rules are then compared against plain ARQ schemes via Monte-Carlo simulation. Our results show that digital fountain codes are a promising technique for the transmission over underwater channels as their performance, in terms of delay, reliability and energy efficiency, clearly dominates that of plain ARQ solutions. This paper is a preliminary study on the topic and encourages us towards the design of practical HARQ protocols for the underwater medium.",
"Achieving robust and reliable data communications in the harsh underwater environment is still a challenging issue due to the fast-changing and unpredictable nature of the acoustic channel. Notwithstanding the use of acknowledgement messages at the link layer, data transmission protocols still require more efficient error control strategies so to achieve a higher link reliability. A novel solution based on hybrid automatic repeat request (HARQ) is proposed that exploits the diversity gain offered by independent links in an underwater acoustic Multiple Input Multiple Output (MIMO) system. Irrespective of the channel regime, the proposed scheme aims at reducing the probability of retransmission and at increasing the link reliability via packet-level codeword selection and waterfilling coding. Encoding and decoding algorithms at the transmitter and receiver, respectively, are designed, multiple evaluation metrics are defined, and computer-based simulation results are presented to quantify the performance improvement of the proposed methods.",
"Underwater acoustic communication networks (UWANs) have recently attracted much attention in the research community. Two properties that set UWANs apart from most radio-frequency wireless communication networks are the long propagation delay and the possible sparsity of the network topology. This in turn offers opportunities to optimize throughput through time and spatial reuse. In this paper, we propose a new adaptive coding method to realize the former. We consider time-slotted scheduling protocols, which are a popular solution for contention-free and interference-free access in small-scale UWANs, and exploit the surplus guard time that occurs for individual links for improving transmission reliability. In particular, using link distances as side information, transmitters utilize the available portion of the time slot to adapt their code rate and increase reliability. Since increased reliability trades off with energy consumption per transmission, we optimize the code rate for best tradeoff, considering both single and multiple packet transmission using the incremental redundancy hybrid automatic repeat request (IR–HARQ) protocol. For practical implementation of this adaptive coding scheme, we consider punctured and rateless codes. Simulation results demonstrate the gains achieved by our coding scheme over fixed-rate error-correction codes in terms of both throughput and consumption of transmitted energy per successfully delivered packet. We also report results from a sea trial conducted at the Haifa harbor, which corroborate the simulations.",
""
]
} |
1904.09793 | 2936435387 | Point cloud based retrieval for place recognition is an emerging problem in the vision field. The main challenge is how to find an efficient way to encode the local features into a discriminative global descriptor. In this paper, we propose a Point Contextual Attention Network (PCAN), which can predict the significance of each local point feature based on point context. Our network makes it possible to pay more attention to the task-relevant features when aggregating local features. Experiments on various benchmark datasets show that the proposed network outperforms current state-of-the-art approaches. | Handcrafted 3D Descriptors. Extracting robust local geometric descriptors has been a core problem in 3D vision. Some classic descriptors were developed in the early years, such as Spin Images @cite_23 and Geometry Histograms @cite_26 . Recent works include Point Feature Histograms (PFH) @cite_1 , Fast Point Feature Histograms (FPFH) @cite_28 , and the Signature of Histograms of Orientations (SHOT) @cite_5 . Some of these descriptors are already included in PCL @cite_21 . Most of these hand-crafted descriptors are designed for specific tasks and are sensitive to the noisy, incomplete RGB-D images captured by sensors. Besides, these methods focus on extracting local descriptors, which are not applicable to extracting global features due to their huge computational cost. | {
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_21",
"@cite_1",
"@cite_23",
"@cite_5"
],
"mid": [
"1564871316",
"2160821342",
"2152864241",
"2098764590",
"2099606917",
"1972485825"
],
"abstract": [
"Recognition of three dimensional (3D) objects in noisy and cluttered scenes is a challenging problem in 3D computer vision. One approach that has been successful in past research is the regional shape descriptor. In this paper, we introduce two new regional shape descriptors: 3D shape contexts and harmonic shape contexts. We evaluate the performance of these descriptors on the task of recognizing vehicles in range scans of scenes using a database of 56 cars. We compare the two novel descriptors to an existing descriptor, the spin image, showing that the shape context based descriptors have a higher recognition rate on noisy scenes and that 3D shape contexts outperform the others on cluttered scenes.",
"In our recent work [1], [2], we proposed Point Feature Histograms (PFH) as robust multi-dimensional features which describe the local geometry around a point p for 3D point cloud datasets. In this paper, we modify their mathematical expressions and perform a rigorous analysis on their robustness and complexity for the problem of 3D registration for overlapping point cloud views. More concretely, we present several optimizations that reduce their computation times drastically by either caching previously computed values or by revising their theoretical formulations. The latter results in a new type of local features, called Fast Point Feature Histograms (FPFH), which retain most of the discriminative power of the PFH. Moreover, we propose an algorithm for the online computation of FPFH features for realtime applications. To validate our results we demonstrate their efficiency for 3D registration and propose a new sample consensus based method for bringing two datasets into the convergence basin of a local non-linear optimizer: SAC-IA (SAmple Consensus Initial Alignment).",
"With the advent of new, low-cost 3D sensing hardware such as the Kinect, and continued efforts in advanced point cloud processing, 3D perception gains more and more importance in robotics, as well as other fields. In this paper we present one of our most recent initiatives in the areas of point cloud perception: PCL (Point Cloud Library - http: pointclouds.org). PCL presents an advanced and extensive approach to the subject of 3D perception, and it's meant to provide support for all the common 3D building blocks that applications need. The library contains state-of-the-art algorithms for: filtering, feature estimation, surface reconstruction, registration, model fitting and segmentation. PCL is supported by an international community of robotics and perception researchers. We provide a brief walkthrough of PCL including its algorithmic capabilities and implementation strategies.",
"In this paper we investigate the usage of persistent point feature histograms for the problem of aligning point cloud data views into a consistent global model. Given a collection of noisy point clouds, our algorithm estimates a set of robust 16D features which describe the geometry of each point locally. By analyzing the persistence of the features at different scales, we extract an optimal set which best characterizes a given point cloud. The resulted persistent features are used in an initial alignment algorithm to estimate a rigid transformation that approximately registers the input datasets. The algorithm provides good starting points for iterative registration algorithms such as ICP (Iterative Closest Point), by transforming the datasets to its convergence basin. We show that our approach is invariant to pose and sampling density, and can cope well with noisy data coming from both indoor and outdoor laser scans.",
"We present a 3D shape-based object recognition system for simultaneous recognition of multiple objects in scenes containing clutter and occlusion. Recognition is based on matching surfaces by matching points using the spin image representation. The spin image is a data level shape descriptor that is used to match surfaces represented as surface meshes. We present a compression scheme for spin images that results in efficient multiple object recognition which we verify with results showing the simultaneous recognition of multiple objects from a library of 20 models. Furthermore, we demonstrate the robust performance of recognition in the presence of clutter and occlusion through analysis of recognition trials on 100 scenes.",
"This paper presents a local 3D descriptor for surface matching dubbed SHOT. Our proposal stems from a taxonomy of existing methods which highlights two major approaches, referred to as Signatures and Histograms, inherently emphasizing descriptiveness and robustness respectively. We formulate a comprehensive proposal which encompasses a repeatable local reference frame as well as a 3D descriptor, the latter featuring a hybrid structure between Signatures and Histograms so as to aim at a more favorable balance between descriptive power and robustness. A quite peculiar trait of our method concerns seamless integration of multiple cues within the descriptor to improve distinctiveness, which is particularly relevant nowadays due to the increasing availability of affordable RGB-D sensors which can gather both depth and color information. A thorough experimental evaluation based on datasets acquired with different types of sensors, including a novel RGB-D dataset, vouches that SHOT outperforms state-of-the-art local descriptors in experiments addressing descriptor matching for object recognition, 3D reconstruction and shape retrieval."
]
} |
1904.09793 | 2936435387 | Point cloud based retrieval for place recognition is an emerging problem in the vision field. The main challenge is how to find an efficient way to encode the local features into a discriminative global descriptor. In this paper, we propose a Point Contextual Attention Network (PCAN), which can predict the significance of each local point feature based on point context. Our network makes it possible to pay more attention to the task-relevant features when aggregating local features. Experiments on various benchmark datasets show that the proposed network outperforms current state-of-the-art approaches. | Learned 3D Global Descriptors. With the breakthroughs in learning-based 2D vision tasks over the past few years, e.g., image classification and object detection, more and more researchers have focused on representing 3D geometry using learning methods. In the early days, several works used volumetric representations as network input and developed learned descriptors for object retrieval and classification @cite_20 @cite_34 @cite_18 . Recently, researchers have shifted to using raw point clouds @cite_32 @cite_2 @cite_39 . PointNet @cite_0 directly handles point clouds and uses a symmetric function to make the output invariant to the order permutation of the input points. PointNet++ @cite_36 leverages neighborhoods at multiple scales to capture local structures. Several network architectures for point clouds have since been proposed, mainly for classification and segmentation. @cite_22 proposes Parametric Continuous Convolution, a new learnable operator that operates over non-grid structured data. All these works focus on extracting features from 3D data at a global level, but most of them aim at handling complete 3D models instead of 3D scanned data, which is incomplete and noisy. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_36",
"@cite_32",
"@cite_39",
"@cite_0",
"@cite_2",
"@cite_34",
"@cite_20"
],
"mid": [
"2962731536",
"2798297823",
"2963121255",
"2796426482",
"2790466413",
"2560609797",
"2788158258",
"2211722331",
"1920022804"
],
"abstract": [
"3D shape models are becoming widely available and easier to capture, making available 3D information crucial for progress in object classification. Current state-of-the-art methods rely on CNNs to address this problem. Recently, we witness two types of CNNs being developed: CNNs based upon volumetric representations versus CNNs based upon multi-view representations. Empirical results from these two types of CNNs exhibit a large gap, indicating that existing volumetric CNN architectures and approaches are unable to fully exploit the power of 3D representations. In this paper, we aim to improve both volumetric CNNs and multi-view CNNs according to extensive analysis of existing approaches. To this end, we introduce two distinct network architectures of volumetric CNNs. In addition, we examine multi-view CNNs, where we introduce multiresolution filtering in 3D. Overall, we are able to outperform current state-of-the-art methods for both volumetric CNNs and multi-view CNNs. We provide extensive experiments designed to evaluate underlying design choices, thus providing a better understanding of the space of methods available for object classification on 3D data.",
"Standard convolutional neural networks assume a grid structured input is available and exploit discrete convolutions as their fundamental building blocks. This limits their applicability to many real-world applications. In this paper we propose Parametric Continuous Convolution, a new learnable operator that operates over non-grid structured data. The key idea is to exploit parameterized kernel functions that span the full continuous vector space. This generalization allows us to learn over arbitrary data structures as long as their support relationship is computable. Our experiments show significant improvement over the state-of-the-art in point cloud segmentation of indoor and outdoor scenes, and lidar motion estimation of driving scenes.",
"Few prior works study deep learning on point sets. PointNet is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.",
"Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder only uses about 7% of the parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http: www.merl.com research license#FoldingNet",
"This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website. this https URL",
"Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.",
"We present a network architecture for processing point clouds that directly operates on a collection of points represented as a sparse set of samples in a high-dimensional lattice. Naïvely applying convolutions on this lattice scales poorly, both in terms of memory and computational cost, as the size of the lattice increases. Instead, our network uses sparse bilateral convolutional layers as building blocks. These layers maintain efficiency by using indexing structures to apply convolutions only on occupied parts of the lattice, and allow flexible specifications of the lattice structure enabling hierarchical and spatially-aware feature learning, as well as joint 2D-3D reasoning. Both point-based and image-based representations can be easily incorporated in a network with such layers and the resulting model can be trained in an end-to-end manner. We present results on 3D segmentation tasks where our approach outperforms existing state-of-the-art techniques.",
"Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.",
"3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representation automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet - a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks."
]
} |
1904.09793 | 2936435387 | Point cloud based retrieval for place recognition is an emerging problem in the vision field. The main challenge is how to find an efficient way to encode the local features into a discriminative global descriptor. In this paper, we propose a Point Contextual Attention Network (PCAN), which can predict the significance of each local point feature based on point context. Our network makes it possible to pay more attention to the task-relevant features when aggregating local features. Experiments on various benchmark datasets show that the proposed network outperforms current state-of-the-art approaches. | Learned 3D Local Descriptors. 3DMatch @cite_9 uses voxel grids as input and a 3D convolutional network to distinguish positive and negative pairs. Compact Geometric Features (CGF) @cite_12 uses a histogram representation as network input to learn a compact local descriptor. PPFNet @cite_40 and PPF-FoldNet @cite_19 directly operate on points and use a point pair feature encoding of the local 3D geometry into patches. 3DFeat-Net @cite_4 proposes a weakly supervised network that learns both a 3D feature detector and a descriptor. However, these descriptors, which aim at extracting local features, are difficult to apply to global feature extraction as the amount of data grows. | {
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_19",
"@cite_40",
"@cite_12"
],
"mid": [
"2883357174",
"2566265240",
"2951730771",
"2962941647",
"2963666542"
],
"abstract": [
"In this paper, we propose the 3DFeat-Net which learns both 3D feature detector and descriptor for point cloud matching using weak supervision. Unlike many existing works, we do not require manual annotation of matching point clusters. Instead, we leverage on alignment and attention mechanisms to learn feature correspondences from GPS/INS-tagged 3D point clouds without explicitly specifying them. We create training and benchmark outdoor Lidar datasets, and experiments show that 3DFeat-Net obtains state-of-the-art performance on these gravity-aligned datasets.",
"Matching local geometric features on real-world depth images is a challenging task due to the noisy, low-resolution, and incomplete nature of 3D scan data. These difficulties limit the performance of current state-of-art methods, which are typically based on histograms over geometric properties. In this paper, we present 3DMatch, a data-driven model that learns a local volumetric patch descriptor for establishing correspondences between partial 3D data. To amass training data for our model, we propose a self-supervised feature learning method that leverages the millions of correspondence labels found in existing RGB-D reconstructions. Experiments show that our descriptor is not only able to match local geometry in new scenes for reconstruction, but also generalize to different tasks and spatial scales (e.g. instance-level object model alignment for the Amazon Picking Challenge, and mesh surface correspondence). Results show that 3DMatch consistently outperforms other state-of-the-art approaches by a significant margin. Code, data, benchmarks, and pre-trained models are available online at http: 3dmatch.cs.princeton.edu.",
"We present PPF-FoldNet for unsupervised learning of 3D local descriptors on pure point cloud geometry. Based on the folding-based auto-encoding of well known point pair features, PPF-FoldNet offers many desirable properties: it necessitates neither supervision, nor a sensitive local reference frame, benefits from point-set sparsity, is end-to-end, fast, and can extract powerful rotation invariant descriptors. Thanks to a novel feature visualization, its evolution can be monitored to provide interpretable insights. Our extensive experiments demonstrate that despite having six degree-of-freedom invariance and lack of training labels, our network achieves state of the art results in standard benchmark datasets and outperforms its competitors when rotations and varying point densities are present. PPF-FoldNet achieves @math higher recall on standard benchmarks, @math higher recall when rotations are introduced into the same datasets and finally, a margin of @math is attained when point density is significantly decreased.",
"We present PPFNet - Point Pair Feature NETwork for deeply learning a globally informed 3D local feature descriptor to find correspondences in unorganized point clouds. PPFNet learns local descriptors on pure geometry and is highly aware of the global context, an important cue in deep learning. Our 3D representation is computed as a collection of point-pair-features combined with the points and normals within a local vicinity. Our permutation invariant network design is inspired by PointNet and sets PPFNet to be ordering-free. As opposed to voxelization, our method is able to consume raw point clouds to exploit the full sparsity. PPFNet uses a novel N-tuple loss and architecture injecting the global information naturally into the local descriptor. It shows that context awareness also boosts the local feature representation. Qualitative and quantitative evaluations of our network suggest increased recall, improved robustness and invariance as well as a vital step in the 3D descriptor extraction performance.",
"We present an approach to learning features that represent the local geometry around a point in an unstructured point cloud. Such features play a central role in geometric registration, which supports diverse applications in robotics and 3D vision. Current state-of-the-art local features for unstructured point clouds have been manually crafted and none combines the desirable properties of precision, compactness, and robustness. We show that features with these properties can be learned from data, by optimizing deep networks that map high-dimensional histograms into low-dimensional Euclidean spaces. The presented approach yields a family of features, parameterized by dimension, that are both more compact and more accurate than existing descriptors."
]
} |
1904.09793 | 2936435387 | Point cloud based retrieval for place recognition is an emerging problem in the vision field. The main challenge is how to find an efficient way to encode the local features into a discriminative global descriptor. In this paper, we propose a Point Contextual Attention Network (PCAN), which can predict the significance of each local point feature based on point context. Our network makes it possible to pay more attention to the task-relevant features when aggregating local features. Experiments on various benchmark datasets show that the proposed network outperforms current state-of-the-art approaches. | Point cloud based retrieval can be framed as a matching problem over the global descriptors of 3D point clouds. Uy and Lee @cite_35 first proposed PointNetVLAD, a deep network combining PointNet @cite_0 and NetVLAD @cite_25 to extract a global descriptor from a scanned 3D point cloud for the retrieval task. Although adding a NetVLAD layer makes PointNetVLAD more effective than the vanilla PointNet architecture alone, it does not discriminate which local features positively contribute to the final global feature representation. Based on these observations, we add a context-aware attention mechanism to the global feature extraction pipeline. | {
"cite_N": [
"@cite_0",
"@cite_35",
"@cite_25"
],
"mid": [
"2560609797",
"2963708168",
"2179042386"
],
"abstract": [
"Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.",
"Unlike its image based counterpart, point cloud based retrieval for place recognition has remained as an unexplored and unsolved problem. This is largely due to the difficulty in extracting local feature descriptors from a point cloud that can subsequently be encoded into a global descriptor for the retrieval task. In this paper, we propose the PointNetVLAD where we leverage on the recent success of deep networks to solve point cloud based retrieval for place recognition. Specifically, our PointNetVLAD is a combination/modification of the existing PointNet and NetVLAD, which allows end-to-end training and inference to extract the global descriptor from a given 3D point cloud. Furthermore, we propose the \"lazy triplet and quadruplet\" loss functions that can achieve more discriminative and generalizable global descriptors to tackle the retrieval task. We create benchmark datasets for point cloud based retrieval for place recognition, and the experimental results on these datasets show the feasibility of our PointNetVLAD. Our code and datasets are publicly available on the project website.",
"We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following three principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the \"Vector of Locally Aggregated Descriptors\" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we develop a training procedure, based on a new weakly supervised ranking loss, to learn parameters of the architecture in an end-to-end manner from images depicting the same places over time downloaded from Google Street View Time Machine. Finally, we show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state of-the-art compact image representations on standard image retrieval benchmarks."
]
} |
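The attention-weighted aggregation that PCAN adds on top of a NetVLAD-style pipeline can be illustrated with a toy sketch: per-point significance scores reweight the local features before they are pooled into a global descriptor. Everything below (dot-product scoring, a plain weighted sum in place of a full NetVLAD layer) is a simplifying assumption for illustration, not the paper's architecture:

```python
import numpy as np

def attention_weighted_descriptor(local_feats, w_att):
    """Pool per-point local features into one global descriptor,
    reweighting each point by a significance score (PCAN-style sketch;
    a real model predicts the scores from point context with a
    sub-network rather than a single learned projection)."""
    scores = local_feats @ w_att                       # (N,) raw scores
    alpha = np.exp(scores - scores.max())              # softmax over points
    alpha /= alpha.sum()
    desc = (alpha[:, None] * local_feats).sum(axis=0)  # weighted pooling
    return desc / np.linalg.norm(desc)                 # L2-normalize

rng = np.random.default_rng(0)
feats = rng.normal(size=(128, 32))   # 128 points, 32-D local features
w = rng.normal(size=32)              # hypothetical attention projection
g = attention_weighted_descriptor(feats, w)
```

The key design point is that the attention weights sum to one over points, so task-irrelevant points are suppressed rather than averaged in with equal weight.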
1904.09810 | 2939158620 | We develop the Scott model of the programming language PCF in constructive predicative univalent type theory. To account for the non-termination in PCF, we work with the partial map classifier monad (also known as the lifting monad) from topos theory, which has been extended to constructive type theory by Knapp and Escardo. Our results show that lifting is a viable approach to partiality in univalent type theory. Moreover, we show that the Scott model can be constructed in a predicative and constructive setting. Other approaches to partiality either require some form of choice or higher inductive-inductive types. We show that one can do without these extensions. | Partiality in type theory has been the subject of recent study. We briefly discuss the different approaches. First, there are Capretta's delay monad and its quotient by weak bisimilarity, which have been studied by Uustalu, Chapman and Veltri @cite_1 . A drawback of the quotient is that some form of choice is needed (countable choice suffices) to show that it is again a monad. Another approach is laid out in @cite_3 by Altenkirch, Danielsson and Kraus. They construct (essentially by definition) the free @math -cpo with a least element using a higher inductive-inductive type. Moreover, they show that, assuming countable choice, their free @math -cpo coincides with the quotiented delay monad. In @cite_2 , Knapp showed that, assuming countable choice, a restricted version (using a dominance) of the lifting is isomorphic to the quotiented delay monad. | {
"cite_N": [
"@cite_1",
"@cite_3",
"@cite_2"
],
"mid": [
"2767001837",
"2963889233",
"2905240033"
],
"abstract": [
"",
"Capretta's delay monad can be used to model partial computations, but it has the \"wrong\" notion of built-in equality, strong bisimilarity. An alternative is to quotient the delay monad by the \"right\" notion of equality, weak bisimilarity. However, recent work by suggests that it is impossible to define a monad structure on the resulting construction in common forms of type theory without assuming (instances of) the axiom of countable choice. Using an idea from homotopy type theory---a higher inductive-inductive type---we construct a partiality monad without relying on countable choice. We prove that, in the presence of countable choice, our partiality monad is equivalent to the delay monad quotiented by weak bisimilarity. Furthermore we outline several applications.",
"We investigate partial functions and computability theory from within a constructive, univalent type theory. The focus is on placing computability into a larger mathematical context, rather than on a complete development of computability theory. We begin with a treatment of partial functions, using the notion of dominance, which is used in synthetic domain theory to discuss classes of partial maps. We relate this and other ideas from synthetic domain theory to other approaches to partiality in type theory. We show that the notion of dominance is difficult to apply in our setting: the set of �0 1 propositions investigated by Rosolini form a dominance precisely if a weak, but nevertheless unprovable, choice principle holds. To get around this problem, we suggest an alternative notion of partial function we call disciplined maps. In the presence of countable choice, this notion coincides with Rosolini’s. Using a general notion of partial function,we take the first steps in constructive computability theory. We do this both with computability as structure, where we have direct access to programs; and with computability as property, where we must work in a program-invariant way. We demonstrate the difference between these two approaches by showing how these approaches relate to facts about computability theory arising from topos-theoretic and typetheoretic concerns. Finally, we tie the two threads together: assuming countable choice and that all total functions N - N are computable (both of which hold in the effective topos), the Rosolini partial functions, the disciplined maps, and the computable partial functions all coincide. We observe, however, that the class of all partial functions includes non-computable partial functions."
]
} |
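Capretta's delay monad discussed above is simple enough to sketch even in an untyped language: a computation is either a finished value (`Now`) or one more suspended step (`Later`), and weak bisimilarity identifies computations that differ only in how many `Later` layers they take. A minimal illustrative sketch, not tied to any of the cited formalizations:

```python
class Now:
    """A computation that has terminated with a value."""
    def __init__(self, value):
        self.value = value

class Later:
    """A computation that needs (at least) one more step."""
    def __init__(self, thunk):
        self.thunk = thunk  # zero-argument callable returning another Delay

def bind(d, f):
    """Monadic bind: sequence f after d without forcing d to terminate."""
    if isinstance(d, Now):
        return f(d.value)
    return Later(lambda: bind(d.thunk(), f))

def run(d, fuel):
    """Peel at most `fuel` Later layers; None stands for 'not yet done'."""
    while fuel > 0 and isinstance(d, Later):
        d = d.thunk()
        fuel -= 1
    return d.value if isinstance(d, Now) else None

two_steps = Later(lambda: Later(lambda: Now(3)))
result = run(bind(two_steps, lambda x: Now(x + 1)), fuel=10)
```

Quotienting by weak bisimilarity amounts to declaring `two_steps` equal to `Now(3)`; the choice principles discussed above are what one needs to show the quotient is still a monad.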
1904.09810 | 2939158620 | We develop the Scott model of the programming language PCF in constructive predicative univalent type theory. To account for the non-termination in PCF, we work with the partial map classifier monad (also known as the lifting monad) from topos theory, which has been extended to constructive type theory by Knapp and Escardo. Our results show that lifting is a viable approach to partiality in univalent type theory. Moreover, we show that the Scott model can be constructed in a predicative and constructive setting. Other approaches to partiality either require some form of choice or higher inductive-inductive types. We show that one can do without these extensions. | Capretta's delay monad has been used to give a constructive approach to domain theory @cite_9 . However, the objects have the "wrong" equality, so that every object comes with an equivalence relation that maps must preserve. The framework of univalent mathematics in which we have placed our development provides a more natural approach. Moreover, we do not make use of Coq's impredicative Prop universe and our treatment incorporates directed complete posets (dcpos) and not just @math -cpos. | {
"cite_N": [
"@cite_9"
],
"mid": [
"1813662263"
],
"abstract": [
"We present a Coq formalization of constructive *** -cpos (extending earlier work by Paulin-Mohring) up to and including the inverse-limit construction of solutions to mixed-variance recursive domain equations, and the existence of invariant relations on those solutions. We then define operational and denotational semantics for both a simply-typed CBV language with recursion and an untyped CBV language, and establish soundness and adequacy results in each case."
]
} |
1904.09658 | 2937026839 | Embedding methods have achieved success in face recognition by comparing facial features in a latent semantic space. However, in a fully unconstrained face setting, the features learned by the embedding model could be ambiguous or may not even be present in the input face, leading to noisy representations. We propose Probabilistic Face Embeddings (PFEs), which represent each face image as a Gaussian distribution in the latent space. The mean of the distribution estimates the most likely feature values while the variance shows the uncertainty in the feature values. Probabilistic solutions can then be naturally derived for matching and fusing PFEs using the uncertainty information. Empirical evaluation on different baseline models, training datasets and benchmarks shows that the proposed method can improve the face recognition performance of deterministic embeddings by converting them into PFEs. The uncertainties estimated by PFEs also serve as good indicators of the potential matching accuracy, which are important for a risk-controlled recognition system. | To improve the robustness and interpretability of discriminative Deep Neural Networks (DNNs), deep uncertainty learning is receiving more attention @cite_2 @cite_50 @cite_34 . There are two main types of uncertainty: model uncertainty and data uncertainty. Model uncertainty refers to the uncertainty of the model parameters given the training data and can be reduced by collecting additional training data @cite_9 @cite_13 @cite_2 @cite_50 . Data uncertainty accounts for the uncertainty in the output whose primary source is the inherent noise in the input data, and hence cannot be eliminated with more training data @cite_34 . The uncertainty studied in our work can be categorized as data uncertainty. Techniques have been developed for estimating data uncertainty in different tasks, including classification and regression @cite_34 , where the target space is explicitly defined by labels. 
In contrast, probabilistic embeddings aim to estimate the uncertainty of the representations in latent spaces @cite_41 @cite_11 @cite_38 @cite_19 . Specific to face recognition, some studies @cite_20 @cite_8 @cite_37 have leveraged model uncertainty for the analysis and learning of face representations, but to our knowledge, ours is the first work that utilizes data uncertainty for face recognition. (Some in the literature have also used the terminology "data uncertainty" for a different purpose @cite_42 .) | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_8",
"@cite_41",
"@cite_9",
"@cite_42",
"@cite_19",
"@cite_50",
"@cite_2",
"@cite_34",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"2557579533",
"",
"2960055608",
"",
"2111051539",
"2039375240",
"2892605456",
"2964059111",
"2964144363",
"2600383743",
"1567512734",
"2757569040",
"2127265454"
],
"abstract": [
"We present a variational approximation to the information bottleneck of (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method \"Deep Variational Information Bottleneck\", or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.",
"",
"Learning unbiased models on imbalanced datasets is a significant challenge. Rare classes tend to get a concentrated representation in the classification space which hampers the generalization of learned boundaries to new test examples. In this paper, we demonstrate that the Bayesian uncertainty estimates directly correlate with the rarity of classes and the difficulty level of individual samples. Subsequently, we present a novel framework for uncertainty based class imbalance learning that follows two key insights: First, classification boundaries should be extended further away from a more uncertain (rare) class to avoid overfitting and enhance its generalization. Second, each sample should be modeled as a multi-variate Gaussian distribution with a mean vector and a covariance matrix defined by the sample's uncertainty. The learned boundaries should respect not only the individual samples but also their distribution in the feature space. Our proposed approach efficiently utilizes sample and class uncertainty information to learn robust features and more generalizable classifiers. We systematically study the class imbalance problem and derive a novel loss formulation for max-margin learning based on Bayesian uncertainty measure. The proposed method shows significant performance improvements on six benchmark datasets for face verification, attribute prediction, digit object classification and skin lesion detection.",
"",
"A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian \"evidence\" automatically embodies \"Occam's razor,\" penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.",
"The image of a face varies with the illumination, pose, and facial expression, thus we say that a single face image is of high uncertainty for representing the face. In this sense, a face image is just an observation and it should not be considered as the absolutely accurate representation of the face. As more face images from the same person provide more observations of the face, more face images may be useful for reducing the uncertainty of the representation of the face and improving the accuracy of face recognition. However, in a real world face recognition system, a subject usually has only a limited number of available face images and thus there is high uncertainty. In this paper, we attempt to improve the face recognition accuracy by reducing the uncertainty. First, we reduce the uncertainty of the face representation by synthesizing the virtual training samples. Then, we select useful training samples that are similar to the test sample from the set of all the original and synthesized virtual training samples. Moreover, we state a theorem that determines the upper bound of the number of useful training samples. Finally, we devise a representation approach based on the selected useful training samples to perform face recognition. Experimental results on five widely used face databases demonstrate that our proposed approach can not only obtain a high face recognition accuracy, but also has a lower computational complexity than the other state-of-the-art approaches.",
"Instance embeddings are an efficient and versatile image representation that facilitates applications like recognition, verification, retrieval, and clustering. Many metric learning methods represent the input as a single point in the embedding space. Often the distance between points is used as a proxy for match confidence. However, this can fail to represent uncertainty arising when the input is ambiguous, e.g., due to occlusion or blurriness. This work addresses this issue and explicitly models the uncertainty by hedging the location of each input in the embedding space. We introduce the hedged instance embedding (HIB) in which embeddings are modeled as random variables and the model is trained under the variational information bottleneck principle. Empirical results on our new N-digit MNIST dataset show that our method leads to the desired behavior of hedging its bets across the embedding space upon encountering ambiguous inputs. This results in improved performance for image matching and classification tasks, more structure in the learned embedding space, and an ability to compute a per-exemplar uncertainty measure that is correlated with downstream performance.",
"Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs - extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and nonlinearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.",
"",
"There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model - uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.",
"From the Publisher: Artificial \"neural networks\" are now widely used as flexible models for regression classification applications, but questions remain regarding what these models mean, and how they can safely be used when training data is limited. Bayesian Learning for Neural Networks shows that Bayesian methods allow complex neural network models to be used without fear of the \"overfitting\" that can occur with traditional neural network learning methods. Insight into the nature of these complex Bayesian models is provided by a theoretical investigation of the priors over functions that underlie them. Use of these models in practice is made possible using Markov chain Monte Carlo techniques. Both the theoretical and computational aspects of this work are of wider statistical interest, as they contribute to a better understanding of how Bayesian methods can be applied to complex problems. Presupposing only the basic knowledge of probability and statistics, this book should be of interest to many researchers in statistics, engineering, and artificial intelligence. Software for Unix systems that implements the methods described is freely available over the Internet.",
"Face recognition is a widely used technology with numerous large-scale applications, such as surveillance, social media and law enforcement. There has been tremendous progress in face recognition accuracy over the past few decades, much of which can be attributed to deep learning-based approaches during the last five years. Indeed, automated face recognition systems are now believed to surpass human performance in some scenarios. Despite this progress, a crucial question still remains unanswered: given a face representation, how many identities can it resolve? In other words, what is the capacity of the face representation? A scientific basis for estimating the capacity of a given face representation will not only benefit the evaluation and comparison of different face representations but will also establish an upper bound on the scalability of an automatic face recognition system. We cast the face capacity estimation problem under the information theoretic framework of capacity of a Gaussian noise channel. By explicitly accounting for two sources of representational noise: epistemic uncertainty and aleatoric variability, our approach is able to estimate the capacity of any given face representation. To demonstrate the efficacy of our approach, we estimate the capacity of a 128-dimensional DNN based face representation, FaceNet, and that of the classical Eigenfaces representation of the same dimensionality. Our experiments on unconstrained faces indicate that, (a) our proposed model yields a capacity upper bound of 5.8x @math for FaceNet and 1x @math for Eigenfaces at a false acceptance rate (FAR) of 1 , (b) the face representation capacity reduces drastically as you lower the desired FAR (for FaceNet; the capacity at FAR of 0.1 and 0.001 is 2.4x @math and 7.0x @math , respectively), and (c) the empirical performance of FaceNet is significantly below the theoretical limit.",
"Current work in lexical distributed representations maps each word to a point vector in low-dimensional space. Mapping instead to a density provides many interesting advantages, including better capturing uncertainty about a representation and its relationships, expressing asymmetries more naturally than dot product or cosine similarity, and enabling more expressive parameterization of decision boundaries. This paper advocates for density-based distributed embeddings and presents a method for learning representations in the space of Gaussian distributions. We compare performance on various word embedding benchmarks, investigate the ability of these embeddings to model entailment and other asymmetric relationships, and explore novel properties of the representation."
]
} |
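The uncertainty-aware comparison of two Gaussian embeddings described above can be sketched for diagonal covariances as follows (additive constants dropped; the exact parameterization in the cited work may differ). Unlike a plain distance, large variances pay an explicit log-variance penalty, so an uncertain pair cannot score arbitrarily high:

```python
import numpy as np

def mutual_likelihood_score(mu1, var1, mu2, var2):
    """Log-likelihood-style score for two Gaussian embeddings with
    diagonal covariances (constants dropped). The squared feature gap
    is tempered by the summed variances, but large variances also incur
    a log-variance penalty."""
    s = np.asarray(var1) + np.asarray(var2)   # summed per-dimension variance
    d = np.asarray(mu1) - np.asarray(mu2)     # per-dimension feature gap
    return float(-0.5 * np.sum(d ** 2 / s + np.log(s)))

mu = np.zeros(8)
confident = mutual_likelihood_score(mu, np.full(8, 0.1), mu, np.full(8, 0.1))
uncertain = mutual_likelihood_score(mu, np.full(8, 10.0), mu, np.full(8, 10.0))
```

With identical means, the low-variance pair scores higher than the high-variance one, which is exactly the "penalize the uncertainty" behavior that KL-style set comparisons lack.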
1904.09658 | 2937026839 | Embedding methods have achieved success in face recognition by comparing facial features in a latent semantic space. However, in a fully unconstrained face setting, the features learned by the embedding model could be ambiguous or may not even be present in the input face, leading to noisy representations. We propose Probabilistic Face Embeddings (PFEs), which represent each face image as a Gaussian distribution in the latent space. The mean of the distribution estimates the most likely feature values while the variance shows the uncertainty in the feature values. Probabilistic solutions can then be naturally derived for matching and fusing PFEs using the uncertainty information. Empirical evaluation on different baseline models, training datasets and benchmarks shows that the proposed method can improve the face recognition performance of deterministic embeddings by converting them into PFEs. The uncertainties estimated by PFEs also serve as good indicators of the potential matching accuracy, which are important for a risk-controlled recognition system. | Modeling faces as probabilistic distributions is not a new idea. In the field of face template/video matching, there exists abundant literature on modeling faces as probabilistic distributions @cite_32 @cite_5 , subspaces @cite_1 or manifolds @cite_5 @cite_17 in the feature space. However, the input for such methods is a set of face images rather than a single face image, and they use a between-distribution similarity or distance measure, e.g., KL-divergence, for comparison, which does not penalize the uncertainty. Meanwhile, some studies @cite_43 @cite_15 have attempted to build a fuzzy model of a given face using the features of face parts. In comparison, the proposed PFE represents each face image as a distribution in the latent space encoded by DNNs, and we use an uncertainty-aware log-likelihood score to compare the distributions. | {
"cite_N": [
"@cite_1",
"@cite_32",
"@cite_43",
"@cite_5",
"@cite_15",
"@cite_17"
],
"mid": [
"1996939238",
"1506778248",
"2097729189",
"2122691893",
"2120007697",
"566612420"
],
"abstract": [
"We introduce a novel method for face recognition from image sets. In our setting each test and training example is a set of images of an individual's face, not just a single image, so recognition decisions need to be based on comparisons of image sets. Methods for this have two main aspects: the models used to represent the individual image sets; and the similarity metric used to compare the models. Here, we represent images as points in a linear or affine feature space and characterize each image set by a convex geometric region (the affine or convex hull) spanned by its feature points. Set dissimilarity is measured by geometric distances (distances of closest approach) between convex models. To reduce the influence of outliers we use robust methods to discard input points that are far from the fitted model. The kernel trick allows the approach to be extended to implicit feature mappings, thus handling complex and nonlinear manifolds of face images. Experiments on two public face datasets show that our proposed methods outperform a number of existing state-of-the-art ones.",
"We address the problem of face recognition from a large set of images obtained over time - a task arising in many surveillance and authentication applications. A set or a sequence of images provides information about the variability in the appearance of the face which can be used for more robust recognition. We discuss different approaches to the use of this information, and show that when cast as a statistical hypothesis testing problem, the classification task leads naturally to an information-theoretic algorithm that classifies sets of images using the relative entropy (Kullback-Leibler divergence) between the estimated density of the input set and that of stored collections of images for each class. We demonstrate the performance of the proposed algorithm on two medium-sized data sets of approximately frontal face images, and describe an application of the method as part of a view-independent recognition system.",
"Pose variation remains to be a major challenge for real-world face recognition. We approach this problem through a probabilistic elastic matching method. We take a part based representation by extracting local features (e.g., LBP or SIFT) from densely sampled multi-scale image patches. By augmenting each feature with its location, a Gaussian mixture model (GMM) is trained to capture the spatial-appearance distribution of all face images in the training corpus. Each mixture component of the GMM is confined to be a spherical Gaussian to balance the influence of the appearance and the location terms. Each Gaussian component builds correspondence of a pair of features to be matched between two faces face tracks. For face verification, we train an SVM on the vector concatenating the difference vectors of all the feature pairs to decide if a pair of faces face tracks is matched or not. We further propose a joint Bayesian adaptation algorithm to adapt the universally trained GMM to better model the pose variations between the target pair of faces face tracks, which consistently improves face verification accuracy. Our experiments show that our method outperforms the state-of-the-art in the most restricted protocol on Labeled Face in the Wild (LFW) and the YouTube video face database by a significant margin.",
"In many automatic face recognition applications, a set of a person's face images is available rather than a single image. In this paper, we describe a novel method for face recognition using image sets. We propose a flexible, semi-parametric model for learning probability densities confined to highly non-linear but intrinsically low-dimensional manifolds. The model leads to a statistical formulation of the recognition problem in terms of minimizing the divergence between densities estimated on these manifolds. The proposed method is evaluated on a large data set, acquired in realistic imaging conditions with severe illumination variation. Our algorithm is shown to match the best and outperform other state-of-the-art algorithms in the literature, achieving 94 recognition rate on average.",
"Face is one of the important biometric identifier used for human recognition. The face recognition involves the computation of similarity between face images belonging to the determination of the identity of the face. The accurate recognition of face images is essential for the applications including credit card authentication, passport identification, internet security, criminal databases, biometric cryptosystems etc. Due to the increasing need for the surveillance and security related applications in access control, law enforcement, and information safety due to criminal activities, the research interest in the face recognition has grown considerably in the domain of the pattern recognition and image analysis. A number of approaches for face recognition have been proposed in the literature ( 2000), ( 1995). Many researchers have addressed face recognition based on geometrical features and template matching (Brunelli and Poggio, 1993). There are several well known face recognition methods such as Eigenfaces (Turk and Pentland 1991), Fisherfaces ( 1997), (Kim and Kitter 2005), Laplacianfaces ( 2005). The wavelet based Gabor function provide a favorable trade off between spatial resolution and frequency resolution (Gabor 1946). Gabor wavelets render superior representation for face recognition (Zhang, et al 2005), (Shan, et al 2004), (Olugbenga and Yang 2002). In recent survey, various potential problems and challenges in the face detection are explored (Yang, M.H., et al, 2002). Recent face detection methods based on data-driven learning techniques, such as the statistical modeling methods (Moghaddam and Pentland 1997), (Schneiderman, and Kanade, 2000), (Shih and Liu 2004), the statistical learning theory and SVM based methods (, 2001). 
Schneiderman and Kanade have developed the first algorithm that can reliably detect human faces with out-of-plane rotation and the first algorithm that can reliably detect passenger cars over a wide range of viewpoints (Schneiderman and Kanade 2000). The segmentation of potential face region in a digital image is a prelude to the face detection, since the search for the facial features is confined to the segmented face region. Several approaches have been used so far for the detection of face regions using skin color information. In (Wu, H.Q., et al, 1999), a face is detected using a fuzzy pattern matching method based on skin and hair color. This method has high detection rate, but it fails if the hair is not black and the face region is not elliptic. A face detection algorithm for color images using a skin-tone color model and facial features is",
"The manifold of Symmetric Positive Definite (SPD) matrices has been successfully used for data representation in image set classification. By endowing the SPD manifold with Log-Euclidean Metric, existing methods typically work on vector-forms of SPD matrix logarithms. This however not only inevitably distorts the geometrical structure of the space of SPD matrix logarithms but also brings low efficiency especially when the dimensionality of SPD matrix is high. To overcome this limitation, we propose a novel metric learning approach to work directly on logarithms of SPD matrices. Specifically, our method aims to learn a tangent map that can directly transform the matrix logarithms from the original tangent space to a new tangent space of more discriminability. Under the tangent map framework, the novel metric learning can then be formulated as an optimization problem of seeking a Mahalanobis-like matrix, which can take the advantage of traditional metric learning techniques. Extensive evaluations on several image set classification tasks demonstrate the effectiveness of our proposed metric learning method."
]
} |
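The Log-Euclidean Metric relied on in the last abstract above compares SPD matrices through their matrix logarithms. A minimal sketch of that distance in pure NumPy, using the eigendecomposition of a symmetric matrix (function names are illustrative, not from the cited work):

```python
import numpy as np

def spd_log(A):
    # Matrix logarithm of a symmetric positive definite (SPD) matrix
    # via its eigendecomposition: log(A) = V diag(log w) V^T.
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    # Log-Euclidean distance: Frobenius norm of the difference of the
    # matrix logarithms, i.e. a flat metric on the tangent space.
    return np.linalg.norm(spd_log(A) - spd_log(B), ord="fro")

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
B = np.eye(2)
d = log_euclidean_distance(A, B)
```

Working on the vectorized logarithms this way is what the last abstract calls "vector-forms of SPD matrix logarithms"; its proposed metric learning instead learns a tangent map applied directly to `spd_log(A)`.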
1904.09658 | 2937026839 | Embedding methods have achieved success in face recognition by comparing facial features in a latent semantic space. However, in a fully unconstrained face setting, the features learned by the embedding model could be ambiguous or may not even be present in the input face, leading to noisy representations. We propose Probabilistic Face Embeddings (PFEs), which represent each face image as a Gaussian distribution in the latent space. The mean of the distribution estimates the most likely feature values while the variance shows the uncertainty in the feature values. Probabilistic solutions can then be naturally derived for matching and fusing PFEs using the uncertainty information. Empirical evaluation on different baseline models, training datasets and benchmarks show that the proposed method can improve the face recognition performance of deterministic embeddings by converting them into PFEs. The uncertainties estimated by PFEs also serve as good indicators of the potential matching accuracy, which are important for a risk-controlled recognition system. | In contrast to the methods above, recent work on face template video matching aims to leverage the saliency of deep CNN embeddings by aggregating the deep features of all faces into a single compact vector @cite_25 @cite_35 @cite_24 @cite_3 . In these methods, a separate module learns to predict the quality of each face in the image set, which is then normalized for a weighted pooling of feature vectors. We show that a solution can be naturally derived under our framework, which not only gives a probabilistic explanation for quality-aware pooling methods, but also leads to a more general solution where an image set can also be modeled as a PFE representation. | {
"cite_N": [
"@cite_24",
"@cite_35",
"@cite_25",
"@cite_3"
],
"mid": [
"2964254778",
"2963216120",
"2963559058",
"2915211455"
],
"abstract": [
"",
"This paper targets on the problem of set to set recognition, which learns the metric between two image sets. Images in each set belong to the same identity. Since images in a set can be complementary, they hopefully lead to higher accuracy in practical applications. However, the quality of each sample cannot be guaranteed, and samples with poor quality will hurt the metric. In this paper, the quality aware network (QAN) is proposed to confront this problem, where the quality of each sample can be automatically learned although such information is not explicitly provided in the training stage. The network has two branches, where the first branch extracts appearance feature embedding for each sample and the other branch predicts quality score for each sample. Features and quality scores of all samples in a set are then aggregated to generate the final feature embedding. We show that the two branches can be trained in an end-to-end manner given only the set-level identity annotation. Analysis on gradient spread of this mechanism indicates that the quality learned by the network is beneficial to set-to-set recognition and simplifies the distribution that the network needs to fit. Experiments on both face verification and person re-identification show advantages of the proposed QAN. The source code and network structure can be downloaded at GitHub.",
"This paper presents a Neural Aggregation Network (NAN) for video face recognition. The network takes a face video or face image set of a person with a variable number of face images as its input, and produces a compact, fixed-dimension feature representation for recognition. The whole network is composed of two modules. The feature embedding module is a deep Convolutional Neural Network (CNN) which maps each face image to a feature vector. The aggregation module consists of two attention blocks which adaptively aggregate the feature vectors to form a single feature inside the convex hull spanned by them. Due to the attention mechanism, the aggregation is invariant to the image order. Our NAN is trained with a standard classification or verification loss without any extra supervision signal, and we found that it automatically learns to advocate high-quality face images while repelling low-quality ones such as blurred, occluded and improperly exposed faces. The experiments on IJB-A, YouTube Face, Celebrity-1000 video face recognition benchmarks show that it consistently outperforms naive aggregation methods and achieves the state-of-the-art accuracy.",
"We propose a new approach to video face recognition. Our component-wise feature aggregation network (C-FAN) accepts a set of face images of a subject as an input, and outputs a single feature vector as the face representation of the set for the recognition task. The whole network is trained in two steps: (i) train a base CNN for still image face recognition; (ii) add an aggregation module to the base network to learn the quality value for each feature component, which adaptively aggregates deep feature vectors into a single vector to represent the face in a video. C-FAN automatically learns to retain salient face features with high quality scores while suppressing features with low quality scores. The experimental results on three benchmark datasets, YouTube Faces, IJB-A, and IJB-S show that the proposed C-FAN network is capable of generating a compact feature vector with 512 dimensions for a video sequence by efficiently aggregating feature vectors of all the video frames to achieve state of the art performance."
]
} |
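The quality-aware aggregation described in the QAN/NAN/C-FAN abstracts above reduces a variable-size set of per-image features to one vector by softmax-normalizing predicted quality scores and taking a weighted sum. A minimal NumPy sketch under that reading (the quality scores here are placeholders; in the cited methods they are predicted by a learned branch):

```python
import numpy as np

def quality_weighted_pool(features, qualities):
    # features: (N, D), one embedding per face image in the set.
    # qualities: (N,), unnormalized quality scores for each image.
    # Softmax-normalize the scores, then pool by weighted sum.
    w = np.exp(qualities - qualities.max())
    w /= w.sum()
    return w @ features

feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.5, 0.5]])
scores = np.array([5.0, -5.0, 0.0])   # placeholder scores
pooled = quality_weighted_pool(feats, scores)
```

With uniform scores this degenerates to plain average pooling; a dominant score makes the output approach that image's embedding, which is the "advocate high-quality, repel low-quality" behavior the NAN abstract describes.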
1904.09149 | 2937282529 | Distillation-based learning boosts the performance of the miniaturized neural network based on the hypothesis that the representation of a teacher model can be used as structured and relatively weak supervision, and thus would be easily learned by a miniaturized model. However, we find that the representation of a converged heavy model is still a strong constraint for training a small student model, which leads to a high lower bound of congruence loss. In this work, inspired by curriculum learning we consider the knowledge distillation from the perspective of curriculum learning by routing. Instead of supervising the student model with a converged teacher model, we supervised it with some anchor points selected from the route in parameter space that the teacher model passed by, as we called route constrained optimization (RCO). We experimentally demonstrate this simple operation greatly reduces the lower bound of congruence loss for knowledge distillation, hint and mimicking learning. On close-set classification tasks like CIFAR100 and ImageNet, RCO improves knowledge distillation by 2.14 and 1.5 respectively. For the sake of evaluating the generalization, we also test RCO on the open-set face recognition task MegaFace. | Hint-based learning is often used for open-set classification such as face recognition and person re-identification. FitNet @cite_4 first introduced more supervision by exploiting intermediate-level feature maps from the hidden layers of the teacher to guide the training process of the student. Afterward, Zagoruyko et al. @cite_26 proposed a method to transfer attention maps from the teacher to the student. Yim et al. @cite_28 defined the distilled knowledge from the teacher network as the flow of the solution process (FSP), which is calculated by the inner product between feature maps from two selected layers. | {
"cite_N": [
"@cite_28",
"@cite_26",
"@cite_4"
],
"mid": [
"2739879705",
"2561238782",
"1690739335"
],
"abstract": [
"We introduce a novel technique for knowledge transfer, where knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN. As the DNN performs a mapping from the input space to the output space through many layers sequentially, we define the distilled knowledge to be transferred in terms of flow between layers, which is calculated by computing the inner product between features from two layers. When we compare the student DNN and the original network with the same size as the student DNN but trained without a teacher network, the proposed method of transferring the distilled knowledge as the flow between two layers exhibits three important phenomena: (1) the student DNN that learns the distilled knowledge is optimized much faster than the original model, (2) the student DNN outperforms the original DNN, and (3) the student DNN can learn the distilled knowledge from a teacher DNN that is trained at a different task, and the student DNN outperforms the original DNN that is trained from scratch.",
"Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures.",
"While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network."
]
} |
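The FSP ("flow of the solution process") knowledge described in the first abstract above is a Gram-like matrix of inner products between the channels of two feature maps, averaged over spatial positions. A minimal NumPy sketch of that computation (shapes and names are illustrative):

```python
import numpy as np

def fsp_matrix(f1, f2):
    # f1: (H, W, M) and f2: (H, W, N) feature maps from two layers.
    # Entry (i, j) is the spatial mean of the product of channel i of
    # f1 and channel j of f2 -- the inter-layer "flow" inner product.
    h, w, _ = f1.shape
    return np.einsum("hwm,hwn->mn", f1, f2) / (h * w)

f_a = np.ones((4, 4, 3))
f_b = np.ones((4, 4, 5))
G = fsp_matrix(f_a, f_b)   # shape (3, 5), all entries 1.0
```

In FSP-style distillation the student is then trained so that its own FSP matrices match the teacher's, e.g. with a squared-error loss between the two matrices.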
1904.09405 | 2939140832 | Scene text recognition has recently been widely treated as a sequence-to-sequence prediction problem, where traditional fully-connected-LSTM (FC-LSTM) has played a critical role. Due to the limitation of FC-LSTM, existing methods have to convert 2-D feature maps into 1-D sequential feature vectors, resulting in severe damages of the valuable spatial and structural information of text images. In this paper, we argue that scene text recognition is essentially a spatiotemporal prediction problem for its 2-D image inputs, and propose a convolution LSTM (ConvLSTM)-based scene text recognizer, namely, FACLSTM, i.e., Focused Attention ConvLSTM, where the spatial correlation of pixels is fully leveraged when performing sequential prediction with LSTM. Particularly, the attention mechanism is properly incorporated into an efficient ConvLSTM structure via the convolutional operations and additional character center masks are generated to help focus attention on right feature areas. The experimental results on benchmark datasets IIIT5K, SVT and CUTE demonstrate that our proposed FACLSTM performs competitively on the regular, low-resolution and noisy text images, and outperforms the state-of-the-art approaches on the curved text with large margins. | The existing scene text recognizers can be grouped into two categories, i.e., the ones utilizing traditional techniques and the ones based on deep learning techniques. Methods belonging to the first category were mainly proposed before 2015, and follow a bottom-up routine, i.e., detecting and recognizing individual characters first, followed by word formation. @cite_31 provided a comprehensive survey of these methods. By contrast, the deep learning-based recognizers depend on end-to-end trainable deep networks, where feature extraction and sequential translation are integrated into one unified framework. 
According to the literature, the deep learning-based recognizers are now the dominant solutions to scene text recognition, and surpass traditional ones by large margins. Therefore, in this section, we only review recognizers applying deep learning techniques, along with ConvLSTM and related variants. | {
"cite_N": [
"@cite_31"
],
"mid": [
"2135231474"
],
"abstract": [
"This paper analyzes, compares, and contrasts technical challenges, methods, and the performance of text detection and recognition research in color imagery. It summarizes the fundamental problems and enumerates factors that should be considered when addressing these problems. Existing techniques are categorized as either stepwise or integrated and sub-problems are highlighted including text localization, verification, segmentation and recognition. Special issues associated with the enhancement of degraded text and the processing of video text, multi-oriented, perspectively distorted and multilingual text are also addressed. The categories and sub-categories of text are illustrated, benchmark datasets are enumerated, and the performance of the most representative approaches is compared. This review provides a fundamental comparison and analysis of the remaining problems in the field."
]
} |
1904.09405 | 2939140832 | Scene text recognition has recently been widely treated as a sequence-to-sequence prediction problem, where traditional fully-connected-LSTM (FC-LSTM) has played a critical role. Due to the limitation of FC-LSTM, existing methods have to convert 2-D feature maps into 1-D sequential feature vectors, resulting in severe damages of the valuable spatial and structural information of text images. In this paper, we argue that scene text recognition is essentially a spatiotemporal prediction problem for its 2-D image inputs, and propose a convolution LSTM (ConvLSTM)-based scene text recognizer, namely, FACLSTM, i.e., Focused Attention ConvLSTM, where the spatial correlation of pixels is fully leveraged when performing sequential prediction with LSTM. Particularly, the attention mechanism is properly incorporated into an efficient ConvLSTM structure via the convolutional operations and additional character center masks are generated to help focus attention on right feature areas. The experimental results on benchmark datasets IIIT5K, SVT and CUTE demonstrate that our proposed FACLSTM performs competitively on the regular, low-resolution and noisy text images, and outperforms the state-of-the-art approaches on the curved text with large margins. | As explained in @cite_29, the main drawback of traditional FC-LSTM was its usage of full connections in the input-to-state and state-to-state transitions, which resulted in the neglect of spatial information. To retain such important information, ConvLSTM, proposed by @cite_29, replaced all of the full connections of traditional FC-LSTM with convolutional operations, and extended the 2-D features and states into 3-D, as shown in Fig. . Their experimental results demonstrated the superiority of ConvLSTM over traditional FC-LSTM. Thereafter, some variants of ConvLSTM have been developed for action recognition @cite_6, object detection in video @cite_23, and gesture recognition @cite_27 @cite_3, etc. 
For example, @cite_25 combined ConvLSTM with 3-D convolution in a multimodal model, and achieved promising gesture recognition performance. @cite_6 designed a motion-based attention mechanism and combined it with ConvLSTM in their VideoLSTM, which was proposed for action recognition in videos. | {
"cite_N": [
"@cite_29",
"@cite_3",
"@cite_6",
"@cite_27",
"@cite_23",
"@cite_25"
],
"mid": [
"1485009520",
"",
"",
"2891150346",
"2963212638",
"2595328592"
],
"abstract": [
"The goal of precipitation nowcasting is to predict the future rainfall intensity in a local region over a relatively short period of time. Very few previous studies have examined this crucial and challenging weather forecasting problem from the machine learning perspective. In this paper, we formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem in which both the input and the prediction target are spatiotemporal sequences. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, we propose the convolutional LSTM (ConvLSTM) and use it to build an end-to-end trainable model for the precipitation nowcasting problem. Experiments show that our ConvLSTM network captures spatiotemporal correlations better and consistently outperforms FC-LSTM and the state-of-the-art operational ROVER algorithm for precipitation nowcasting.",
"",
"",
"Convolutional long short-term memory (LSTM) networks have been widely used for action gesture recognition, and different attention mechanisms have also been embedded into the LSTM or the convolutional LSTM (ConvLSTM) networks. Based on the previous gesture recognition architectures which combine the three-dimensional convolution neural network (3DCNN) and ConvLSTM, this paper explores the effects of attention mechanism in ConvLSTM. Several variants of ConvLSTM are evaluated: (a) Removing the convolutional structures of the three gates in ConvLSTM, (b) Applying the attention mechanism on the input of ConvLSTM, (c) Reconstructing the input and (d) output gates respectively with the modified channel-wise attention mechanism. The evaluation results demonstrate that the spatial convolutions in the three gates scarcely contribute to the spatiotemporal feature fusion, and the attention mechanisms embedded into the input and output gates cannot improve the feature fusion. In other words, ConvLSTM mainly contributes to the temporal fusion along with the recurrent steps to learn the long-term spatiotemporal features, when taking as input the spatial or spatiotemporal features. On this basis, a new variant of LSTM is derived, in which the convolutional structures are only embedded into the input-to-state transition of LSTM. The code of the LSTM variants is publicly available.",
"This paper introduces an online model for object detection in videos designed to run in real-time on low-powered mobile and embedded devices. Our approach combines fast single-image object detection with convolutional long short term memory (LSTM) layers to create an inter-weaved recurrent-convolutional architecture. Additionally, we propose an efficient Bottleneck-LSTM layer that significantly reduces computational cost compared to regular LSTMs. Our network achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate feature maps across frames. This approach is substantially faster than existing detection methods in video, outperforming the fastest single-frame models in model size and computational cost while attaining accuracy comparable to much more expensive single-frame models on the Imagenet VID 2015 dataset. Our model reaches a real-time inference speed of up to 15 FPS on a mobile CPU.",
"Gesture recognition aims to recognize meaningful movements of human bodies, and is of utmost importance in intelligent human–computer robot interactions. In this paper, we present a multimodal gesture recognition method based on 3-D convolution and convolutional long-short-term-memory (LSTM) networks. The proposed method first learns short-term spatiotemporal features of gestures through the 3-D convolutional neural network, and then learns long-term spatiotemporal features by convolutional LSTM networks based on the extracted short-term spatiotemporal features. In addition, fine-tuning among multimodal data is evaluated, and we find that it can be considered as an optional skill to prevent overfitting when no pre-trained models exist. The proposed method is verified on the ChaLearn LAP large-scale isolated gesture data set (IsoGD) and the Sheffield Kinect gesture (SKIG) data set. The results show that our proposed method can obtain the state-of-the-art recognition accuracy (51.02 on the validation set of IsoGD and 98.89 on SKIG)."
]
} |
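The defining change in ConvLSTM, per the first abstract above, is that every input-to-state and state-to-state product is a convolution, so the hidden and cell states keep their 2-D layout instead of being flattened as in FC-LSTM. A single-channel, single-step sketch using SciPy's 2-D convolution (kernel and bias names are illustrative, not from the cited work):

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, kernels, biases):
    # x, h, c: (H, W) input, hidden state, and cell state grids.
    # kernels[g] = (k_x, k_h): convolution kernels that replace the
    # fully connected weight matrices of FC-LSTM for gate g.
    def pre(g):
        k_x, k_h = kernels[g]
        return (convolve2d(x, k_x, mode="same")
                + convolve2d(h, k_h, mode="same") + biases[g])
    i = sigmoid(pre("i"))        # input gate
    f = sigmoid(pre("f"))        # forget gate
    o = sigmoid(pre("o"))        # output gate
    g = np.tanh(pre("g"))        # candidate cell state
    c_new = f * c + i * g        # elementwise, per grid cell
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
make = lambda: (rng.normal(size=(3, 3)), rng.normal(size=(3, 3)))
kernels = {g: make() for g in "ifog"}
biases = {g: 0.0 for g in "ifog"}
x = rng.normal(size=(5, 5))
h0, c0 = np.zeros((5, 5)), np.zeros((5, 5))
h1, c1 = convlstm_step(x, h0, c0, kernels, biases)
```

Because the gates are convolutional, the output state has the same spatial extent as the input, which is the property FACLSTM exploits when decoding directly from 2-D feature maps.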
1904.09307 | 2938231299 | Given a mapped environment, we formulate the problem of visually tracking and following an evader using a probabilistic framework. In this work, we consider a non-holonomic robot with a limited visibility depth sensor in an indoor environment with obstacles. The mobile robot that follows the target is considered a pursuer and the agent being followed is considered an evader. We propose a probabilistic framework for both the pursuer and evader to achieve their conflicting goals. We introduce a smart evader that has information about the location of the pursuer. The goal of this variant of the evader is to avoid being tracked by the pursuer by using the visibility region information obtained from the pursuer, to further challenge the proposed smart pursuer. To validate the efficiency of the framework, we conduct several experiments in simulation by using Gazebo and evaluate the success rate of tracking an evader in various environments with different pursuer to evader speed ratios. Through our experiments we validate our hypothesis that a smart pursuer tracks an evader more effectively than a pursuer that just navigates in the environment randomly. We also validate that an evader that is aware of the actions of the pursuer is more successful at avoiding getting tracked by a smart pursuer than a random evader. Finally, we empirically show that while a smart pursuer does increase it's average success rate of tracking compared to a random pursuer, there is an increased variance in its success rate distribution when the evader becomes aware of its actions. | Megiddo @cite_12 proved that computing the search number of a graph, i.e., the complexity of searching it, is NP-hard. In both these cases, visibility is not considered. The evader is found or captured if and only if both pursuer and evader are on the same vertex of the graph. 
The cop number is defined as a theoretical limit on the number of pursuers that can be used on a particular graph to successfully capture an evader. The cop number varies for different graphs based on the structure (topology) of the graph. Since we use a single pursuer in our work, we need not consider the cop number; instead, we focus on finding a goal location for the pursuer that will maximize its tracking rate. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2154788235"
],
"abstract": [
"T. Parsons originally proposed and studied the following pursuit-evasion problem on graphs: Members of a team of searchers traverse the edges of a graph G in pursuit of a fugitive, who moves along the edges of the graph with complete knowledge of the locations of the pursuers. What is the smallest number s ( G ) of searchers that will suffice for guaranteeing capture of the fugitive? It is shown that determining whether s ( G ) ≤ K , for a given integer K , is NP-complete for general graphs but can be solved in linear time for trees. We also provide a structural characterization of those graphs G with s ( G ) ≤ K for K = 1, 2, 3."
]
} |
1904.09307 | 2938231299 | Given a mapped environment, we formulate the problem of visually tracking and following an evader using a probabilistic framework. In this work, we consider a non-holonomic robot with a limited visibility depth sensor in an indoor environment with obstacles. The mobile robot that follows the target is considered a pursuer and the agent being followed is considered an evader. We propose a probabilistic framework for both the pursuer and evader to achieve their conflicting goals. We introduce a smart evader that has information about the location of the pursuer. The goal of this variant of the evader is to avoid being tracked by the pursuer by using the visibility region information obtained from the pursuer, to further challenge the proposed smart pursuer. To validate the efficiency of the framework, we conduct several experiments in simulation by using Gazebo and evaluate the success rate of tracking an evader in various environments with different pursuer to evader speed ratios. Through our experiments we validate our hypothesis that a smart pursuer tracks an evader more effectively than a pursuer that just navigates in the environment randomly. We also validate that an evader that is aware of the actions of the pursuer is more successful at avoiding getting tracked by a smart pursuer than a random evader. Finally, we empirically show that while a smart pursuer does increase it's average success rate of tracking compared to a random pursuer, there is an increased variance in its success rate distribution when the evader becomes aware of its actions. | Using game theory, @cite_9 formulated the pursuit-evasion problem as a partial information Markov game, and proposed a Nash solution to the resulting one-step non-zero sum game, provided the evader has access to the pursuer's information. 
For the visibility-based variant, where both pursuer and evader are holonomic with bounded speeds and both have complete knowledge of the map, @cite_4 presented strategies for the players that are in Nash equilibrium. | {
"cite_N": [
"@cite_9",
"@cite_4"
],
"mid": [
"2113367504",
"1971322941"
],
"abstract": [
"This paper addresses the control of a team of autonomous agents pursuing a smart evader in a non-accurately mapped terrain. By describing the problem as a partial information Markov game, we are able to integrate map-learning and pursuit. We propose receding horizon control policies, in which the pursuers and the evader try to respectively maximize and minimize the probability of capture at the next time instant. Since this probability is conditioned to distinct observations for each team, the resulting game is nonzero-sum. When the evader has access to the pursuers' information, we show that a Nash solution to the one-step nonzero-sum game always exists. Moreover, we propose a method to compute the Nash equilibrium policies by solving an equivalent zero-sum matrix game. A simulation example shows the feasibility of the proposed approach.",
"In this paper, we present a game-theoretic analysis of a visibility-based pursuit-evasion game in a planar environment containing obstacles. The pursuer and the evader are holonomic having bounded speeds. Both players have a complete map of the environment. Both players have omnidirectional vision and have knowledge about each other's current position as long as they are visible to each other. The pursuer wants to maintain visibility of the evader for the maximum possible time and the evader wants to escape the pursuer's sight as soon as possible. Under this information structure, we present necessary and sufficient conditions for surveillance and escape. We present strategies for the players that are in Nash equilibrium. The strategies are a function of the value of the game. Using these strategies, we construct a value function by integrating the adjoint equations backward in time from the termination situations provided by the corners in the environment. From these value functions we recompute the control strategies for the players to obtain optimal trajectories for the players near the termination situation. This is the first work that presents the necessary and sufficient conditions for tracking for a visibility based pursuit-evasion game and presents the equilibrium strategies for the players."
]
} |
1904.09307 | 2938231299 | Given a mapped environment, we formulate the problem of visually tracking and following an evader using a probabilistic framework. In this work, we consider a non-holonomic robot with a limited visibility depth sensor in an indoor environment with obstacles. The mobile robot that follows the target is considered a pursuer and the agent being followed is considered an evader. We propose a probabilistic framework for both the pursuer and evader to achieve their conflicting goals. We introduce a smart evader that has information about the location of the pursuer. The goal of this variant of the evader is to avoid being tracked by the pursuer by using the visibility region information obtained from the pursuer, to further challenge the proposed smart pursuer. To validate the efficiency of the framework, we conduct several experiments in simulation by using Gazebo and evaluate the success rate of tracking an evader in various environments with different pursuer-to-evader speed ratios. Through our experiments we validate our hypothesis that a smart pursuer tracks an evader more effectively than a pursuer that just navigates in the environment randomly. We also validate that an evader that is aware of the actions of the pursuer is more successful at avoiding getting tracked by a smart pursuer than a random evader. Finally, we empirically show that while a smart pursuer does increase its average success rate of tracking compared to a random pursuer, there is an increased variance in its success rate distribution when the evader becomes aware of its actions. | Some researchers have utilized information about the game environment to create partition regions where the pursuers and evaders might traverse. One such method @cite_5 converts an occupancy grid map to a graph with restrictions by partitioning the workspace.
They decompose the occupancy grid using Generalized Voronoi Diagrams to obtain a reduced graph representation of the environment with a smaller action space, which simplifies the pursuer's traversal of the environment. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2159197431"
],
"abstract": [
"In this paper we present a novel graph theoretic problem, called GRAPH-CLEAR, useful to model surveillance tasks where multiple robots are used to detect all possible intruders in a given indoor environment. We provide a formal definition of the problem and we investigate its basic theoretical properties, showing that the problem is NP-complete. We then present an algorithm to compute a strategy for the restriction of the problem to trees and present a method how to use this solution in applications. The method is then tested in simple simulations. GRAPH-CLEAR is useful to describe multirobot pursuit evasion games when robots have limited sensing capabilities, i.e. multiple agents are needed to perform basic patrolling operations."
]
} |
1904.09307 | 2938231299 | Given a mapped environment, we formulate the problem of visually tracking and following an evader using a probabilistic framework. In this work, we consider a non-holonomic robot with a limited visibility depth sensor in an indoor environment with obstacles. The mobile robot that follows the target is considered a pursuer and the agent being followed is considered an evader. We propose a probabilistic framework for both the pursuer and evader to achieve their conflicting goals. We introduce a smart evader that has information about the location of the pursuer. The goal of this variant of the evader is to avoid being tracked by the pursuer by using the visibility region information obtained from the pursuer, to further challenge the proposed smart pursuer. To validate the efficiency of the framework, we conduct several experiments in simulation by using Gazebo and evaluate the success rate of tracking an evader in various environments with different pursuer-to-evader speed ratios. Through our experiments we validate our hypothesis that a smart pursuer tracks an evader more effectively than a pursuer that just navigates in the environment randomly. We also validate that an evader that is aware of the actions of the pursuer is more successful at avoiding getting tracked by a smart pursuer than a random evader. Finally, we empirically show that while a smart pursuer does increase its average success rate of tracking compared to a random pursuer, there is an increased variance in its success rate distribution when the evader becomes aware of its actions. | A pursuit-evasion game can also be modeled with a probabilistic framework @cite_13 . In such a framework, each agent is governed by its unique transition probability function that depends on two factors: the agent's actions and an observation probability function that estimates the location of obstacles and the other agent's location.
The pursuer agent tries to maximize the probability of capturing the evader at every instant, and the evader tries to minimize the probability of getting captured. Probabilistic pursuit-evasion games may not yield optimal policies that minimize expected capture time, but greedy algorithms can efficiently compute sub-optimal policies with good performance. Our work falls into this category, since we model the pursuer as a probabilistic agent that tries to maximize its success rate of tracking the evader. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2160484294"
],
"abstract": [
"We consider the problem of having a team of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) pursue a second team of evaders while concurrently building a map in an unknown environment. We cast the problem in a probabilistic game theoretical framework, and consider two computationally feasible greedy pursuit policies: local-max and global-max. To implement this scenario on real UAVs and UGVs, we propose a distributed hierarchical hybrid system architecture which emphasizes the autonomy of each agent, yet allows for coordinated team efforts. We describe the implementation of the architecture on a fleet of UAVs and UGVs, detailing components such as high-level pursuit policy computation, map building and interagent communication, and low-level navigation, sensing, and control. We present both simulation and experimental results of real pursuit-evasion games involving our fleet of UAVs and UGVs, and evaluate the pursuit policies relating expected capture times to the speed and intelligence of the evaders and the sensing capabilities of the pursuers."
]
} |
1904.09186 | 2938020605 | We consider the problem of stable recovery of sparse signals of the form @math from their spectral measurements, known in a bandwidth @math with absolute error not exceeding @math . We consider the case when at most @math nodes @math of @math form a cluster of size @math , while the rest of the nodes are well separated. Provided that @math , we show that the minimax error rate for reconstruction of the cluster nodes is of order @math , while for recovering the corresponding amplitudes @math the rate is of the order @math . Moreover, the corresponding minimax rates for the recovery of the non-clustered nodes and amplitudes are @math and @math , respectively. Our numerical experiments show that the well-known Matrix Pencil method achieves the above accuracy bounds. These results suggest that stable super-resolution is possible in much more general situations than previously thought, and have implications for analyzing stability of super-resolution algorithms in this regime. | Stable super-resolution in the "on-grid" setting of @cite_9 @cite_1 @cite_17 is closely related to the smallest singular value of a certain class of Fourier-type matrices. Using the decimation technique, we have shown in a recent paper @cite_24 that the asymptotic scaling of the condition number for on-grid super-resolution is @math , matching the off-grid setting of the present paper. This result extends and generalizes previously known bounds @cite_36 @cite_10 @cite_43 @cite_39 , as well as recent works @cite_0 @cite_17 .
"cite_N": [
"@cite_36",
"@cite_9",
"@cite_1",
"@cite_39",
"@cite_24",
"@cite_43",
"@cite_0",
"@cite_10",
"@cite_17"
],
"mid": [
"2963373272",
"2114789426",
"2018778093",
"1554357707",
"2892315708",
"2030858745",
"2904908797",
"2060929722",
"2755233754"
],
"abstract": [
"Abstract We derive bounds on the extremal singular values and the condition number of N × K , with N ⩾ K , Vandermonde matrices with nodes in the unit disk. The mathematical techniques we develop to prove our main results are inspired by a link—first established by Selberg [1] and later extended by Moitra [2] —between the extremal singular values of Vandermonde matrices with nodes on the unit circle and large sieve inequalities. Our main conceptual contribution lies in establishing a connection between the extremal singular values of Vandermonde matrices with nodes in the unit disk and a novel large sieve inequality involving polynomials in z ∈ C with | z | ⩽ 1 . Compared to Bazan's upper bound on the condition number [3] , which, to the best of our knowledge, constitutes the only analytical result—available in the literature—on the condition number of Vandermonde matrices with nodes in the unit disk, our bound not only takes a much simpler form, but is also sharper for certain node configurations. Moreover, the bound we obtain can be evaluated consistently in a numerically stable fashion, whereas the evaluation of Bazan's bound requires the solution of a linear system of equations which has the same condition number as the Vandermonde matrix under consideration and can therefore lead to numerical instability in practice. As a byproduct, our result—when particularized to the case of nodes on the unit circle—slightly improves upon the Selberg–Moitra bound.",
"We consider the problem of robustly recovering a @math -sparse coefficient vector from the Fourier series that it generates, restricted to the interval @math . The difficulty of this problem is linked to the superresolution factor SRF, equal to the ratio of the Rayleigh length (inverse of @math ) by the spacing of the grid supporting the sparse vector. In the presence of additive deterministic noise of norm @math , we show upper and lower bounds on the minimax error rate that both scale like @math , providing a partial answer to a question posed by Donoho in 1992. The scaling arises from comparing the noise level to a restricted isometry constant at sparsity @math , or equivalently from comparing @math to the so-called @math -spark of the Fourier system. The proof involves new bounds on the singular values of restricted Fourier matrices, obtained in part from old techniques in complex analysis.",
"Consider the problem of recovering a measure @math supported on a lattice of span @math , when measurements are only available concerning the Fourier Transform @math at frequencies @math . If @math is much smaller than the Nyquist frequency @math and the measurements are noisy, then, in general, stable recovery of @math is impossible. In this paper it is shown that if, in addition, we know that the measure @math satisfies certain sparsity constraints, then stable recovery is possible. Say that a set has Rayleigh index less than or equal to R if in any interval of length @math there are at most R elements. Indeed, if the (unknown) support of @math is known, a priori, to have Rayleigh index at most R, then stable recovery is possible with a stability coefficient that grows at most like @math as @math . This result validates certain practical efforts, in spectroscopy, seismic prospecting, and astrono...",
"The purpose of this paper is to study the conditioning of complex Vandermonde matrices, in reference to applications such as superresolution and the problem of recovering missing samples in band-limited signals. The results include bounds for the singular values of Vandermonde matrices whose nodes are complex numbers on the unit circle. It is shown that, under certain conditions, such matrices can be quite well-conditioned, contrarily to what happens in the real case.",
"We prove sharp lower bounds for the smallest singular value of a partial Fourier matrix with arbitrary \"off the grid\" nodes (equivalently, a rectangular Vandermonde matrix with the nodes on the unit circle), in the case when some of the nodes are separated by less than the inverse bandwidth. The bound is polynomial in the reciprocal of the so-called \"super-resolution factor\", while the exponent is controlled by the maximal number of nodes which are clustered together. This generalizes previously known results for the extreme cases when all of the nodes either form a single cluster, or are completely separated. We briefly discuss possible implications for the theory and practice of super-resolution under sparsity constraints.",
"Let WN=WN(z1,z2, . . . z1) be a rectangular Vandermonde matrix of order n × N, @math with distinct nodes zj in the unit disk and @math as its (j,k) entry. Matrices of this type often arise in frequency estimation and system identification problems. In this paper, the conditioning of WN is analyzed and bounds for the spectral condition number @math are derived. The bounds depend on n, N, and the separation of the nodes. By analyzing the behavior of the bounds as functions of N, we conclude that these matrices may become well conditioned, provided the nodes are close to the unit circle but not extremely close to each other and provided the number of columns of WN is large enough. The asymptotic behavior of both the conditioning itself and the bounds is analyzed and the theoretical results arising from this analysis verified by numerical examples.",
"We prove upper and lower bounds for the spectral condition number of rectangular Vandermonde matrices with nodes on the complex unit circle. The nodes are \"off the grid\", pairs of nodes nearly collide, and the studied condition number grows linearly with the inverse separation distance. We provide reasonable sharp constants that are independent from the number of nodes as long as non-colliding nodes are well-separated.",
"Super-resolution is a fundamental task in imaging, where the goal is to extract fine-grained structure from coarse-grained measurements. Here we are interested in a popular mathematical abstraction of this problem that has been widely studied in the statistics, signal processing and machine learning communities. We exactly resolve the threshold at which noisy super-resolution is possible. In particular, we establish a sharp phase transition for the relationship between the cutoff frequency (m) and the separation (Δ). If m > 1 Δ + 1, our estimator converges to the true values at an inverse polynomial rate in terms of the magnitude of the noise. And when m",
"Super-resolution refers to the process of recovering the locations and amplitudes of a collection of point sources, represented as a discrete measure, given @math of its noisy low-frequency Fourier coefficients. The recovery process is highly sensitive to noise whenever the distance @math between the two closest point sources is less than @math . This paper studies the fundamental difficulty of super-resolution and the performance guarantees of a subspace method called MUSIC in the regime that @math . The most important quantity in our theory is the minimum singular value of the Vandermonde matrix whose nodes are specified by the source locations. Under the assumption that the nodes are closely spaced within several well-separated clumps, we derive a sharp and non-asymptotic lower bound for this quantity. Our estimate is given as a weighted @math sum, where each term only depends on the configuration of each individual clump. This implies that, as the noise increases, the super-resolution capability of MUSIC degrades according to a power law where the exponent depends on the cardinality of the largest clump. Numerical experiments validate our theoretical bounds for the minimum singular value and the resolution limit of MUSIC. When there are @math point sources located on a grid with spacing @math , the fundamental difficulty of super-resolution can be quantitatively characterized by a min-max error, which is the reconstruction error incurred by the best possible algorithm in the worst-case scenario. We show that the min-max error is closely related to the minimum singular value of Vandermonde matrices, and we provide a non-asymptotic and sharp estimate for the min-max error, where the dominant term is @math ."
]
} |
1904.09186 | 2938020605 | We consider the problem of stable recovery of sparse signals of the form @math from their spectral measurements, known in a bandwidth @math with absolute error not exceeding @math . We consider the case when at most @math nodes @math of @math form a cluster of size @math , while the rest of the nodes are well separated. Provided that @math , we show that the minimax error rate for reconstruction of the cluster nodes is of order @math , while for recovering the corresponding amplitudes @math the rate is of the order @math . Moreover, the corresponding minimax rates for the recovery of the non-clustered nodes and amplitudes are @math and @math , respectively. Our numerical experiments show that the well-known Matrix Pencil method achieves the above accuracy bounds. These results suggest that stable super-resolution is possible in much more general situations than previously thought, and have implications for analyzing stability of super-resolution algorithms in this regime. | Available studies of certain high-resolution algorithms such as MUSIC @cite_32 , ESPRIT Matrix Pencil @cite_13 , Approximate Prony Method @cite_2 and others do not provide rigorous performance guarantees in the case @math . Our numerical experiments suggest that the Matrix Pencil is optimal in the high @math regime, and we hope that our proof techniques may be used in deriving the stability limits of these and other methods in the super-resolution regime. The special case of a single cluster can be solved with optimal accuracy by polynomial homotopy methods, as described in @cite_12 , however in order to generalize this algorithm to configurations with non-cluster nodes, we need to know the optimal decimation parameter @math . | {
"cite_N": [
"@cite_13",
"@cite_32",
"@cite_12",
"@cite_2"
],
"mid": [
"2508934708",
"2963746586",
"2963604884",
"1978356474"
],
"abstract": [
"In this paper Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) is developed for spectral estimation with single-snapshot measurement. Stability and resolution analysis with performance guarantee for Single-Snapshot ESPRIT (SS-ESPRIT) is the main focus. In the noise-free case, exact reconstruction is guaranteed for any arbitrary set of frequencies as long as the number of measurement data is at least twice the number of distinct frequencies to be recovered. In the presence of noise and under the assumption that the true frequencies are separated by at least two times Rayleigh's Resolution Length, an explicit error bound for frequency reconstruction is given in terms of the dynamic range and the separation of the frequencies. The separation and sparsity constraint compares favorably with those of the leading approaches to compressed sensing in the continuum.",
"Abstract This paper studies the problem of line spectral estimation in the continuum of a bounded interval with one snapshot of array measurement. The single-snapshot measurement data are turned into a Hankel data matrix which admits the Vandermonde decomposition and is suitable for the MUSIC algorithm. The MUSIC algorithm amounts to finding the null space (the noise space) of the adjoint of the Hankel matrix, forming the noise-space correlation function and identifying the s smallest local minima of the noise-space correlation as the frequency set. In the noise-free case exact reconstruction is guaranteed for any arbitrary set of frequencies as long as the number of measurement data is at least twice the number of distinct frequencies to be recovered. In the presence of noise the stability analysis shows that the perturbation of the noise-space correlation is proportional to the spectral norm of the noise matrix as long as the latter is smaller than the smallest (nonzero) singular value of the noiseless Hankel data matrix. Under the assumption that the true frequencies are separated by at least twice the Rayleigh Length (RL), the stability of the noise-space correlation is proved by means of novel discrete Ingham inequalities which provide bounds on the largest and smallest nonzero singular values of the noiseless Hankel data matrix. The numerical performance of MUSIC is tested in comparison with other algorithms such as BLO-OMP and SDP (TV-min). While BLO-OMP is the stablest algorithm for frequencies separated above 4 RL, MUSIC becomes the best performing one for frequencies separated between 2 RL and 3 RL. Also, MUSIC is more efficient than other methods. MUSIC truly shines when the frequency separation drops to 1 RL or below when all other methods fail. Indeed, the resolution length of MUSIC decreases to zero as noise decreases to zero as a power law with an exponent smaller than an upper bound established by Donoho.",
"Abstract We consider polynomial systems of Prony type, appearing in many areas of mathematics. Their robust numerical solution is considered to be difficult, especially in “near-colliding” situations. We consider a case when the structure of the system is a-priori fixed. We transform the nonlinear part of the Prony system into a Hankel-type polynomial system. Combining this representation with a recently discovered “decimation” technique, we present an algorithm which applies homotopy continuation to an appropriately chosen Hankel-type system as above. In this way, we are able to solve for the nonlinear variables of the original system with high accuracy when the data is perturbed.",
"The recovery of signal parameters from noisy sampled data is a fundamental problem in digital signal processing. In this paper, we consider the following spectral analysis problem: Let f be a real-valued sum of complex exponentials. Determine all parameters of f, i.e., all different frequencies, all coefficients, and the number of exponentials from finitely many equispaced sampled data of f. This is a nonlinear inverse problem. In this paper, we present new results on an approximate Prony method (APM) which is based on [1]. In contrast to [1], we apply matrix perturbation theory such that we can describe the properties and the numerical behavior of the APM in detail. The number of sampled data acts as regularization parameter. The first part of APM estimates the frequencies and the second part solves an overdetermined linear Vandermonde-type system in a stable way. We compare the first part of APM also with the known ESPRIT method. The second part is related to the nonequispaced fast Fourier transform (NFFT). Numerical experiments show the performance of our method."
]
} |
1904.09435 | 2937409658 | Equipping social and service robots with the ability to perceive human emotional intensities during an interaction is in increasing demand. Most existing work focuses on determining which emotion(s) participants are expressing from facial expressions but largely overlooks the emotional intensities spontaneously revealed by other social cues, especially body languages. In this paper, we present a real-time method for robots to capture fluctuations of participants' emotional intensities from their body poses. Unlike conventional joint-position-based approaches, our method adopts local joint transformations as pose descriptors which are invariant to subject body differences as well as the pose sensor positions. In addition, we use a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) architecture to take the specific emotion context into account when estimating emotional intensities from body poses. The dataset evaluation suggests that the proposed method is effective and performs better than the baseline method on the test dataset. Also, a series of subsequent field tests on a physical robot demonstrates that the proposed method effectively estimates subjects' emotional intensities in real-time. Furthermore, the robot equipped with our method is perceived to be more emotion-sensitive and more emotionally intelligent. | Emotional intensity is defined as the psychological state of being activated by an evoking stimulus @cite_16 , which can be measured by bio-electric signals such as heart rate, blood pressure, and skin conductance @cite_33 @cite_10 @cite_31 .
For example, Kulic and Croft leveraged heart rate, perspiration rate, and facial muscle contraction to estimate participants' emotional intensities in terms of anxiety, calm and surprise towards robot arm motions @cite_20 ; Swangnetr and Kaber used heart rate and galvanic skin response signals to estimate the intensity of joy and excitement for elderly patients during medical patient-robot interactions @cite_0 . Saulnier exploited natural muscle tension to estimate participants' stress levels for domestic robots @cite_25 . However, all these studies require placing sensors on participants, e.g., via special wearable devices @cite_19 or electro-physiological monitoring devices @cite_20 @cite_6 , so it may be difficult and sometimes impossible to deploy such biological methods for social or service robots in real-time HRI. | {
"cite_N": [
"@cite_33",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_31",
"@cite_16",
"@cite_10",
"@cite_25",
"@cite_20"
],
"mid": [
"1964280755",
"2111438996",
"2019771222",
"2036081755",
"1977308106",
"2092804630",
"",
"2011606949",
"2000982104"
],
"abstract": [
"While the reasons underlying musical emotions are unclear, music is nevertheless a powerful elicitor of emotion, and as such, may induce autonomic nervous system responses. One typical measure of this neural pathway is the skin conductance response (SCR). This response generally depends upon stimulus arousal, one of the two motivational determinants of emotion. The objective of the present study was to verify whether emotional reactions to music elicit such event-related autonomic responses. To this aim, four musical emotions varying in arousal were employed: fear, happiness, sadness and peacefulness. SCRs were found to be greater with the two more stimulating emotions, fear and happiness, as compared to the two more relaxing emotions, sadness and peacefulness (P < 0.05). In addition, subjects’ ratings of the emotional clarity for each excerpt did not parallel the corresponding SCRs magnitudes. The results show that SCRs can be evoked and modulated by musical emotional arousal, but are not sensitive to emotional clarity. While several studies have been performed with visual scenes and environmental sounds, the present study brings similar evidence from the musical domain.",
"Investigation into robot-assisted intervention for children with autism spectrum disorder (ASD) has gained momentum in recent years. Therapists involved in interventions must overcome the communication impairments generally exhibited by children with ASD by adeptly inferring the affective cues of the children to adjust the intervention accordingly. Similarly, a robot must also be able to understand the affective needs of these children-an ability that the current robot-assisted ASD intervention systems lack-to achieve effective interaction that addresses the role of affective states in human-robot interaction and intervention practice. In this paper, we present a physiology-based affect-inference mechanism for robot-assisted intervention where the robot can detect the affective states of a child with ASD as discerned by a therapist and adapt its behaviors accordingly. This paper is the first step toward developing \"understanding\" robots for use in future ASD intervention. Experimental results with six children with ASD from a proof-of-concept experiment (i.e., a robot-based basketball game) are presented. The robot learned the individual liking level of each child with regard to the game configuration and selected appropriate behaviors to present the task at his/her preferred liking level. Results show that the robot automatically predicted individual liking level in real time with 81.1% accuracy. This is the first time, to our knowledge, that the affective states of children with ASD have been detected via a physiology-based affect recognition technique in real time. This is also the first time that the impact of affect-sensitive closed-loop interaction between a robot and a child with ASD has been demonstrated experimentally.",
"Due to a major shortage of nurses in the U.S., future healthcare service robots are expected to be used in tasks involving direct interaction with patients. Consequently, there is a need to design nursing robots with the capability to detect and respond to patient emotional states and to facilitate positive experiences in healthcare. The objective of this study was to develop a new computational algorithm for accurate patient emotional state classification in interaction with nursing robots during medical service. A simulated medicine delivery experiment was conducted at two nursing homes using a robot with different human-like features. Physiological signals, including heart rate (HR) and galvanic skin response (GSR), as well as subjective ratings of valence (happy-unhappy) and arousal (excited-bored) were collected on elderly residents. A three-stage emotional state classification algorithm was applied to these data, including: (1) physiological feature extraction; (2) statistical-based feature selection; and (3) a machine-learning model of emotional states. A pre-processed HR signal was used. GSR signals were nonstationary and noisy and were further processed using wavelet analysis. A set of wavelet coefficients, representing GSR features, was used as a basis for current emotional state classification. Arousal and valence were significantly explained by statistical features of the HR signal and GSR wavelet features. Wavelet-based de-noising of GSR signals led to an increase in the percentage of correct classifications of emotional states and clearer relationships among the physiological response and arousal and valence. The new algorithm may serve as an effective method for future service robot real-time detection of patient emotional states and behavior adaptation to promote positive healthcare experiences.",
"",
"Differences in blood pressure associated with reported happiness, anger, and anxiety are examined among 90 borderline hypertensives during 24-hr blood pressure monitoring. There were 1152 individual ambulatory blood pressure readings for which subjects classified their emotional state as happy (n =",
"What is the structure of emotion? Emotion is too broad a class of events to be a single scientific category, and no one structure suffices. As an illustration, core affect is distinguished from prototypical emotional episode. Core affect refers to consciously accessible elemental processes of pleasure and activation, has many causes, and is always present. Its structure involves two bipolar dimensions. Prototypical emotional episode refers to a complex process that unfolds over time, involves causally connected subevents (antecedent; appraisal; physiological, affective, and cognitive changes; behavioral response; self-categorization), has one perceived cause, and is rare. Its structure involves categories (anger, fear, shame, jealousy, etc.) vertically organized as a fuzzy hierarchy and horizontally organized as part of a circumplex.",
"",
"Several emerging computer devices read bio-electrical signals (e.g., electro-corticographic signals, skin biopotential or facial muscle tension) and translate them into computer- understandable input. We investigated how one low-cost commercially-available device could be used to control a domestic robot. First, we used the device to issue direct motion commands; while we could control the device somewhat, it proved difficult to do reliably. Second, we interpreted one class of signals as suggestive of emotional stress, and used that as an emotional parameter to influence (but not directly control) robot behaviour. In this case, the robot would react to human stress by staying out of the person's way. Our work suggests that affecting behaviour may be a reasonable way to leverage such devices.",
"In order for humans and robots to interact in an effective and intuitive manner, robots must obtain information about the human affective state in response to the robot's actions. This secondary mode of interactive communication is hypothesized to permit a more natural collaboration, similar to the \"body language\" interaction between two cooperating humans. This paper describes the implementation and validation of a hidden Markov model (HMM) for estimating human affective state in real time, using robot motions as the stimulus. Inputs to the system are physiological signals such as heart rate, perspiration rate, and facial muscle contraction. Affective state was estimated using a two- dimensional valence-arousal representation. A robot manipulator was used to generate motions expected during human-robot interaction, and human subjects were asked to report their response to these motions. The human physiological response was also measured. Robot motions were generated using both a nominal potential field planner and a recently reported safe motion planner that minimizes the potential collision forces along the path. The robot motions were tested with 36 subjects. This data was used to train and validate the HMM model. The results of the HMM affective estimation are also compared to a previously implemented fuzzy inference engine."
]
} |
1904.09191 | 2938269852 | Learning generalizable skills in robotic manipulation has long been challenging due to real-world sized observation and action spaces. One method for addressing this problem is attention focus -- the robot learns where to attend its sensors and irrelevant details are ignored. However, these methods have largely not caught on due to the difficulty of learning a good attention policy and the added partial observability induced by a narrowed window of focus. This article addresses the first issue by constraining gazes to a spatial hierarchy. For the second issue, we identify a case where the partial observability induced by attention does not prevent Q-learning from finding an optimal policy. We conclude with real-robot experiments on challenging pick-place tasks demonstrating the applicability of the approach. | Like several others, we apply RL techniques to the problem of robotic manipulation (see above-mentioned @cite_15 @cite_14 @cite_16 @cite_33 @cite_31 and survey @cite_11 ). RL is appealing for robotic control for several reasons. First, several algorithms (e.g., @cite_2 @cite_26 ) do not require a complete model of the environment. This is of particular relevance to robotics, where the environment is dynamic and difficult to describe exactly. Additionally, observations are often encoded as camera or depth sensor images. Deep Q-Networks (DQN) demonstrated an agent learning difficult tasks (Atari games) where observations were image sequences and actions were discrete @cite_7 . An alternative to DQN that can handle continuous action spaces are actor-critic methods like DDPG @cite_25 . Finally, RL -- which has its roots in optimal control -- provides tools for the analysis of learning optimal behavior (e.g. @cite_5 @cite_6 @cite_29 ), which we refer to in . | {
"cite_N": [
"@cite_31",
"@cite_14",
"@cite_26",
"@cite_33",
"@cite_7",
"@cite_29",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_25",
"@cite_11"
],
"mid": [
"2964001908",
"2963484919",
"134707420",
"2963166883",
"2145339207",
"2397240726",
"2165131254",
"101653561",
"",
"2601066903",
"2962759351",
"2173248099",
"1977655452"
],
"abstract": [
"Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task. The video presenting our experiments is available at https://goo.gl/SMrQnI.",
"",
"",
"We propose a novel formulation of robotic pick and place as a deep reinforcement learning (RL) problem. Whereas most deep RL approaches to robotic manipulation frame the problem in terms of low level states and actions, we propose a more abstract formulation. In this formulation, actions are target reach poses for the hand and states are a history of such reaches. We show this approach can solve a challenging class of pick-place and regrasping problems where the exact geometry of the objects to be handled is unknown. The only information our method requires is: 1) the sensor perception available to the robot at test time; 2) prior knowledge of the general class of objects for which the system was trained. We evaluate our method using objects belonging to two different categories, mugs and bottles, both in simulation and on real hardware. Results show a major improvement relative to a shape primitives baseline.",
"An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.",
"State abstraction (or state aggregation) has been extensively studied in the fields of artificial intelligence and operations research. Instead of working in the ground state space, the decision maker usually finds solutions in the abstract state space much faster by treating groups of states as a unit by ignoring irrelevant state information. A number of abstractions have been proposed and studied in the reinforcement-learning and planning literatures, and positive and negative results are known. We provide a unified treatment of state abstraction for Markov decision processes. We study five particular abstraction schemes, some of which have been proposed in the past in different forms, and analyze their usability for planning and learning.",
"Recent developments in the area of reinforcement learning have yielded a number of new algorithms for the prediction and control of Markovian environments. These algorithms, including the TD(λ) algorithm of Sutton (1988) and the Q-learning algorithm of Watkins (1989), can be motivated heuristically as approximations to dynamic programming (DP). In this paper we provide a rigorous proof of convergence of these DP-based learning algorithms by relating them to the powerful techniques of stochastic approximation theory via a new convergence theorem. The theorem establishes a general class of convergent algorithms to which both TD(λ) and Q-learning belong.",
"",
"",
"We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.",
"In this paper, we explore deep reinforcement learning algorithms for vision-based robotic grasping. Model-free deep reinforcement learning (RL) has been successfully applied to a range of challenging environments, but the proliferation of algorithms makes it difficult to discern which particular approach would be best suited for a rich, diverse task like grasping. To answer this question, we propose a simulated benchmark for robotic grasping that emphasizes off-policy learning and generalization to unseen objects. Off-policy learning enables utilization of grasping data over a wide variety of objects, and diversity is important to enable the method to generalize to new objects that were not seen during training. We evaluate the benchmark tasks against a variety of Q-function estimation methods, a method previously proposed for robotic grasping with deep neural network models, and a novel approach based on a combination of Monte Carlo return estimation and an off-policy correction. Our results indicate that several simple methods provide a surprisingly strong competitor to popular algorithms such as double Q-learning, and our analysis of stability sheds light on the relative tradeoffs between the algorithms. Accompanying video: https://goo.gl/pyMd6p.",
"We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"Reinforcement learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors. Conversely, the challenges of robotic problems provide both inspiration, impact, and validation for developments in reinforcement learning. The relationship between disciplines has sufficient promise to be likened to that between physics and mathematics. In this article, we attempt to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots. We highlight both key challenges in robot reinforcement learning as well as notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our paper lies on the choice between model-based and model-free as well as between value-function-based and policy-search methods. By analyzing a simple problem in some detail we demonstrate how reinforcement learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research."
]
} |
1904.09191 | 2938269852 | Learning generalizable skills in robotic manipulation has long been challenging due to real-world sized observation and action spaces. One method for addressing this problem is attention focus -- the robot learns where to attend its sensors and irrelevant details are ignored. However, these methods have largely not caught on due to the difficulty of learning a good attention policy and the added partial observability induced by a narrowed window of focus. This article addresses the first issue by constraining gazes to a spatial hierarchy. For the second issue, we identify a case where the partial observability induced by attention does not prevent Q-learning from finding an optimal policy. We conclude with real-robot experiments on challenging pick-place tasks demonstrating the applicability of the approach. | Our approach is inspired by models of visual attention. Following the early work of Whitehead and Ballard @cite_12 , we distinguish overt actions (which directly effect change to the environment) from perceptual actions (which retrieve information). Similar to their agent model, our abstract robot has a virtual sensor which can be used to focus attention on task-relevant parts of the scene. The present work updates their methodology to address more realistic problems, and we extend their analysis by describing a situation where an optimal policy can be learned even in the presence of "perceptual aliasing" (i.e., partial observability). Attention mechanisms have also been used with artificial neural networks to identify an object of interest in a 2D image @cite_20 @cite_34 @cite_27 @cite_13 . Our situation is more complex in that we identify 6-DoF poses of the robot's hand. Improved grasp performance has been observed by active control of the robot's sensor @cite_35 @cite_22 . These methods attempt to identify the best sensor placement for grasp success. 
In contrast, our robot learns to control a virtual sensor for the purpose of reducing the complexity of action selection and learning. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_27",
"@cite_34",
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"2963785592",
"2967729293",
"2147527908",
"2141399712",
"603908379",
"2052117683",
"2155844728"
],
"abstract": [
"In grasp detection, the robot estimates the position and orientation of potential grasp configurations directly from sensor data. This paper explores the relationship between viewpoint and grasp detection performance. Specifically, we consider the scenario where the approximate position and orientation of a desired grasp is known in advance and we want to select a viewpoint that will enable a grasp detection algorithm to localize it more precisely and with higher confidence. Our main findings are that the right viewpoint can dramatically increase the number of detected grasps and the classification accuracy of the top-n detections. We use this insight to create a viewpoint selection algorithm and compare it against a random viewpoint selection strategy and a strategy that views the desired grasp head-on. We find that the head-on strategy and our proposed viewpoint selection strategy can improve grasp success rates on a real robot by 8% and 4%, respectively. Moreover, we find that the combination of the two methods can improve grasp success rates by as much as 12%.",
"Camera viewpoint selection is an important aspect of visual grasp detection, especially in clutter where many occlusions are present. Where other approaches use a static camera position or fixed data collection routines, our Multi-View Picking (MVP) controller uses an active perception approach to choose informative viewpoints based directly on a distribution of grasp pose estimates in real time, reducing uncertainty in the grasp poses caused by clutter and occlusions. In trials of grasping 20 objects from clutter, our MVP controller achieves 80% grasp success, outperforming a single-viewpoint grasp detector by 12%. We also show that our approach is both more accurate and more efficient than approaches which consider multiple fixed viewpoints. Code is available at https://github.com/dougsm/mvp_grasp",
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.",
"We describe a model based on a Boltzmann machine with third-order connections that can learn how to accumulate information about a shape over several fixations. The model uses a retina that only has enough high resolution pixels to cover a small area of the image, so it must decide on a sequence of fixations and it must combine the \"glimpse\" at each fixation with the location of the fixation before integrating the information with information from other glimpses of the same object. We evaluate this model on a synthetic dataset and two image classification datasets, showing that it can perform at least as well as a model trained on whole images.",
"Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.",
"This article considers adaptive control architectures that integrate active sensory-motor systems with decision systems based on reinforcement learning. One unavoidable consequence of active perception is that the agent's internal representation often confounds external world states. We call this phenomenon perceptual aliasing and show that it destabilizes existing reinforcement learning algorithms with respect to the optimal decision policy. We then describe a new decision system that overcomes these difficulties for a restricted class of decision problems. The system incorporates a perceptual subcycle within the overall decision cycle and uses a modified learning algorithm to suppress the effects of perceptual aliasing. The result is a control architecture that learns not only how to solve a task but also where to focus its visual attention in order to collect necessary sensory information.",
"Recent eye tracking studies in natural tasks suggest that there is a tight link between eye movements and goal directed motor actions. However, most existing models of human eye movements provide a bottom up account that relates visual attention to attributes of the visual scene. The purpose of this paper is to introduce a new model of human eye movements that directly ties eye movements to the ongoing demands of behavior. The basic idea is that eye movements serve to reduce uncertainty about environmental variables that are task relevant. A value is assigned to an eye movement by estimating the expected cost of the uncertainty that will result if the movement is not made. If there are several candidate eye movements, the one with the highest expected value is chosen. The model is illustrated using a humanoid graphic figure that navigates on a sidewalk in a virtual urban environment. Simulations show our protocol is superior to a simple round robin scheduling mechanism."
]
} |
1904.09286 | 2940024477 | Even as pre-trained language encoders such as BERT are shared across many tasks, the output layers of question answering and text classification models are significantly different. Span decoders are frequently used for question answering and fixed-class, classification layers for text classification. We show that this distinction is not necessary, and that both can be unified as span extraction. A unified, span-extraction approach leads to superior or comparable performance in multi-task learning, low-data and supplementary supervised pretraining experiments on several text classification and question answering benchmarks. | The use of pre-trained encoders for transfer learning in NLP dates back to @cite_2 @cite_3 but has had a resurgence in the recent past. BERT employs the recently proposed Transformer layers in conjunction with a masked language modeling objective as a pre-trained sentence encoder. Prior to BERT, contextualized word vectors were pre-trained using machine translation data and transferred to text classification and question answering tasks. ELMO improved contextualized word vectors by using a language modeling objective instead of machine translation. ULMFit and GPT showed how traditional, causal language models could be fine-tuned directly for a specific task, and GPT-2 showed that such language models can indirectly learn tasks like machine translation, question answering, and summarization. | {
"cite_N": [
"@cite_3",
"@cite_2"
],
"mid": [
"2427527485",
"2117130368"
],
"abstract": [
"We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at this https URL",
"We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance."
]
} |
1904.09366 | 2938495708 | Optimal planning with respect to learned neural network (NN) models in continuous action and state spaces using mixed-integer linear programming (MILP) is a challenging task for branch-and-bound solvers due to the poor linear relaxation of the underlying MILP model. For a given set of features, potential heuristics provide an efficient framework for computing bounds on cost (reward) functions. In this paper, we model the problem of finding an optimal potential heuristic for learned NN models as a bilevel program, and solve it using a novel finite-time constraint generation algorithm. We then strengthen the linear relaxation of the underlying MILP model by introducing constraints to bound the reward function based on the precomputed reward potentials. Experimentally, we show that our algorithm efficiently computes reward potentials for learned NN models, and the overhead of computing reward potentials is justified by the overall strengthening of the underlying MILP model for the task of planning over long horizons. | In this paper, we have focused on the important problem of improving the efficiency of B &B solvers for optimal planning with learned NN transition models in continuous action and state spaces. Parallel to this work, planning and decision making in discrete action and state spaces @cite_2 @cite_12 @cite_0 , verification of learned NNs @cite_13 @cite_18 @cite_10 @cite_4 , robustness evaluation of learned NNs @cite_15 and defenses to adversarial attacks for learned NNs @cite_9 have been studied with the focus of solving very similar decision making problems. For example, the verification problem solved by Reluplex @cite_13 (an SMT-based learned NN verification tool) is very similar to the planning problem solved by HD-MILP-Plan @cite_7 without the objective function and horizon @math . 
Interestingly, the verification problem can also be modeled as an optimization problem @cite_16 and potentially benefit from the findings presented in this paper. For future work, we plan to explore how our findings in this work translate to solving other important tasks for learned neural networks. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_10",
"@cite_9",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_12"
],
"mid": [
"2963054787",
"2963673089",
"2741534749",
"2543296129",
"2766462876",
"2903535193",
"2276412021",
"2807040120",
"2950183737",
"2594877703",
"2808541151"
],
"abstract": [
"We present an approach for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function. Such networks are often used in deep learning and have been shown to be hard to verify for modern satisfiability modulo theory (SMT) and integer linear programming (ILP) solvers.",
"",
"",
"Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on safety of image classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions that would result in the same class being assigned by a human, and define safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness.",
"We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations (on the training data; for previously unseen examples, the approach will be guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well). The basic idea of the approach is to consider a convex outer approximation of the set of activations reachable through a norm-bounded perturbation, and we develop a robust optimization procedure that minimizes the worst case loss over this outer region (via a linear program). Crucially, we show that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss. The end result is that by executing a few more forward and backward passes through a slightly modified version of the original network (though possibly with much larger batch sizes), we can learn a classifier that is provably robust to any norm-bounded adversarial attack. We illustrate the approach on a toy 2D robust classification task, and on a simple convolutional architecture applied to MNIST, where we produce a classifier that provably has less than 8.4% test error for any adversarial attack with bounded @math norm less than @math . This represents the largest verified network that we are aware of, and we discuss future challenges in scaling the approach to much larger domains.",
"In this paper, we leverage the efficiency of Binarized Neural Networks (BNNs) to learn complex state transition models of planning domains with discretized factored state and action spaces. In order to directly exploit this transition structure for planning, we present two novel compilations of the learned factored planning problem with BNNs based on reductions to Weighted Partial Maximum Boolean Satisfiability (FD-SAT-Plan+) as well as Binary Linear Programming (FD-BLP-Plan+). Theoretically, we show that our SAT-based Bi-Directional Neuron Activation Encoding is asymptotically the most compact encoding in the literature and maintains the generalized arc-consistency property through unit propagation -- an important property that facilitates efficiency in SAT solvers. Experimentally, we validate the computational efficiency of our Bi-Directional Neuron Activation Encoding in comparison to an existing neuron activation encoding and demonstrate the effectiveness of learning complex transition models with BNNs. We test the runtime efficiency of both FD-SAT-Plan+ and FD-BLP-Plan+ on the learned factored planning problem showing that FD-SAT-Plan+ scales better with increasing BNN size and complexity. Finally, we present a finite-time incremental constraint generation algorithm based on generalized landmark constraints to improve the planning accuracy of our encodings through simulated or real-world interaction.",
"This paper discusses a new method to perform propagation over a (two-layer, feed-forward) Neural Network embedded in a Constraint Programming model. The method is meant to be employed in Empirical Model Learning, a technique designed to enable optimal decision making over systems that cannot be modeled via conventional declarative means. The key step in Empirical Model Learning is to embed a Machine Learning model into a combinatorial model. It has been showed that Neural Networks can be embedded in a Constraint Programming model by simply encoding each neuron as a global constraint, which is then propagated individually. Unfortunately, this decomposition approach may lead to weak bounds. To overcome such limitation, we propose a new network-level propagator based on a non-linear Lagrangian relaxation that is solved with a subgradient algorithm. The method proved capable of dramatically reducing the search tree size on a thermal-aware dispatching problem on multicore CPUs. The overhead for optimizing the Lagrangian multipliers is kept within a reasonable level via a few simple techniques. This paper is an extended version of [27], featuring an improved structure, a new filtering technique for the network inputs, a set of overhead reduction techniques, and a thorough experimentation.",
"Neural networks have demonstrated considerable success on a wide variety of real-world problems. However, neural networks can be fooled by adversarial examples – slightly perturbed inputs that are misclassified with high confidence. Verification of networks enables us to gauge their vulnerability to such adversarial examples. We formulate verification of piecewise-linear neural networks as a mixed integer program. Our verifier finds minimum adversarial distortions two to three orders of magnitude more quickly than the state-of-the-art. We achieve this via tight formulations for non-linearities, as well as a novel presolve algorithm that makes full use of all information available. The computational speedup enables us to verify properties on convolutional networks with an order of magnitude more ReLUs than had been previously verified by any complete verifier, and we determine for the first time the exact adversarial accuracy of an MNIST classifier to perturbations with bounded l∞ norm ε = 0.1. On this network, we find an adversarial example for 4.38% of samples, and a certificate of robustness for the remainder. Across a variety of robust training procedures, we are able to certify more samples than the state-of-the-art and find more adversarial examples than a strong first-order attack for every network.",
"The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models to behave as black boxes and the theoretical hardness of proving their properties, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure and taking insights from formal methods such as Satisfiability Modulo Theory. These methods are however still far from scaling to realistic neural networks. To facilitate progress on this crucial area, we make two key contributions. First, we present a unified framework that encompasses previous methods. This analysis results in the identification of new methods that combine the strengths of multiple existing approaches, accomplishing a speedup of two orders of magnitude compared to the previous state of the art. Second, we propose a new data set of benchmarks which includes a collection of previously released testcases. We use the benchmark to provide the first experimental comparison of existing algorithms and identify the factors impacting the hardness of verification problems.",
"Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.",
""
]
} |
1904.09092 | 2939571759 | Semantic segmentation, a pixel-level vision task, is rapidly developed by using convolutional neural networks (CNNs). Training CNNs requires a large amount of labeled data, but manually annotating data is difficult. For emancipating manpower, in recent years, some synthetic datasets are released. However, they are still different from real scenes, which causes that training a model on the synthetic data (source domain) cannot achieve a good performance on real urban scenes (target domain). In this paper, we propose a weakly supervised adversarial domain adaptation to improve the segmentation performance from synthetic data to real scenes, which consists of three deep neural networks. A detection and segmentation (DS) model focuses on detecting objects and predicting segmentation map; a pixel-level domain classifier (PDC) tries to distinguish the image features from which domains; and an object-level domain classifier (ODC) discriminates the objects from which domains and predicts object classes. PDC and ODC are treated as the discriminators, and DS is considered as the generator. By the adversarial learning, DS is supposed to learn domain-invariant features. In experiments, our proposed method yields the new record of mIoU metric in the same problem. | In 2014, the fully convolutional network (FCN), a fully supervised method proposed by Long @cite_10, achieves a significant improvement on pixel-wise tasks (such as semantic segmentation, saliency detection, and crowd density estimation). After that, more and more FCN-based methods @cite_26 @cite_17 @cite_36 @cite_21 @cite_13 @cite_20 are presented. Zheng @cite_26 propose an interpretation of dense conditional random fields as recurrent neural networks, which is appended to the top of an FCN. SegNet @cite_17 and U-Net @cite_36 develop symmetrical encoder-decoder architectures to improve the quality of output maps.
Yu and Koltun @cite_21 propose a dilated convolution operation to aggregate multi-scale contextual information. Zhao @cite_13 design a pyramid pooling module in an FCN to exploit the capability of global context information. He @cite_2 propose supervised multi-task learning for instance segmentation, which does not segment background objects. Wang @cite_20 present an FCN that combines RGB images and contour information for road region segmentation. | {
"cite_N": [
"@cite_13",
"@cite_26",
"@cite_36",
"@cite_21",
"@cite_2",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"2560023338",
"2124592697",
"1901129140",
"2286929393",
"",
"1903029394",
"2764012408",
"2963881378"
],
"abstract": [
"Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.",
"Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/.",
"State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.",
"",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"Road detection from the perspective of moving vehicles is a challenging issue in autonomous driving. Recently, many deep learning methods spring up for this task, because they can extract high-level local features to find road regions from raw RGB data, such as convolutional neural networks and fully convolutional networks (FCNs). However, how to detect the boundary of road accurately is still an intractable problem. In this paper, we propose siamesed FCNs (named “s-FCN-loc”), which is able to consider RGB-channel images, semantic contours, and location priors simultaneously to segment the road region elaborately. To be specific, the s-FCN-loc has two streams to process the original RGB images and contour maps, respectively. At the same time, the location prior is directly appended to the siamesed FCN to promote the final detection performance. Our contributions are threefold: 1) An s-FCN-loc is proposed that learns more discriminative features of road boundaries than the original FCN to detect more accurate road regions. 2) Location prior is viewed as a type of feature map and directly appended to the final feature map in s-FCN-loc to promote the detection performance effectively, which is easier than other traditional methods, namely, different priors for different inputs (image patches). 3) The convergent speed of training s-FCN-loc model is 30% faster than the original FCN because of the guidance of highly structured contours. The proposed approach is evaluated on the KITTI road detection benchmark and one-class road detection data set, and achieves a competitive result with the state of the arts.",
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/."
]
} |
1904.09092 | 2939571759 | Semantic segmentation, a pixel-level vision task, is rapidly developed by using convolutional neural networks (CNNs). Training CNNs requires a large amount of labeled data, but manually annotating data is difficult. For emancipating manpower, in recent years, some synthetic datasets are released. However, they are still different from real scenes, which causes that training a model on the synthetic data (source domain) cannot achieve a good performance on real urban scenes (target domain). In this paper, we propose a weakly supervised adversarial domain adaptation to improve the segmentation performance from synthetic data to real scenes, which consists of three deep neural networks. A detection and segmentation (DS) model focuses on detecting objects and predicting segmentation map; a pixel-level domain classifier (PDC) tries to distinguish the image features from which domains; and an object-level domain classifier (ODC) discriminates the objects from which domains and predicts object classes. PDC and ODC are treated as the discriminators, and DS is considered as the generator. By the adversarial learning, DS is supposed to learn domain-invariant features. In experiments, our proposed method yields the new record of mIoU metric in the same problem. | Recently, some weakly supervised methods @cite_41 @cite_37 @cite_23 @cite_30 @cite_22 have been presented to save the cost of annotating ground truth. Papandreou @cite_41 adopt online EM (Expectation-Maximization) methods to train segmentation models from image-level and bounding-box labels. @cite_37 @cite_23 apply a progressive learning strategy to train DCNNs from image-level annotations. Souly @cite_30 apply Generative Adversarial Networks (GANs) in which a generator network provides extra training data to a classifier. Oh @cite_22 exploit saliency features as additional knowledge, mining prior information on the object extent and image statistics to segment object regions.
Note that the above-mentioned weakly supervised methods do not focus on labeling full scenes; they aim to segment salient foreground objects in simple scenes. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_22",
"@cite_41",
"@cite_23"
],
"mid": [
"2778764040",
"2133515615",
"2585521554",
"1529410181",
"2606129492"
],
"abstract": [
"Semantic segmentation has been a long standing challenging task in computer vision. It aims at assigning a label to each image pixel and needs a significant number of pixel-level annotated data, which is often unavailable. To address this lack of annotations, in this paper, we leverage, on one hand, a massive amount of available unlabeled or weakly labeled data, and on the other hand, non-real images created through Generative Adversarial Networks. In particular, we propose a semi-supervised framework – based on Generative Adversarial Networks (GANs) – which consists of a generator network to provide extra training examples to a multi-class classifier, acting as discriminator in the GAN framework, that assigns sample a label y from the K possible classes or marks it as a fake sample (extra class). The underlying idea is that adding large fake visual data forces real samples to be close in the feature space, which, in turn, improves multiclass pixel classification. To ensure a higher quality of generated images by GANs with consequently improved pixel classification, we extend the above framework by adding weakly annotated data, i.e., we provide class level information to the generator. We test our approaches on several challenging benchmarking visual datasets, i.e. PASCAL, SiftFlow, Stanford and CamVid, achieving competitive performance compared to state-of-the-art semantic segmentation methods.",
"Recently, significant improvement has been made on semantic object segmentation due to the development of deep convolutional neural networks (DCNNs). Training such a DCNN usually relies on a large number of images with pixel-level segmentation masks, and annotating these images is very costly in terms of both finance and human effort. In this paper, we propose a simple to complex (STC) framework in which only image-level annotations are utilized to learn DCNNs for semantic segmentation. Specifically, we first train an initial segmentation network called Initial-DCNN with the saliency maps of simple images (i.e., those with a single category of major object(s) and clean background). These saliency maps can be automatically obtained by existing bottom-up salient object detection techniques, where no supervision information is needed. Then, a better network called Enhanced-DCNN is learned with supervision from the predicted segmentation masks of simple images based on the Initial-DCNN as well as the image-level annotations. Finally, more pixel-level segmentation masks of complex images (two or more categories of objects with cluttered background), which are inferred by using Enhanced-DCNN and image-level annotations, are utilized as the supervision information to learn the Powerful-DCNN for semantic segmentation. Our method utilizes 40K simple images from Flickr.com and 10K complex images from PASCAL VOC for step-wisely boosting the segmentation network. Extensive experimental results on PASCAL VOC 2012 segmentation benchmark well demonstrate the superiority of the proposed STC framework compared with other state-of-the-arts.",
"There have been remarkable improvements in the semantic labelling task in the recent years. However, the state of the art methods rely on large-scale pixel-level annotations. This paper studies the problem of training a pixel-wise semantic labeller network from image-level annotations of the present object classes. Recently, it has been shown that high quality seeds indicating discriminative object regions can be obtained from image-level labels. Without additional information, obtaining the full extent of the object is an inherently ill-posed problem due to co-occurrences. We propose using a saliency model as additional information and hereby exploit prior knowledge on the object extent and image statistics. We show how to combine both information sources in order to recover 80 of the fully supervised performance – which is the new state of the art in weakly supervised training for pixel-wise semantic labelling.",
"Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at this https URL",
"We propose a weakly supervised semantic segmentation algorithm that uses image tags for supervision. We apply the tags in queries to collect three sets of web images, which encode the clean foregrounds, the common backgrounds, and realistic scenes of the classes. We introduce a novel three-stage training pipeline to progressively learn semantic segmentation models. We first train and refine a class-specific shallow neural network to obtain segmentation masks for each class. The shallow neural networks of all classes are then assembled into one deep convolutional neural network for end-to-end training and testing. Experiments show that our method notably outperforms previous state-of-the-art weakly supervised semantic segmentation approaches on the PASCAL VOC 2012 segmentation benchmark. We further apply the class-specific shallow neural networks to object segmentation and obtain excellent results."
]
} |
1904.09092 | 2939571759 | Semantic segmentation, a pixel-level vision task, is rapidly developed by using convolutional neural networks (CNNs). Training CNNs requires a large amount of labeled data, but manually annotating data is difficult. For emancipating manpower, in recent years, some synthetic datasets are released. However, they are still different from real scenes, which causes that training a model on the synthetic data (source domain) cannot achieve a good performance on real urban scenes (target domain). In this paper, we propose a weakly supervised adversarial domain adaptation to improve the segmentation performance from synthetic data to real scenes, which consists of three deep neural networks. A detection and segmentation (DS) model focuses on detecting objects and predicting segmentation map; a pixel-level domain classifier (PDC) tries to distinguish the image features from which domains; and an object-level domain classifier (ODC) discriminates the objects from which domains and predicts object classes. PDC and ODC are treated as the discriminators, and DS is considered as the generator. By the adversarial learning, DS is supposed to learn domain-invariant features. In experiments, our proposed method yields the new record of mIoU metric in the same problem. | There are two main streams of research on domain adaptation. Some methods @cite_31 @cite_9 @cite_25 @cite_14 @cite_8 attempt to minimize the domain gap via adversarial training. @cite_31 @cite_9 @cite_25 propose a Domain-Adversarial Neural Network, which minimizes the domain classification loss. Ghifary @cite_14 propose a Deep Reconstruction-Classification Network (DRCN), which jointly learns supervised classification of labeled source data and unsupervised reconstruction of unlabeled target-domain images. Tzeng @cite_8 present a generalized framework for adversarial adaptation, which helps us understand the benefits and key ideas of GAN-based methods. | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_9",
"@cite_31",
"@cite_25"
],
"mid": [
"2478454054",
"2593768305",
"2963826681",
"2214409633",
"1731081199"
],
"abstract": [
"In this paper, we propose a novel unsupervised domain adaptation algorithm based on deep learning for visual object recognition. Specifically, we design a new model called Deep Reconstruction-Classification Network (DRCN), which jointly learns a shared encoding representation for two tasks: (i) supervised classification of labeled source data, and (ii) unsupervised reconstruction of unlabeled target data. In this way, the learnt representation not only preserves discriminability, but also encodes useful information from the target domain. Our new DRCN model can be optimized by using backpropagation similarly as the standard neural networks.",
"Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.",
"Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard back propagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.",
"We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application."
]
} |
1904.09092 | 2939571759 | Semantic segmentation, a pixel-level vision task, is rapidly developed by using convolutional neural networks (CNNs). Training CNNs requires a large amount of labeled data, but manually annotating data is difficult. For emancipating manpower, in recent years, some synthetic datasets are released. However, they are still different from real scenes, which causes that training a model on the synthetic data (source domain) cannot achieve a good performance on real urban scenes (target domain). In this paper, we propose a weakly supervised adversarial domain adaptation to improve the segmentation performance from synthetic data to real scenes, which consists of three deep neural networks. A detection and segmentation (DS) model focuses on detecting objects and predicting segmentation map; a pixel-level domain classifier (PDC) tries to distinguish the image features from which domains; and an object-level domain classifier (ODC) discriminates the objects from which domains and predicts object classes. PDC and ODC are treated as the discriminators, and DS is considered as the generator. By the adversarial learning, DS is supposed to learn domain-invariant features. In experiments, our proposed method yields the new record of mIoU metric in the same problem. | Other methods @cite_15 @cite_7 @cite_19 @cite_0 adopt the Maximum Mean Discrepancy (MMD) @cite_33 to alleviate domain shift. MMD measures the difference between features extracted from each domain. Tzeng @cite_15 computes the MMD loss at one layer, and Long @cite_7 minimizes MMD losses at multiple layers of a Deep Adaptation Network. Bousmalis @cite_19 propose Domain Separation Networks (DSN) to learn domain-invariant features by explicitly separating representations private to each domain. Further, Long @cite_0 combines Joint Adaptation Networks (JAN) with an adversarial training strategy. | {
"cite_N": [
"@cite_33",
"@cite_7",
"@cite_0",
"@cite_19",
"@cite_15"
],
"mid": [
"2212660284",
"2159291411",
"2964278684",
"2511131004",
"1565327149"
],
"abstract": [
"We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD).We present two distribution free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.",
"Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multikernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.",
"Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets.",
"The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We hypothesize that explicitly modeling what is unique to each domain can improve a model's ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained to not only perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process.",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task."
]
} |
1904.09092 | 2939571759 | Semantic segmentation, a pixel-level vision task, is rapidly developed by using convolutional neural networks (CNNs). Training CNNs requires a large amount of labeled data, but manually annotating data is difficult. For emancipating manpower, in recent years, some synthetic datasets are released. However, they are still different from real scenes, which causes that training a model on the synthetic data (source domain) cannot achieve a good performance on real urban scenes (target domain). In this paper, we propose a weakly supervised adversarial domain adaptation to improve the segmentation performance from synthetic data to real scenes, which consists of three deep neural networks. A detection and segmentation (DS) model focuses on detecting objects and predicting segmentation map; a pixel-level domain classifier (PDC) tries to distinguish the image features from which domains; and an object-level domain classifier (ODC) discriminates the objects from which domains and predicts object classes. PDC and ODC are treated as the discriminators, and DS is considered as the generator. By the adversarial learning, DS is supposed to learn domain-invariant features. In experiments, our proposed method yields the new record of mIoU metric in the same problem. | Hoffman @cite_27 first propose an unsupervised domain adaptation method for segmentation, which combines global and category adaptation in adversarial learning. It effectively reduces the domain gap at the pixel level. Zhang @cite_24 adopt a curriculum-style domain adaptation and predict global and local label distributions at the image and superpixel levels, respectively. | {
"cite_N": [
"@cite_24",
"@cite_27"
],
"mid": [
"2963998559",
"2562192638"
],
"abstract": [
"During the last half decade, convolutional neural networks (CNNs) have triumphed over semantic segmentation, which is a core task of various emerging industrial applications such as autonomous driving and medical imaging. However, to train CNNs requires a huge amount of data, which is difficult to collect and laborious to annotate. Recent advances in computer graphics make it possible to train CNN models on photo-realistic synthetic data with computer-generated annotations. Despite this, the domain mismatch between the real images and the synthetic data significantly decreases the models’ performance. Hence we propose a curriculum-style learning approach to minimize the domain gap in semantic segmentation. The curriculum domain adaptation solves easy tasks first in order to infer some necessary properties about the target domain; in particular, the first task is to learn global label distributions over images and local distributions over landmark superpixels. These are easy to estimate because images of urban traffic scenes have strong idiosyncrasies (e.g., the size and spatial relations of buildings, streets, cars, etc.). We then train the segmentation network in such a way that the network predictions in the target domain follow those inferred properties. In experiments, our method significantly outperforms the baselines as well as the only known existing approach to the same problem.",
"Fully convolutional models for dense prediction have proven successful for a wide range of visual tasks. Such models perform well in a supervised setting, but performance can be surprisingly poor under domain shifts that appear mild to a human observer. For example, training on one city and testing on another in a different geographic region and or weather condition may result in significantly degraded performance due to pixel-level distribution shift. In this paper, we introduce the first domain adaptive semantic segmentation method, proposing an unsupervised adversarial approach to pixel prediction problems. Our method consists of both global and category specific adaptation techniques. Global domain alignment is performed using a novel semantic segmentation network with fully convolutional domain adversarial learning. This initially adapted space then enables category specific adaptation through a generalization of constrained weak learning, with explicit transfer of the spatial layout from the source to the target domains. Our approach outperforms baselines across different settings on multiple large-scale datasets, including adapting across various real city environments, different synthetic sub-domains, from simulated to real environments, and on a novel large-scale dash-cam dataset."
]
} |
1904.09090 | 2938315635 | Artificial neural networks (ANNs) have become the driving force behind recent artificial intelligence (AI) research. An important problem with implementing a neural network is the design of its architecture. Typically, such an architecture is obtained manually by exploring its hyperparameter space and kept fixed during training. This approach is both time-consuming and inefficient. Furthermore, modern neural networks often contain millions of parameters, whereas many applications require small inference models. Also, while ANNs have found great success in big-data applications, there is also significant interest in using ANNs for medium- and small-data applications that can be run on energy-constrained edge devices. To address these challenges, we propose a neural network synthesis methodology (SCANN) that can generate very compact neural networks without loss in accuracy for small and medium-size datasets. We also use dimensionality reduction methods to reduce the feature size of the datasets, so as to alleviate the curse of dimensionality. Our final synthesis methodology consists of three steps: dataset dimensionality reduction, neural network compression in each layer, and neural network compression with SCANN. We evaluate SCANN on the medium-size MNIST dataset by comparing our synthesized neural networks to the well-known LeNet-5 baseline. Without any loss in accuracy, SCANN generates a @math smaller network than the LeNet-5 Caffe model. We also evaluate the efficiency of using dimensionality reduction alongside SCANN on nine small to medium-size datasets. Using this methodology enables us to reduce the number of connections in the network by up to @math (geometric mean: @math ), with little to no drop in accuracy. We also show that our synthesis methodology yields neural networks that are much better at navigating the accuracy vs. energy efficiency space. | The high dimensionality of many datasets used in various applications of machine learning leads to the curse of dimensionality problem. Therefore, researchers have explored dimensionality reduction methods to improve the performance of machine learning models by decreasing the number of features. Traditional dimensionality reduction methods include Principal Component Analysis (PCA), Kernel PCA, Factor Analysis (FA), Independent Component Analysis (ICA), as well as Spectral Embedding methods. Some graph-based methods include Isomap @cite_29 and Maximum Variance Unfolding @cite_25 . FeatureNet @cite_13 uses community detection in small sample size datasets to map high-dimensional data to lower dimensions. Other dimensionality reduction methods include stochastic proximity embedding (SPE) @cite_14 , Linear Discriminant Analysis (LDA), and t-distributed Stochastic Neighbor Embedding (t-SNE) @cite_7 . A detailed survey of dimensionality reduction methods can be found in @cite_40 . | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_29",
"@cite_40",
"@cite_13",
"@cite_25"
],
"mid": [
"2134738108",
"2187089797",
"2001141328",
"",
"2807933484",
"198244778"
],
"abstract": [
"We introduce stochastic proximity embedding (SPE), a novel self-organizing algorithm for producing meaningful underlying dimensions from proximity data. SPE attempts to generate low-dimensional Euclidean embeddings that best preserve the similarities between a set of related observations. The method starts with an initial configuration, and iteratively refines it by repeatedly selecting pairs of objects at random, and adjusting their coordinates so that their distances on the map match more closely their respective proximities. The magnitude of these adjustments is controlled by a learning rate parameter, which decreases during the course of the simulation to avoid oscillatory behavior. Unlike classical multidimensional scaling (MDS) and nonlinear mapping (NLM), SPE scales linearly with respect to sample size, and can be applied to very large data sets that are intractable by conventional embedding procedures. The method is programmatically simple, robust, and convergent, and can be applied to a wide range of scientific problems involving exploratory data analysis and visualization.",
"We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.",
"Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 106 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.",
"",
"Real world networks constructed from raw data are often characterized by complex community structures. Existing dimensionality reduction techniques, however, do not take such characteristics into account. This is especially important for problems with low number of samples where the curse of dimensionality is particularly significant. Therefore, in this paper, we propose FeatureNet, a novel community-based dimensionality reduction framework targeting small sample problems. To this end, we propose a new method to directly construct a network from high-dimensional raw data while explicitly revealing its hidden community structure; these communities are then used to learn low-dimensional features using a representation learning framework. We show the effectiveness of our approach on eight datasets covering application areas as diverse as handwritten digits, biology, physical sciences, NLP, and computational sustainability. Extensive experiments on the above datasets (with sizes mostly between 100 and 1500 samples) demonstrate that FeatureNet significantly outperforms (i.e., up to 40 improvement in classification accuracy) ten well-known dimensionality reduction methods like PCA, Kernel PCA, Isomap, SNE, t-SNE, etc.",
"Many problems in AI are simplified by clever representations of sensory or symbolic input. How to discover such representations automatically, from large amounts of unlabeled data, remains a fundamental challenge. The goal of statistical methods for dimensionality reduction is to detect and discover low dimensional structure in high dimensional data. In this paper, we review a recently proposed algorithm-- maximum, variance unfolding--for learning faithful low dimensional representations of high dimensional data. The algorithm relies on modem tools in convex optimization that are proving increasingly useful in many areas of machine learning."
]
} |
1904.09099 | 2936215687 | In this paper, a new deep learning architecture for stereo disparity estimation is proposed. The proposed atrous multiscale network (AMNet) adopts an efficient feature extractor with depthwise-separable convolutions and an extended cost volume that deploys novel stereo matching costs on the deep features. A stacked atrous multiscale network is proposed to aggregate rich multiscale contextual information from the cost volume which allows for estimating the disparity with high accuracy at multiple scales. AMNet can be further modified to be a foreground-background aware network, FBA-AMNet, which is capable of discriminating between the foreground and the background objects in the scene at multiple scales. An iterative multitask learning method is proposed to train FBA-AMNet end-to-end. The proposed disparity estimation networks, AMNet and FBA-AMNet, show accurate disparity estimates and advance the state of the art on the challenging Middlebury, KITTI 2012, KITTI 2015, and Sceneflow stereo disparity estimation benchmarks. | There has been significant interest in improving the extraction of contextual information using deep neural networks for better image understanding. The earlier methods used multiscale inputs from an image pyramid @cite_17 @cite_16 @cite_25 @cite_23 or implemented probabilistic graphical models @cite_35 @cite_36 . Recently, models with spatial pyramid pooling (SPP) @cite_24 and encoder-decoder structure have shown great improvements in various computer vision tasks. @cite_9 proposed the PSPNet which performs SPP at different grid scales. @cite_41 @cite_27 applied atrous convolutions to the SPP module (ASPP) to process the feature maps using several parallel atrous convolutions with different dilation factors. @cite_39 designed a stacked hourglass module which stacks an encoder-decoder module three times with shortcut connections to aggregate multiscale contextual information. @cite_38 further developed the DeepLab v3+ model that combined the ideas of encoder-decoder architecture and ASPP. | {
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_36",
"@cite_41",
"@cite_9",
"@cite_39",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_16",
"@cite_25",
"@cite_17"
],
"mid": [
"2161236525",
"2787091153",
"",
"2412782625",
"2560023338",
"2307770531",
"2179352600",
"2158865742",
"1546771929",
"",
"2963563573",
"2022508996"
],
"abstract": [
"Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.",
"Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0 and 82.1 without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at this https URL .",
"",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.",
"This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.",
"Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms average- and max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.",
"The goal of the scene labeling task is to assign a class label to each pixel in an image. To ensure a good visual coherence and a high class accuracy, it is essential for a model to capture long range (pixel) label dependencies in images. In a feed-forward architecture, this can be achieved simply by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach that consists of a recurrent convolutional neural network which allows us to consider a large input context while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation technique nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.",
"",
"Recent advances in semantic image segmentation have mostly been achieved by training deep convolutional neural networks (CNNs). We show how to improve semantic segmentation through the use of contextual information, specifically, we explore 'patch-patch' context between image regions, and 'patch-background' context. For learning from the patch-patch context, we formulate Conditional Random Fields (CRFs) with CNN-based pairwise potential functions to capture semantic correlations between neighboring patches. Efficient piecewise training of the proposed deep structured model is then applied to avoid repeated expensive CRF inference for back propagation. For capturing the patch-background context, we show that a network design with traditional multi-scale image input and sliding pyramid pooling is effective for improving performance. Our experimental results set new state-of-the-art performance on a number of popular semantic segmentation datasets, including NYUDv2, PASCAL VOC 2012, PASCAL-Context, and SIFT-flow. In particular, we achieve an intersection-overunion score of 78:0 on the challenging PASCAL VOC 2012 dataset.",
"Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction."
]
} |
1904.09099 | 2936215687 | In this paper, a new deep learning architecture for stereo disparity estimation is proposed. The proposed atrous multiscale network (AMNet) adopts an efficient feature extractor with depthwise-separable convolutions and an extended cost volume that deploys novel stereo matching costs on the deep features. A stacked atrous multiscale network is proposed to aggregate rich multiscale contextual information from the cost volume which allows for estimating the disparity with high accuracy at multiple scales. AMNet can be further modified to be a foreground-background aware network, FBA-AMNet, which is capable of discriminating between the foreground and the background objects in the scene at multiple scales. An iterative multitask learning method is proposed to train FBA-AMNet end-to-end. The proposed disparity estimation networks, AMNet and FBA-AMNet, show accurate disparity estimates and advance the state of the art on the challenging Middlebury, KITTI 2012, KITTI 2015, and Sceneflow stereo disparity estimation benchmarks. | Disparity estimation based on a stereo image pair is a well-known problem in computer vision. CNN-based systems have recently become ubiquitous in solving this problem. In early work, @cite_33 proposed a Siamese network to match pairs of image patches for disparity estimation. The network consists of a set of shared convolutional layers, a feature concatenation layer, and a set of fully connected layers for second-stage processing and similarity estimation. @cite_0 developed a faster Siamese network in which the cost volume is formed by computing the inner product between the left and the right feature maps and the disparity estimation is formulated as multi-label classification. | {
"cite_N": [
"@cite_0",
"@cite_33"
],
"mid": [
"2440384215",
"2963502507"
],
"abstract": [
"In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches.",
"We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets."
]
} |
1904.09099 | 2936215687 | In this paper, a new deep learning architecture for stereo disparity estimation is proposed. The proposed atrous multiscale network (AMNet) adopts an efficient feature extractor with depthwise-separable convolutions and an extended cost volume that deploys novel stereo matching costs on the deep features. A stacked atrous multiscale network is proposed to aggregate rich multiscale contextual information from the cost volume which allows for estimating the disparity with high accuracy at multiple scales. AMNet can be further modified to be a foreground-background aware network, FBA-AMNet, which is capable of discriminating between the foreground and the background objects in the scene at multiple scales. An iterative multitask learning method is proposed to train FBA-AMNet end-to-end. The proposed disparity estimation networks, AMNet and FBA-AMNet, show accurate disparity estimates and advance the state of the art on the challenging Middlebury, KITTI 2012, KITTI 2015, and Sceneflow stereo disparity estimation benchmarks. | End-to-end neural networks have also been proposed for stereo disparity estimation. @cite_6 @cite_26 proposed DispNet, which consists of a set of convolution layers for feature extraction, a cost volume formed by feature concatenation or patch-wise correlation, an encoder-decoder structure for second-stage processing, and a classification layer for disparity estimation. Motivated by the success of deep neural networks, @cite_18 proposed GC-Net. GC-Net uses a deep residual network @cite_28 as the feature extractor, a cost volume formed by disparity-level feature concatenation to incorporate contextual information, a set of @math D convolutions and @math D deconvolutions for second-stage processing, and a soft argmin operation for disparity regression. To further explore the importance of contextual information, Chang and Chen @cite_11 proposed the pyramid stereo matching network (PSMNet).
Before constructing the cost volume, PSMNet learns contextual information from the extracted features through a spatial pyramid pooling module. For disparity computation, PSMNet processes the cost volume using a stacked hourglass CNN which consists of three hourglass CNNs. Each hourglass CNN has an encoder-decoder architecture, where the encoder and decoder parts of each hourglass network involve downsampling and upsampling of feature maps, respectively. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_28",
"@cite_6",
"@cite_11"
],
"mid": [
"2604231069",
"764651262",
"2949650786",
"2259424905",
"2963619659"
],
"abstract": [
"We propose a novel deep learning architecture for regressing disparity from a rectified pair of stereo images. We leverage knowledge of the problem’s geometry to form a cost volume using deep feature representations. We learn to incorporate contextual information using 3-D convolutions over this volume. Disparity values are regressed from the cost volume using a proposed differentiable soft argmin operation, which allows us to train our method end-to-end to sub-pixel accuracy without any additional post-processing or regularization. We evaluate our method on the Scene Flow and KITTI datasets and on KITTI we set a new stateof-the-art benchmark, while being significantly faster than competing approaches.",
"Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.",
"Recent work has shown that depth estimation from a stereo pair of images can be formulated as a supervised learning task to be resolved with convolutional neural networks (CNNs). However, current architectures rely on patch-based Siamese networks, lacking the means to exploit context information for finding correspondence in ill-posed regions. To tackle this problem, we propose PSMNet, a pyramid stereo matching network consisting of two main modules: spatial pyramid pooling and 3D CNN. The spatial pyramid pooling module takes advantage of the capacity of global context information by aggregating context in different scales and locations to form a cost volume. The 3D CNN learns to regularize cost volume using stacked multiple hourglass networks in conjunction with intermediate supervision. The proposed approach was evaluated on several benchmark datasets. Our method ranked first in the KITTI 2012 and 2015 leaderboards before March 18, 2018. The codes of PSMNet are available at: https: github.com JiaRenChang PSMNet."
]
} |
1904.09099 | 2936215687 | In this paper, a new deep learning architecture for stereo disparity estimation is proposed. The proposed atrous multiscale network (AMNet) adopts an efficient feature extractor with depthwise-separable convolutions and an extended cost volume that deploys novel stereo matching costs on the deep features. A stacked atrous multiscale network is proposed to aggregate rich multiscale contextual information from the cost volume which allows for estimating the disparity with high accuracy at multiple scales. AMNet can be further modified to be a foreground-background aware network, FBA-AMNet, which is capable of discriminating between the foreground and the background objects in the scene at multiple scales. An iterative multitask learning method is proposed to train FBA-AMNet end-to-end. The proposed disparity estimation networks, AMNet and FBA-AMNet, show accurate disparity estimates and advance the state of the art on the challenging Middlebury, KITTI 2012, KITTI 2015, and Sceneflow stereo disparity estimation benchmarks. | Fusion of semantic segmentation information with other extracted information can result in better scene understanding, and hence has been shown effective in improving the accuracy of challenging computer vision tasks, such as multiscale pedestrian detection @cite_15 . Consequently, researchers have tried to utilize information from low-level vision tasks such as semantic segmentation or edge detection to reinforce disparity estimation systems. @cite_2 introduced the SegStereo model, which suggests that appropriate incorporation of semantic cues can rectify disparity estimation. The SegStereo model embeds semantic features to enhance intermediate features and regularize the loss term. @cite_21 proposed EdgeStereo, where edge features are embedded and incorporated by concatenating them to features at different scales of the residual pyramid network, and trained using multiphase training. | {
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_2"
],
"mid": [
"2531915888",
"2963277791",
"2886944874"
],
"abstract": [
"We propose a deep neural network fusion architecture for fast and robust pedestrian detection. The proposed network fusion architecture allows for parallel processing of multiple networks for speed. A single shot deep convolutional network is trained as a object detector to generate all possible pedestrian candidates of different sizes and occlusions. This network outputs a large variety of pedestrian candidates to cover the majority of ground-truth pedestrians while also introducing a large number of false positives. Next, multiple deep neural networks are used in parallel for further refinement of these pedestrian candidates. We introduce a soft-rejection based network fusion method to fuse the soft metrics from all networks together to generate the final confidence scores. Our method performs better than existing state-of-the-arts, especially when detecting small-size and occluded pedestrians. Furthermore, we propose a method for integrating pixel-wise semantic segmentation network into the network fusion architecture as a reinforcement to the pedestrian detector. The approach outperforms state-of-the-art methods on most protocols on Caltech Pedestrian dataset, with significant boosts on several protocols. It is also faster than all other methods.",
"Recent convolutional neural networks, especially end-to-end disparity estimation models, achieve remarkable performance on stereo matching task. However, existed methods, even with the complicated cascade structure, may fail in the regions of non-textures, boundaries and tiny details. Focus on these problems, we propose a multi-task network EdgeStereo that is composed of a backbone disparity network and an edge sub-network. Given a binocular image pair, our model enables end-to-end prediction of both disparity map and edge map. Basically, we design a context pyramid to encode multi-scale context information in disparity branch, followed by a compact residual pyramid for cascaded refinement. To further preserve subtle details, our EdgeStereo model integrates edge cues by feature embedding and edge-aware smoothness loss regularization. Comparative results demonstrates that stereo matching and edge detection can help each other in the unified model. Furthermore, our method achieves state-of-art performance on both KITTI Stereo and Scene Flow benchmarks, which proves the effectiveness of our design.",
"Disparity estimation for binocular stereo images finds a wide range of applications. Traditional algorithms may fail on featureless regions, which could be handled by high-level clues such as semantic segments. In this paper, we suggest that appropriate incorporation of semantic cues can greatly rectify prediction in commonly-used disparity estimation frameworks. Our method conducts semantic feature embedding and regularizes semantic cues as the loss term to improve learning disparity. Our unified model SegStereo employs semantic features from segmentation and introduces semantic softmax loss, which helps improve the prediction accuracy of disparity maps. The semantic cues work well in both unsupervised and supervised manners. SegStereo achieves state-of-the-art results on KITTI Stereo benchmark and produces decent prediction on both CityScapes and FlyingThings3D datasets."
]
} |
1904.09099 | 2936215687 | In this paper, a new deep learning architecture for stereo disparity estimation is proposed. The proposed atrous multiscale network (AMNet) adopts an efficient feature extractor with depthwise-separable convolutions and an extended cost volume that deploys novel stereo matching costs on the deep features. A stacked atrous multiscale network is proposed to aggregate rich multiscale contextual information from the cost volume which allows for estimating the disparity with high accuracy at multiple scales. AMNet can be further modified to be a foreground-background aware network, FBA-AMNet, which is capable of discriminating between the foreground and the background objects in the scene at multiple scales. An iterative multitask learning method is proposed to train FBA-AMNet end-to-end. The proposed disparity estimation networks, AMNet and FBA-AMNet, show accurate disparity estimates and advance the state of the art on the challenging Middlebury, KITTI 2012, KITTI 2015, and Sceneflow stereo disparity estimation benchmarks. | Some works have been dedicated to designing disparity refinement networks that improve the depth or disparity estimated by previous state-of-the-art methods. @cite_13 designed a coarse-to-fine depth refinement module that improved the accuracy of the depth estimated by a single-image depth estimation network. Recently, a refinement module called the convolutional spatial propagation network (CSPN) was proposed and trained to refine the output of existing state-of-the-art networks for single-image depth estimation @cite_13 or stereo disparity estimation @cite_11 , which improved their accuracies @cite_40 . A recent work, DispSegNet @cite_10 , concatenated semantic segmentation embeddings with the initial disparity estimates before passing them to the second-stage refinement network, which improved disparity estimation in ill-posed regions. | {
"cite_N": [
"@cite_10",
"@cite_40",
"@cite_13",
"@cite_11"
],
"mid": [
"2964223642",
"2885093229",
"2171740948",
"2963619659"
],
"abstract": [
"Recent work has shown that convolutional neural networks (CNNs) can be applied successfully in disparity estimation, but these methods still suffer from errors in regions of low texture, occlusions, and reflections. Concurrently, deep learning for semantic segmentation has shown great progress in recent years. In this letter, we design a CNN architecture that combines these two tasks to improve the quality and accuracy of disparity estimation with the help of semantic segmentation. Specifically, we propose a network structure in which these two tasks are highly coupled. One key novelty of this approach is the two-stage refinement process. Initial disparity estimates are refined with an embedding learned from the semantic segmentation branch of the network. The proposed model is trained using an unsupervised approach, in which images from one half of the stereo pair are warped and compared against images from the other camera. Another key advantage of the proposed approach is that a single network is capable of outputting disparity estimates and semantic labels. These outputs are of great use in autonomous vehicle operation; with real-time constraints being key, such performance improvements increase the viability of driving applications. Experiments on KITTI and Cityscapes datasets show that our model can achieve state-of-the-art results and that leveraging embedding learned from semantic segmentation improves the performance of disparity estimation.",
"Depth estimation from a single image is a fundamental problem in computer vision. In this paper, we propose a simple yet effective convolutional spatial propagation network (CSPN) to learn the affinity matrix for depth prediction. Specifically, we adopt an efficient linear propagation model, where the propagation is performed with a manner of recurrent convolutional operation, and the affinity among neighboring pixels is learned through a deep convolutional neural network (CNN). We apply the designed CSPN to two depth estimation tasks given a single image: (1) Refine the depth output from existing state-of-the-art (SOTA) methods; (2) Convert sparse depth samples to a dense depth map by embedding the depth samples within the propagation procedure. The second task is inspired by the availability of LiDAR that provides sparse but accurate depth measurements. We experimented the proposed CSPN over the popular NYU v2 [1] and KITTI [2] datasets, where we show that our proposed approach improves not only quality (e.g., 30 more reduction in depth error), but also speed (e.g., 2 to 5 ( ) faster) of depth maps than previous SOTA methods. The codes of CSPN are available at: https: github.com XinJCheng CSPN.",
"Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.",
"Recent work has shown that depth estimation from a stereo pair of images can be formulated as a supervised learning task to be resolved with convolutional neural networks (CNNs). However, current architectures rely on patch-based Siamese networks, lacking the means to exploit context information for finding correspondence in ill-posed regions. To tackle this problem, we propose PSMNet, a pyramid stereo matching network consisting of two main modules: spatial pyramid pooling and 3D CNN. The spatial pyramid pooling module takes advantage of the capacity of global context information by aggregating context in different scales and locations to form a cost volume. The 3D CNN learns to regularize cost volume using stacked multiple hourglass networks in conjunction with intermediate supervision. The proposed approach was evaluated on several benchmark datasets. Our method ranked first in the KITTI 2012 and 2015 leaderboards before March 18, 2018. The codes of PSMNet are available at: https: github.com JiaRenChang PSMNet."
]
} |
1904.08950 | 2937574574 | Understanding the dynamics of international politics is important yet challenging for civilians. In this work, we explore unsupervised neural models to infer relations between nations from news articles. We extend existing models by incorporating shallow linguistics information and propose a new automatic evaluation metric that aligns relationship dynamics with manually annotated key events. As understanding international relations requires carefully analyzing complex relationships, we conduct in-person human evaluations with three groups of participants. Overall, humans prefer the outputs of our model and give insightful feedback that suggests future directions for human-centered models. Furthermore, our model reveals interesting regional differences in news coverage. For instance, with respect to US-China relations, Singaporean media focus more on "strengthening" and "purchasing", while US media focus more on "criticizing" and "denouncing". | Topic modeling has been an important method for extracting key concepts from a large collection of documents in an unsupervised fashion @cite_3 @cite_11 @cite_2 @cite_0 . Similar to our work, prior work incorporates linguistic insights with topic models to identify event classes and detect conflicts. Our work additionally models the context of relations through nouns and focuses on exploring the potential of neural models. | {
"cite_N": [
"@cite_0",
"@cite_2",
"@cite_3",
"@cite_11"
],
"mid": [
"1995368504",
"2073792299",
"1880262756",
"2250753706"
],
"abstract": [
"We present a Bayesian tensor factorization model for inferring latent group structures from dynamic pairwise interaction patterns. For decades, political scientists have collected and analyzed records of the form \"country i took action a toward country j at time t\" - known as dyadic events - in order to form and test theories of international relations. We represent these event data as a tensor of counts and develop Bayesian Poisson tensor factorization to infer a low-dimensional, interpretable representation of their salient patterns. We demonstrate that our model's predictive performance is better than that of standard non-negative tensor factorization methods. We also provide a comparison of our variational updates to their maximum likelihood counterparts. In doing so, we identify a better way to form point estimates of the latent factors than that typically used in Bayesian Poisson matrix factorization. Finally, we showcase our model as an exploratory analysis tool for political scientists. We show that the inferred latent factor matrices capture interpretable multilateral relations that both conform to and inform our knowledge of international a airs.",
"Network data is ubiquitous, encoding collections of relationships between entities such as people, places, genes, or corporations. While many resources for networks of interesting entities are emerging, most of these can only annotate connections in a limited fashion. Although relationships between entities are rich, it is impractical to manually devise complete characterizations of these relationships for every pair of entities on large, real-world corpora. In this paper we present a novel probabilistic topic model to analyze text corpora and infer descriptions of its entities and of relationships between those entities. We develop variational methods for performing approximate inference on our model and demonstrate that our model can be practically deployed on large corpora such as Wikipedia. We show qualitatively and quantitatively that our model can construct and annotate graphs of relationships and make useful predictions.",
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.",
"Continuous space word embeddings learned from large, unstructured corpora have been shown to be effective at capturing semantic regularities in language. In this paper we replace LDA’s parameterization of “topics” as categorical distributions over opaque word types with multivariate Gaussian distributions on the embedding space. This encourages the model to group words that are a priori known to be semantically related into topics. To perform inference, we introduce a fast collapsed Gibbs sampling algorithm based on Cholesky decompositions of covariance matrices of the posterior predictive distributions. We further derive a scalable algorithm that draws samples from stale posterior predictive distributions and corrects them with a Metropolis–Hastings step. Using vectors learned from a domain-general corpus (English Wikipedia), we report results on two document collections (20-newsgroups and NIPS). Qualitatively, Gaussian LDA infers different (but still very sensible) topics relative to standard LDA. Quantitatively, our technique outperforms existing models at dealing with OOV words in held-out documents."
]
} |
1904.08950 | 2937574574 | Understanding the dynamics of international politics is important yet challenging for civilians. In this work, we explore unsupervised neural models to infer relations between nations from news articles. We extend existing models by incorporating shallow linguistics information and propose a new automatic evaluation metric that aligns relationship dynamics with manually annotated key events. As understanding international relations requires carefully analyzing complex relationships, we conduct in-person human evaluations with three groups of participants. Overall, humans prefer the outputs of our model and give insightful feedback that suggests future directions for human-centered models. Furthermore, our model reveals interesting regional differences in news coverage. For instance, with respect to US-China relations, Singaporean media focus more on "strengthening" and "purchasing", while US media focus more on "criticizing" and "denouncing". | Last but not least, researchers have studied the dynamics of media coverage from a wide range of perspectives, ranging from framing @cite_25 @cite_6 , to relationships between ideas @cite_24 , to quotes of politicians @cite_4 @cite_15 @cite_28 . There has also been significant effort in building event databases in political science @cite_12 , assisting journalists with tools @cite_35 , and dating historical text @cite_23 . | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_28",
"@cite_6",
"@cite_24",
"@cite_23",
"@cite_15",
"@cite_25",
"@cite_12"
],
"mid": [
"2742912268",
"2166273866",
"2127492100",
"2889078912",
"2963225538",
"2250412782",
"2788268216",
"2251617662",
""
],
"abstract": [
"News archives are an invaluable primary source for placing current events in historical context. But current search engine tools do a poor job at uncovering broad themes and narratives across documents. We present Rookie: a practical software system which uses natural language processing (NLP) to help readers, reporters and editors uncover broad stories in news archives. Unlike prior work, Rookie's design emerged from 18 months of iterative development in consultation with editors and computational journalists. This process lead to a dramatically different approach from previous academic systems with similar goals. Our efforts offer a generalizable case study for others building real-world journalism software using NLP.",
"Given the extremely large pool of events and stories available, media outlets need to focus on a subset of issues and aspects to convey to their audience. Outlets are often accused of exhibiting a systematic bias in this selection process, with different outlets portraying different versions of reality. However, in the absence of objective measures and empirical evidence, the direction and extent of systematicity remains widely disputed. In this paper we propose a framework based on quoting patterns for quantifying and characterizing the degree to which media outlets exhibit systematic bias. We apply this framework to a massive dataset of news articles spanning the six years of Obama's presidency and all of his speeches, and reveal that a systematic pattern does indeed emerge from the outlet's quoting behavior. Moreover, we show that this pattern can be successfully exploited in an unsupervised prediction setting, to determine which new quotes an outlet will select to broadcast. By encoding bias patterns in a low-rank space we provide an analysis of the structure of political media coverage. This reveals a latent media bias space that aligns surprisingly well with political ideology and outlet type. A linguistic analysis exposes striking differences across these latent dimensions, showing how the different types of media outlets portray different realities even when reporting on the same events. For example, outlets mapped to the mainstream conservative side of the latent space focus on quotes that portray a presidential persona disproportionately characterized by negativity.",
"Tracking new topics, ideas, and \"memes\" across the Web has been an issue of considerable interest. Recent work has developed methods for tracking topic shifts over long time scales, as well as abrupt spikes in the appearance of particular named entities. However, these approaches are less well suited to the identification of content that spreads widely and then fades over time scales on the order of days - the time scale at which we perceive news and events. We develop a framework for tracking short, distinctive phrases that travel relatively intact through on-line text; developing scalable algorithms for clustering textual variants of such phrases, we identify a broad class of memes that exhibit wide spread and rich variation on a daily basis. As our principal domain of study, we show how such a meme-tracking approach can provide a coherent representation of the news cycle - the daily rhythms in the news media that have long been the subject of qualitative interpretation but have never been captured accurately enough to permit actual quantitative analysis. We tracked 1.6 million mainstream media sites and blogs over a period of three months with the total of 90 million articles and we find a set of novel and persistent temporal patterns in the news cycle. In particular, we observe a typical lag of 2.5 hours between the peaks of attention to a phrase in the news media and in blogs respectively, with divergent behavior around the overall peak and a \"heartbeat\"-like pattern in the handoff between news and blogs. We also develop and analyze a mathematical model for the kinds of temporal variation that the system exhibits.",
"",
"",
"This paper presents a novel approach to the task of temporal text classification combining text ranking and probability for the automatic dating of historical texts. The method was applied to three historical corpora: an English, a Portuguese and a Romanian corpus. It obtained performance ranging from 83% to 93% accuracy, using a fully automated approach with very basic features.",
"Political speeches and debates play an important role in shaping the images of politicians, and the public often relies on media outlets to select bits of political communication from a large pool of utterances. It is an important research question to understand what factors impact this selection process. To quantitatively explore the selection process, we build a three-decade dataset of presidential debate transcripts and post-debate coverage. We first examine the effect of wording and propose a binary classification framework that controls for both the speaker and the debate situations. We find that crowdworkers can only achieve an accuracy of 60 in this task, indicating that media choices are not entirely obvious. Our classifiers outperform crowdworkers on average, mainly in primary debates. We also compare important factors from crowdworkers’ free responses with those from data-driven methods and find interesting differences. Few crowdworkers mentioned that “context matters”, whereas our data show that well-quoted sentences are more distinct from the previous utterance by the same speaker than less-quoted sentences. Finally, we examine the aggregate effect of media preferences towards different wordings to understand the extent of fragmentation among media outlets. By analyzing a bipartite graph built from quoting behavior in our data, we observe a decreasing trend in bipartisan coverage.",
"We describe the first version of the Media Frames Corpus: several thousand news articles on three policy issues, annotated in terms of media framing. We motivate framing as a phenomenon of study for computational linguistics and describe our annotation process.",
""
]
} |
1904.09081 | 2940314284 | Meta learning is a promising solution to few-shot learning problems. However, existing meta learning methods are restricted to scenarios where training and application tasks share the same output structure. To obtain a meta model applicable to tasks with new structures, it is required to collect new training data and repeat the time-consuming meta training procedure. This makes them inefficient or even inapplicable in learning to solve heterogeneous few-shot learning tasks. We thus develop a novel and principled Hierarchical Meta Learning (HML) method. Different from existing methods that only focus on optimizing the adaptability of a meta model to similar tasks, HML also explicitly optimizes its generalizability across heterogeneous tasks. To this end, HML first factorizes a set of similar training tasks into heterogeneous ones and trains the meta model over them at two levels to maximize adaptation and generalization performance, respectively. The resultant model can then directly generalize to new tasks. Extensive experiments on few-shot classification and regression problems clearly demonstrate the superiority of HML over fine-tuning and state-of-the-art meta learning approaches in terms of generalization across heterogeneous tasks. | Recently, meta learning has drawn increasing attention, in which automatic learning schemes are devised to improve the learning efficiency of existing learning methods or to learn (induce) the algorithms directly @cite_15 @cite_7 . For example, @cite_5 aims to learn how to initialize a model such that it can adapt to different tasks quickly through simple gradient descent fine-tuning. @cite_19 @cite_21 learn to match the query sample with the support ones based on metric learning in the embedded space. Analogously, @cite_18 tries to learn to initialize distribution parameters. @cite_16 uses deep meta learning to learn the concept matching of categories in the concept space. 
In our paper, we aim to obtain a meta model capable of generalizing across heterogeneous tasks. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_21",
"@cite_19",
"@cite_5",
"@cite_15",
"@cite_16"
],
"mid": [
"2884901161",
"2091118421",
"2601450892",
"2963341924",
"2604763608",
"1519451279",
"2786928087"
],
"abstract": [
"Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes. We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space. The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks. Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.",
"Meta-learning has attracted considerable interest in the machine learning community in recent years. Yet, some disagreement remains on what does or does not constitute a meta-learning problem and in which contexts the term is used. This survey aims at giving an all-encompassing overview of the research directions pursued under the umbrella of meta-learning, reconciling different definitions given in the scientific literature, listing the choices involved when designing a meta-learning system and identifying some of the future research challenges in this domain.",
"A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics test. This approach relies on a complicated fine-tuning procedure and an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space in which few-shot classification can be performed by computing Euclidean distances to prototype representations of each class, rather than individual points. Our method is competitive with state-of-the-art one-shot classification approaches while being much simpler and more scalable with the size of the support set. We empirically demonstrate the performance of our approach on the Omniglot and mini-ImageNet datasets. We further demonstrate that a similar idea can be used for zero-shot learning, where each class is described by a set of attributes, and achieve state-of-the-art results on the Caltech UCSD bird dataset.",
"Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.",
"We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.",
"",
"Few-shot learning remains challenging for meta-learning that learns a learning algorithm (meta-learner) from many related tasks. In this work, we argue that this is due to the lack of a good representation for meta-learning, and propose deep meta-learning to integrate the representation power of deep learning into meta-learning. The framework is composed of three modules, a concept generator, a meta-learner, and a concept discriminator, which are learned jointly. The concept generator, e.g. a deep residual net, extracts a representation for each instance that captures its high-level concept, on which the meta-learner performs few-shot learning, and the concept discriminator recognizes the concepts. By learning to learn in the concept space rather than in the complicated instance space, deep meta-learning can substantially improve vanilla meta-learning, which is demonstrated on various few-shot image recognition problems. For example, on 5-way-1-shot image recognition on CIFAR-100 and CUB-200, it improves Matching Nets from 50.53% and 56.53% to 58.18% and 63.47%, improves MAML from 49.28% and 50.45% to 56.65% and 64.63%, and improves Meta-SGD from 53.83% and 53.34% to 61.62% and 66.95%, respectively."
]
} |
1904.09140 | 2936947847 | Recognizing human actions is a core challenge for autonomous systems as they directly share the same space with humans. Systems must be able to recognize and assess human actions in real-time. In order to train corresponding data-driven algorithms, a significant amount of annotated training data is required. We demonstrate a pipeline to detect humans, estimate their pose, track them over time and recognize their actions in real-time with standard monocular camera sensors. For action recognition, we encode the human pose into a new data format called Encoded Human Pose Image (EHPI) that can then be classified using standard methods from the computer vision community. With this simple procedure we achieve competitive state-of-the-art performance in pose-based action detection and can ensure real-time performance. In addition, we show a use case in the context of autonomous driving to demonstrate how such a system can be trained to recognize human actions using simulation data. | There are various directions of research in the area of human action recognition. Some approaches are based on Convolutional Neural Networks (CNNs). They usually follow a multistream approach @cite_14 @cite_0 @cite_9 , which uses an RGB image for visual feature extraction as well as a representation of the temporal flow, usually in the form of optical flow. There is also work which makes use of human poses, either using pose directly @cite_14 @cite_9 or applying an attention-like mechanism to obtain visual features from important areas around the human skeleton @cite_8 @cite_21 . Those approaches often rely on recurrent neural networks @cite_28 @cite_21 @cite_20 . Other approaches rely on handcrafted features extracted from human pose @cite_16 @cite_6 . Most similar to our work is the work of @cite_19 . They encoded time information in human body joint proposal heatmaps with color and used these stacked, colored joint heatmaps as input to a CNN to classify the action. 
To reach state-of-the-art performance they combined this approach with another multistream approach @cite_11 . Most of these approaches are relatively complex and therefore do not meet the real-time requirements of autonomous systems. Our approach is much simpler and still delivers competitive performance. | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"2914076145",
"2791307013",
"2898274310",
"",
"",
"",
"",
"2796633859",
"2758850938",
"",
"2963524571"
],
"abstract": [
"To take advantage of recent advances in human pose estimation from images, we develop a deep neural network model for action recognition from videos by computing temporal human pose features with a 3D CNN model. The proposed temporal pose features can provide more discriminative human action information than previous video features, such as appearance and short-term motion. In addition, we propose a novel fusion network that combines temporal pose, spatial and motion feature maps for the classification by bridging the gap between the dimension difference between 3D and 2D CNN feature maps. We show that the proposed action recognition system provides superior accuracy compared to the previous methods through experiments on Sub-JHMDB and PennAction datasets.",
"Abstract The most successful video-based human action recognition methods rely on feature representations extracted using Convolutional Neural Networks (CNNs). Inspired by the two-stream network (TS-Net), we propose a multi-stream Convolutional Neural Network (CNN) architecture to recognize human actions. We additionally consider human-related regions that contain the most informative features. First, by improving foreground detection, the region of interest corresponding to the appearance and the motion of an actor can be detected robustly under realistic circumstances. Based on the entire detected human body, we construct one appearance and one motion stream. In addition, we select a secondary region that contains the major moving part of an actor based on motion saliency. By combining the traditional streams with the novel human-related streams, we introduce a human-related multi-stream CNN (HR-MSCNN) architecture that encodes appearance, motion, and the captured tubes of the human-related regions. Comparative evaluation on the JHMDB, HMDB51, UCF Sports and UCF101 datasets demonstrates that the streams contain features that complement each other. The proposed multi-stream architecture achieves state-of-the-art results on these four datasets.",
"We present ActionXPose, a novel 2D pose-based algorithm for posture-level Human Action Recognition (HAR). The proposed approach exploits 2D human poses provided by the OpenPose detector from RGB videos. ActionXPose aims to process pose data to be provided to a Long Short-Term Memory Neural Network and to a 1D Convolutional Neural Network, which solve the classification problem. ActionXPose is one of the first algorithms that exploits 2D human poses for HAR. The algorithm has real-time performance, is robust to camera movement, changes in subject proximity, viewpoint changes and changes in subject appearance, and provides a high degree of generalization. In fact, extensive simulations show that ActionXPose can be successfully trained using different datasets at once. State-of-the-art performance on popular datasets for posture-related HAR problems (i3DPost, KTH) is provided and results are compared with those obtained by other methods, including the selected ActionXPose baseline. Moreover, we also propose two novel datasets, called MPOSE and ISLD, recorded in our Intelligent Sensing Lab, to show ActionXPose's generalization performance.",
"",
"",
"",
"",
"Most state-of-the-art methods for action recognition rely on a two-stream architecture that processes appearance and motion independently. In this paper, we claim that considering them jointly offers rich information for action recognition. We introduce a novel representation that gracefully encodes the movement of some semantic keypoints. We use the human joints as these keypoints and term our Pose moTion representation PoTion. Specifically, we first run a state-of-the-art human pose estimator [4] and extract heatmaps for the human joints in each frame. We obtain our PoTion representation by temporally aggregating these probability maps. This is achieved by 'colorizing' each of them depending on the relative time of the frames in the video clip and summing them. This fixed-size representation for an entire video clip is suitable to classify actions using a shallow convolutional neural network. Our experimental evaluation shows that PoTion outperforms other state-of-the-art pose representations [6, 48]. Furthermore, it is complementary to standard appearance and motion streams. When combining PoTion with the recent two-stream I3D approach [5], we obtain state-of-the-art performance on the JHMDB, HMDB and UCF101 datasets.",
"Avoiding vehicle-to-pedestrian crashes is a critical requirement for nowadays advanced driver assistant systems (ADAS) and future self-driving vehicles. Accordingly, detecting pedestrians from raw sensor data has a history of more than 15 years of research, with vision playing a central role. During the last years, deep learning has boosted the accuracy of image-based pedestrian detectors. However, detection is just the first step towards answering the core question, namely is the vehicle going to crash with a pedestrian provided preventive actions are not taken? Therefore, knowing as soon as possible if a detected pedestrian has the intention of crossing the road ahead of the vehicle is essential for performing safe and comfortable maneuvers that prevent a crash. However, compared to pedestrian detection, there is relatively little literature on detecting pedestrian intentions. This paper aims to contribute along this line by presenting a new vision-based approach which analyzes the pose of a pedestrian along several frames to determine if he or she is going to enter the road or not. We present experiments showing 750 ms of anticipation for pedestrians crossing the road, which at a typical urban driving speed of 50 km/h can provide 15 additional meters (compared to a pure pedestrian detector) for vehicle automatic reactions or to warn the driver. Moreover, in contrast with state-of-the-art methods, our approach is monocular, neither requiring stereo nor optical flow information.",
"",
"The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.2% on HMDB-51 and 97.9% on UCF-101."
]
} |
1904.09140 | 2936947847 | Recognizing human actions is a core challenge for autonomous systems as they directly share the same space with humans. Systems must be able to recognize and assess human actions in real-time. In order to train corresponding data-driven algorithms, a significant amount of annotated training data is required. We demonstrate a pipeline to detect humans, estimate their pose, track them over time and recognize their actions in real-time with standard monocular camera sensors. For action recognition, we encode the human pose into a new data format called Encoded Human Pose Image (EHPI) that can then be classified using standard methods from the computer vision community. With this simple procedure we achieve competitive state-of-the-art performance in pose-based action detection and can ensure real-time performance. In addition, we show a use case in the context of autonomous driving to demonstrate how such a system can be trained to recognize human actions using simulation data. | A drawback of simulated training data is the transfer of algorithms trained on simulated data to their application on real-world data. There is usually a domain shift, which should be minimized by domain adaptation algorithms. Such approaches include the combined use of a small amount of real data and a large amount of simulated data @cite_29 , methods which use decorrelated features @cite_26 , and more advanced domain confusion algorithms @cite_27 . Since this is still an open field of research, it is important to find alternatives that avoid the domain transfer problem. We view the abstraction of input data as a promising approach for applying algorithms trained on simulated data directly to real data. | {
"cite_N": [
"@cite_27",
"@cite_29",
"@cite_26"
],
"mid": [
"2593768305",
"2033547469",
"2083544878"
],
"abstract": [
"Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.",
"Pedestrian detection is of paramount interest for many applications. Most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a human-intensive and subjective task worth minimizing. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? Conducted experiments show that virtual-world based training can provide excellent testing accuracy in the real world, but it can also suffer the dataset shift problem as real-world based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain-adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy as when training with many human-provided pedestrian annotations and testing with real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector.",
"The most successful 2D object detection methods require a large number of images annotated with object bounding boxes to be collected for training. We present an alternative approach that trains on virtual data rendered from 3D models, avoiding the need for manual labeling. Growing demand for virtual reality applications is quickly bringing about an abundance of available 3D models for a large variety of object categories. While mainstream use of 3D models in vision has focused on predicting the 3D pose of objects, we investigate the use of such freely available 3D models for multicategory 2D object detection. To address the issue of dataset bias that arises from training on virtual data and testing on real images, we propose a simple and fast adaptation approach based on decorrelated features. We also compare two kinds of virtual data, one rendered with real-image textures and one without. Evaluation on a benchmark domain adaptation dataset demonstrates that our method performs comparably to existing methods trained on large-scale real image domains."
]
} |
1904.08983 | 2939354923 | We present a fully convolutional wav-to-wav network for converting between speakers' voices, without relying on text. Our network is based on an encoder-decoder architecture, where the encoder is pre-trained for the task of Automatic Speech Recognition (ASR), and a multi-speaker waveform decoder is trained to reconstruct the original signal in an autoregressive manner. We train the network on narrated audiobooks, and demonstrate the ability to perform multi-voice TTS in those voices, by converting the voice of a TTS robot. We observe no degradation in the quality of the generated voices, in comparison to the reference TTS voice. The modularity of our approach, which separates the target voice generation from the TTS module, enables client-side personalized TTS in a privacy-aware manner. | In this work, we rely on a trained ASR network in order to obtain features. The ASR features are supposedly mostly orthogonal to the identity. An alternative approach would be to use the speaker identification features, in order to represent the source and target voices and perform the transformation between the two. I-vectors are a GMM-based representation, which is often used in speaker identification or verification. @cite_10 have aligned the source and target GMMs, by comparing the i-vectors of the two speakers, without using transcription or parallel data. Unlike our method, the reference speaker is known at training time, and their method employs an MFCC vocoder, which limits the output's quality, in comparison to WaveNets. Speaker verification features were also used to embed speakers by @cite_7 . In this case, the speaker embedding is based on a neural classifier, and is used within a tacotron2 @cite_19 TTS framework, which employs a WaveNet decoder. | {
"cite_N": [
"@cite_19",
"@cite_10",
"@cite_7"
],
"mid": [
"2964243274",
"2651834199",
"2963799213"
],
"abstract": [
"This paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. The system is composed of a recurrent sequence-to-sequence feature prediction network that maps character embeddings to mel-scale spectrograms, followed by a modified WaveNet model acting as a vocoder to synthesize time-domain waveforms from those spectrograms. Our model achieves a mean opinion score (MOS) of 4.53 comparable to a MOS of 4.58 for professionally recorded speech. To validate our design choices, we present ablation studies of key components of our system and evaluate the impact of using mel spectrograms as the conditioning input to WaveNet instead of linguistic, duration, and @math features. We further show that using this compact acoustic intermediate representation allows for a significant reduction in the size of the WaveNet architecture.",
"Text-independent speaker verification (recognizing speakers regardless of content) and non-parallel voice conversion (transforming voice identities without requiring content-matched training utterances) are related problems. We adopt the i-vector method for voice conversion. An i-vector is a fixed-dimensional representation of a speech utterance that enables treating voice conversion in the utterance domain, as opposed to the frame domain. The high dimensionality (800) and the small number of training utterances (24) necessitate using prior information about the speakers. We adopt probabilistic linear discriminant analysis (PLDA) for voice conversion. The proposed approach requires neither parallel utterances, transcriptions nor time alignment procedures at any stage.",
"Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of 'posterior collapse', where the latents are ignored when they are paired with a powerful autoregressive decoder, typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations."
]
} |
1904.08962 | 2937610357 | This paper studies a class of constrained restless multi-armed bandits. The constraints are in the form of time varying availability of arms. This variation can be either stochastic or semi-deterministic. A fixed number of arms can be chosen to be played in each decision interval. The play of each arm yields a state dependent reward. The current states of arms are partially observable through binary feedback signals from arms that are played. The current availability of arms is fully observable. The objective is to maximize long term cumulative reward. The uncertainty about future availability of arms along with partial state information makes this objective challenging. This optimization problem is analyzed using Whittle's index policy. To this end, a constrained restless single-armed bandit is studied. It is shown to admit a threshold-type optimal policy, and is also indexable. An algorithm to compute Whittle's index is presented. Further, upper bounds on the value function are derived in order to estimate the degree of sub-optimality of various solutions. The simulation study compares the performance of Whittle's index, modified Whittle's index and myopic policies. | In classical restless bandit literature, current states of all the arms are observable in every time slot @cite_15 @cite_26 @cite_20 . Later, this assumption was relaxed and restless bandit models with partially observable states were studied, where states are observable only for those arms that are played @cite_11 @cite_27 . Recent work on restless bandits further generalized this model to the case where states of all arms are partially observable. This is referred to as the @cite_19 @cite_4 . In @cite_17 , further generalization is considered where multiple state transitions are allowed in a single decision interval. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_17",
"@cite_19",
"@cite_27",
"@cite_15",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2044502527",
"2911765153",
"2626694546",
"2952389041",
"2153107031",
"",
"2141515329"
],
"abstract": [
"",
"We investigate the optimal allocation of effort to a collection of n projects. The projects are 'restless' in that the state of a project evolves in time, whether or not it is allocated effort. The evolution of the state of each project follows a Markov rule, but transitions and rewards depend on whether or not the project receives effort. The objective is to maximize the expected time-average reward under a constraint that exactly m of the n projects receive effort at any one time. We show that as m and n tend to ∞ with m/n fixed, the per-project reward of the optimal policy is asymptotically the same as that achieved by a policy which operates under the relaxed constraint that an average of m projects be active. The relaxed constraint was considered by Whittle (1988) who described how to use a Lagrangian multiplier approach to assign indices to the projects. He conjectured that the policy of allocating effort to the m projects of greatest index is asymptotically optimal as m and n tend to ∞. We show that the conjecture is true if the differential equation describing the fluid approximation to the index policy has a globally stable equilibrium point. This need not be the case, and we present an example for which the index policy is not asymptotically optimal. However, numerical work suggests that such counterexamples are extremely rare and that the size of the suboptimality",
"This paper studies a generalized class of restless multi-armed bandits with hidden states and allows cumulative feedback, as opposed to the conventional instantaneous feedback. We call them lazy restless bandits (LRBs) as the events of decision making are sparser than the events of state transition. Hence, feedback after each decision event is the cumulative effect of the following state transition events. The states of arms are hidden from the decision maker and rewards for actions are state dependent. The decision maker needs to choose one arm in each decision interval, such that the long-term cumulative reward is maximized. As the states are hidden, the decision maker maintains and updates its belief about them. It is shown that LRBs admit an optimal policy which has threshold structure in belief space. The Whittle-index policy for solving the LRB problem is analyzed; indexability of LRBs is shown. Further, the closed-form index expressions are provided for two sets of special cases; for more general cases, an algorithm for index computation is provided. An extensive simulation study is presented; Whittle-index, modified Whittle-index, and myopic policies are compared. The Lagrangian relaxation of the problem provides an upper bound on the optimal value function; it is used to assess the degree of sub-optimality of various policies.",
"We consider the problem of dynamically scheduling @math out of @math binary Markov chains when only noisy observations of state are available, with ergodic (equivalently, long run average) reward. By passing on to the equivalent problem of controlling the conditional distribution of state given observations and controls, it is cast as a restless bandit problem and its Whittle indexability is established.",
"We consider the scheduling problem in downlink wireless networks with heterogeneous, Markov-modulated, ON/OFF channels. It is well-known that the performance of scheduling over fading channels heavily depends on the accuracy of the available Channel State Information (CSI), which is costly to acquire. Thus, we consider the CSI acquisition via a practical ARQ-based feedback mechanism whereby channel states are revealed at the end of only scheduled users' transmissions. In the assumed presence of temporally-correlated channel evolutions, the desired scheduler must optimally balance the exploitation-exploration trade-off, whereby it schedules transmissions both to exploit those channels with up-to-date CSI and to explore the current state of those with outdated CSI. In earlier works, Whittle's Index Policy had been suggested as a low-complexity and high-performance solution to this problem. However, analyzing its performance in the typical scenario of statistically heterogeneous channel state processes has remained elusive and challenging, mainly because of the highly-coupled and complex dynamics it possesses. In this work, we overcome these difficulties to rigorously establish the asymptotic optimality properties of Whittle's Index Policy in the limiting regime of many users. More specifically: (1) we prove the local optimality of Whittle's Index Policy, provided that the initial state of the system is within a certain neighborhood of a carefully selected state; (2) we then establish the global optimality of Whittle's Index Policy under a recurrence assumption that is verified numerically for the problem at hand. These results establish, for the first time to the best of our knowledge, that Whittle's Index Policy possesses analytically provable optimality characteristics for scheduling over heterogeneous and temporally-correlated channels.",
"We show that if performance measures in a stochastic scheduling problem satisfy a set of so-called partial conservation laws (PCL), which extend previously studied generalized conservation laws (GCL), then the problem is solved optimally by a priority-index policy for an appropriate range of linear performance objectives, where the optimal indices are computed by a one-pass adaptive-greedy algorithm, based on Klimov's. We further apply this framework to investigate the indexability property of restless bandits introduced by Whittle, obtaining the following results: (1) we identify a class of restless bandits (PCL-indexable) which are indexable; membership in this class is tested through a single run of the adaptive-greedy algorithm, which also computes the Whittle indices when the test is positive; this provides a tractable sufficient condition for indexability; (2) we further identify the class of GCL-indexable bandits, which includes classical bandits, having the property that they are indexable under any linear reward objective. The analysis is based on the so-called achievable region method, as the results follow from new linear programming formulations for the problems investigated.",
"",
"In this paper, we consider a class of restless multiarmed bandit processes (RMABs) that arises in dynamic multichannel access, user/server scheduling, and optimal activation in multiagent systems. For this class of RMABs, we establish the indexability and obtain Whittle index in closed form for both discounted and average reward criteria. These results lead to a direct implementation of Whittle index policy with remarkably low complexity. When arms are stochastically identical, we show that Whittle index policy is optimal under certain conditions. Furthermore, it has a semiuniversal structure that obviates the need to know the Markov transition probabilities. The optimality and the semiuniversal structure result from the equivalence between Whittle index policy and the myopic policy established in this work. For nonidentical arms, we develop efficient algorithms for computing a performance upper bound given by Lagrangian relaxation. The tightness of the upper bound and the near-optimal performance of Whittle index policy are illustrated with simulation examples."
]
} |
1904.08962 | 2937610357 | This paper studies a class of constrained restless multi-armed bandits. The constraints are in the form of time varying availability of arms. This variation can be either stochastic or semi-deterministic. A fixed number of arms can be chosen to be played in each decision interval. The play of each arm yields a state dependent reward. The current states of arms are partially observable through binary feedback signals from arms that are played. The current availability of arms is fully observable. The objective is to maximize long term cumulative reward. The uncertainty about future availability of arms along with partial state information makes this objective challenging. This optimization problem is analyzed using Whittle's index policy. To this end, a constrained restless single-armed bandit is studied. It is shown to admit a threshold-type optimal policy, and is also indexable. An algorithm to compute Whittle's index is presented. Further, upper bounds on the value function are derived in order to estimate the degree of sub-optimality of various solutions. The simulation study compares the performance of Whittle's index, modified Whittle's index and myopic policies. | Earlier, a variant of the restless multi-armed bandit with availability constraints was proposed in @cite_7 . It was applied to the machine repair problem where machine availability is time varying. This model was further generalized in @cite_25 by considering partially observable states. In @cite_25 , the authors consider a penalty for playing an unavailable arm. That is, arms can be played both when they are available and unavailable. Whittle's index policy and myopic policy are analyzed. There are several subtle, but important differences between the current model and the model in @cite_25 . The CRMABs considered in this paper do not allow the play of unavailable arms. Our proposed model differentiates between the actions "don't play" and "can't play".
That is, the belief update rules are different for the case where an arm is available and is not played and the case where the arm is unavailable and cannot be played. In this work, we provide an upper bound on the optimal value function which can be used as a reference to measure the sub-optimality gap of Whittle's index policy. | {
"cite_N": [
"@cite_25",
"@cite_7"
],
"mid": [
"2897969108",
"2170585566"
],
"abstract": [
"The problem of rested and restless multi-armed bandits with constrained availability (RMAB-CA) of arms is considered. The states of arms evolve in Markovian manner and the exact states are hidden from the decision maker. First, some structural results on value functions are claimed. Following these results, the optimal policy turns out to be a threshold policy. Furthermore, indexability is established for both rested and restless RMAB-CAs. An index formula is derived for the rested model, while an algorithm is provided for restless case.",
"A multiarmed bandit problem is studied when the arms are not always available. The arms are first assumed to be intermittently available with some state action-dependent probabilities. It is proven that no index policy can attain the maximum expected total discounted reward in every instance of that problem. The Whittle index policy is derived, and its properties are studied. Then it is assumed that the arms may break down, but repair is an option at some cost, and the new Whittle index policy is derived. Both problems are indexable. The proposed index policies cannot be dominated by any other index policy over all multiarmed bandit problems considered here. Whittle indices are evaluated for Bernoulli arms with unknown success probabilities."
]
} |
1904.09059 | 2939440064 | Haze degrades content and obscures information of images, which can negatively impact vision-based decision-making in real-time systems. In this paper, we propose an efficient fully convolutional neural network (CNN) image dehazing method designed to run on edge graphical processing units (GPUs). We utilize three variants of our architecture to explore the dependency of dehazed image quality on parameter count and model design. The first two variants presented, a small and big version, make use of a single efficient encoder-decoder convolutional feature extractor. The final variant utilizes a pair of encoder-decoders for atmospheric light and transmission map estimation. Each variant ends with an image refinement pyramid pooling network to form the final dehazed image. For the big variant of the single-encoder network, we demonstrate state-of-the-art performance on the NYU Depth dataset. For the small variant, we maintain competitive performance on the super-resolution O/I-HAZE datasets without the need for image cropping. Finally, we examine some challenges presented by the Dense-Haze dataset when leveraging CNN architectures for dehazing of dense haze imagery and examine the impact of loss function selection on image quality. Benchmarks are included to show the feasibility of introducing this approach into real-time systems. | Although there has been, and continues to be, tremendous success in single image dehazing without the use of neural networks, many recent state-of-the-art techniques utilize deep learning frameworks @cite_15 @cite_20 @cite_1 @cite_16 . These approaches generally incorporate neural network building blocks originally proposed for image segmentation, style transfer, object detection, and other computer vision tasks. For example, U-Nets @cite_24 , feature pyramid networks @cite_12 , and residual networks @cite_5 were all utilized as part of the 2018 NTIRE Image Dehazing Challenge @cite_11 . | {
"cite_N": [
"@cite_1",
"@cite_24",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_20",
"@cite_11"
],
"mid": [
"2779176852",
"2952232639",
"2194775991",
"2256362396",
"2791550762",
"2949533892",
"",
"2899026108"
],
"abstract": [
"This paper proposes an image dehazing model built with a convolutional neural network (CNN), called All-in-One Dehazing Network (AOD-Net). It is designed based on a re-formulated atmospheric scattering model. Instead of estimating the transmission matrix and the atmospheric light separately as most previous models did, AOD-Net directly generates the clean image through a light-weight CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other deep models, e.g., Faster R-CNN, for improving high-level tasks on hazy images. Experimental results on both synthesized and natural hazy image datasets demonstrate our superior performance than the state-of-the-art in terms of PSNR, SSIM and the subjective visual quality. Furthermore, when concatenating AOD-Net with Faster R-CNN, we witness a large improvement of the object detection performance on hazy images.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Single image haze removal is a challenging ill-posed problem. Existing methods use various constraints priors to get plausible dehazing solutions. The key to achieve haze removal is to estimate a medium transmission map for an input hazy image. In this paper, we propose a trainable end-to-end system called DehazeNet, for medium transmission estimation. DehazeNet takes a hazy image as input, and outputs its medium transmission map that is subsequently used to recover a haze-free image via atmospheric scattering model. DehazeNet adopts convolutional neural network-based deep architecture, whose layers are specially designed to embody the established assumptions priors in image dehazing. Specifically, the layers of Maxout units are used for feature extraction, which can generate almost all haze-relevant features. We also propose a novel nonlinear activation function in DehazeNet, called bilateral rectified linear unit, which is able to improve the quality of recovered haze-free image. We establish connections between the components of the proposed DehazeNet and those used in existing methods. Experiments on benchmark images show that DehazeNet achieves superior performance over existing methods, yet keeps efficient and easy to use.",
"We propose a new end-to-end single image dehazing method, called Densely Connected Pyramid Dehazing Network (DCPDN), which can jointly learn the transmission map, atmospheric light and dehazing all together. The end-to-end learning is achieved by directly embedding the atmospheric scattering model into the network, thereby ensuring that the proposed method strictly follows the physics-driven scattering model for dehazing. Inspired by the dense network that can maximize the information flow along features from different levels, we propose a new edge-preserving densely connected encoder-decoder structure with multi-level pyramid pooling module for estimating the transmission map. This network is optimized using a newly introduced edge-preserving loss function. To further incorporate the mutual structural information between the estimated transmission map and the dehazed result, we propose a joint-discriminator based on generative adversarial network framework to decide whether the corresponding dehazed image and the estimated transmission map are real or fake. An ablation study is conducted to demonstrate the effectiveness of each module evaluated at both estimated transmission map and dehazed result. Extensive experiments demonstrate that the proposed method achieves significant improvements over the state-of-the-art methods. Code will be made available at: this https URL",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"",
"This paper reviews the first challenge on image dehazing (restoration of rich details in hazy image) with focus on proposed solutions and results. The challenge had 2 tracks. Track 1 employed the indoor images (using I-HAZE dataset), while Track 2 outdoor images (using O-HAZE dataset). The hazy images have been captured in presence of real haze, generated by professional haze machines. I-HAZE dataset contains 35 scenes that correspond to indoor domestic environments, with objects with different colors and specularities. O-HAZE contains 45 different outdoor scenes depicting the same visual content recorded in haze-free and hazy conditions, under the same illumination parameters. The dehazing process was learnable through provided pairs of haze-free and hazy train images. Each track had 120 registered participants and 21 teams competed in the final testing phase. They gauge the state-of-the-art in image dehazing."
]
} |
1904.09059 | 2939440064 | Haze degrades content and obscures information of images, which can negatively impact vision-based decision-making in real-time systems. In this paper, we propose an efficient fully convolutional neural network (CNN) image dehazing method designed to run on edge graphical processing units (GPUs). We utilize three variants of our architecture to explore the dependency of dehazed image quality on parameter count and model design. The first two variants presented, a small and big version, make use of a single efficient encoder-decoder convolutional feature extractor. The final variant utilizes a pair of encoder-decoders for atmospheric light and transmission map estimation. Each variant ends with an image refinement pyramid pooling network to form the final dehazed image. For the big variant of the single-encoder network, we demonstrate state-of-the-art performance on the NYU Depth dataset. For the small variant, we maintain competitive performance on the super-resolution O/I-HAZE datasets without the need for image cropping. Finally, we examine some challenges presented by the Dense-Haze dataset when leveraging CNN architectures for dehazing of dense haze imagery and examine the impact of loss function selection on image quality. Benchmarks are included to show the feasibility of introducing this approach into real-time systems. | Several successful techniques leverage hand-engineered features to estimate the transmission map for image dehazing @cite_19 @cite_33 @cite_28 . In contrast to these approaches, Cai et al. @cite_15 proposed an end-to-end network that learns features useful for estimating a transmission map. However, this method and similar transmission estimation methods @cite_21 do not address estimating the atmospheric light within a scene. Zhang and Patel @cite_16 addressed this issue by estimating both the atmospheric light and transmission map within a generative adversarial learning framework.
In this approach, the unknown variables from the atmospheric scattering model are modeled using independent neural network architectures; a U-Net is used to estimate the atmospheric light and a densely connected network is used to estimate the transmission map. Additionally, Li et al. @cite_1 showed that the atmospheric scattering model, described in Equation , could be reformulated via a linear transform to a single variable and bias. This formulation fits naturally within a deep learning framework and hints at the effectiveness of purely convolutional approaches. | {
"cite_N": [
"@cite_33",
"@cite_28",
"@cite_21",
"@cite_1",
"@cite_19",
"@cite_15",
"@cite_16"
],
"mid": [
"2128254161",
"2114867966",
"2519481857",
"2779176852",
"2028763589",
"2256362396",
"2791550762"
],
"abstract": [
"In this paper, we propose a simple but effective image prior-dark channel prior to remove haze from a single input image. The dark channel prior is a kind of statistics of outdoor haze-free images. It is based on a key observation-most local patches in outdoor haze-free images contain some pixels whose intensity is very low in at least one color channel. Using this prior with the haze imaging model, we can directly estimate the thickness of the haze and recover a high-quality haze-free image. Results on a variety of hazy images demonstrate the power of the proposed prior. Moreover, a high-quality depth map can also be obtained as a byproduct of haze removal.",
"Bad weather, such as fog and haze, can significantly degrade the visibility of a scene. Optically, this is due to the substantial presence of particles in the atmosphere that absorb and scatter light. In computer vision, the absorption and scattering processes are commonly modeled by a linear combination of the direct attenuation and the airlight. Based on this model, a few methods have been proposed, and most of them require multiple input images of a scene, which have either different degrees of polarization or different atmospheric conditions. This requirement is the main drawback of these methods, since in many situations, it is difficult to be fulfilled. To resolve the problem, we introduce an automated method that only requires a single input image. This method is based on two basic observations: first, images with enhanced visibility (or clear-day images) have more contrast than images plagued by bad weather; second, airlight whose variation mainly depends on the distance of objects to the viewer, tends to be smooth. Relying on these two observations, we develop a cost function in the framework of Markov random fields, which can be efficiently optimized by various techniques, such as graph-cuts or belief propagation. The method does not require the geometrical information of the input image, and is applicable for both color and gray images.",
"The performance of existing image dehazing methods is limited by hand-designed features, such as the dark channel, color disparity and maximum contrast, with complex fusion schemes. In this paper, we propose a multi-scale deep neural network for single-image dehazing by learning the mapping between hazy images and their corresponding transmission maps. The proposed algorithm consists of a coarse-scale net which predicts a holistic transmission map based on the entire image, and a fine-scale net which refines results locally. To train the multi-scale deep network, we synthesize a dataset comprised of hazy images and corresponding transmission maps based on the NYU Depth dataset. Extensive experiments demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of quality and speed.",
"This paper proposes an image dehazing model built with a convolutional neural network (CNN), called All-in-One Dehazing Network (AOD-Net). It is designed based on a re-formulated atmospheric scattering model. Instead of estimating the transmission matrix and the atmospheric light separately as most previous models did, AOD-Net directly generates the clean image through a light-weight CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other deep models, e.g., Faster R-CNN, for improving high-level tasks on hazy images. Experimental results on both synthesized and natural hazy image datasets demonstrate our superior performance than the state-of-the-art in terms of PSNR, SSIM and the subjective visual quality. Furthermore, when concatenating AOD-Net with Faster R-CNN, we witness a large improvement of the object detection performance on hazy images.",
"Photographs of hazy scenes typically suffer having low contrast and offer a limited visibility of the scene. This article describes a new method for single-image dehazing that relies on a generic regularity in natural images where pixels of small image patches typically exhibit a 1D distribution in RGB color space, known as color-lines. We derive a local formation model that explains the color-lines in the context of hazy scenes and use it for recovering the scene transmission based on the lines' offset from the origin. The lack of a dominant color-line inside a patch or its lack of consistency with the formation model allows us to identify and avoid false predictions. Thus, unlike existing approaches that follow their assumptions across the entire image, our algorithm validates its hypotheses and obtains more reliable estimates where possible. In addition, we describe a Markov random field model dedicated to producing complete and regularized transmission maps given noisy and scattered estimates. Unlike traditional field models that consist of local coupling, the new model is augmented with long-range connections between pixels of similar attributes. These connections allow our algorithm to properly resolve the transmission in isolated regions where nearby pixels do not offer relevant information. An extensive evaluation of our method over different types of images and its comparison to state-of-the-art methods over established benchmark images show a consistent improvement in the accuracy of the estimated scene transmission and recovered haze-free radiances.",
"Single image haze removal is a challenging ill-posed problem. Existing methods use various constraints priors to get plausible dehazing solutions. The key to achieve haze removal is to estimate a medium transmission map for an input hazy image. In this paper, we propose a trainable end-to-end system called DehazeNet, for medium transmission estimation. DehazeNet takes a hazy image as input, and outputs its medium transmission map that is subsequently used to recover a haze-free image via atmospheric scattering model. DehazeNet adopts convolutional neural network-based deep architecture, whose layers are specially designed to embody the established assumptions priors in image dehazing. Specifically, the layers of Maxout units are used for feature extraction, which can generate almost all haze-relevant features. We also propose a novel nonlinear activation function in DehazeNet, called bilateral rectified linear unit, which is able to improve the quality of recovered haze-free image. We establish connections between the components of the proposed DehazeNet and those used in existing methods. Experiments on benchmark images show that DehazeNet achieves superior performance over existing methods, yet keeps efficient and easy to use.",
"We propose a new end-to-end single image dehazing method, called Densely Connected Pyramid Dehazing Network (DCPDN), which can jointly learn the transmission map, atmospheric light and dehazing all together. The end-to-end learning is achieved by directly embedding the atmospheric scattering model into the network, thereby ensuring that the proposed method strictly follows the physics-driven scattering model for dehazing. Inspired by the dense network that can maximize the information flow along features from different levels, we propose a new edge-preserving densely connected encoder-decoder structure with multi-level pyramid pooling module for estimating the transmission map. This network is optimized using a newly introduced edge-preserving loss function. To further incorporate the mutual structural information between the estimated transmission map and the dehazed result, we propose a joint-discriminator based on generative adversarial network framework to decide whether the corresponding dehazed image and the estimated transmission map are real or fake. An ablation study is conducted to demonstrate the effectiveness of each module evaluated at both estimated transmission map and dehazed result. Extensive experiments demonstrate that the proposed method achieves significant improvements over the state-of-the-art methods. Code will be made available at: this https URL"
]
} |
1904.09059 | 2939440064 | Haze degrades content and obscures information of images, which can negatively impact vision-based decision-making in real-time systems. In this paper, we propose an efficient fully convolutional neural network (CNN) image dehazing method designed to run on edge graphical processing units (GPUs). We utilize three variants of our architecture to explore the dependency of dehazed image quality on parameter count and model design. The first two variants presented, a small and big version, make use of a single efficient encoder-decoder convolutional feature extractor. The final variant utilizes a pair of encoder-decoders for atmospheric light and transmission map estimation. Each variant ends with an image refinement pyramid pooling network to form the final dehazed image. For the big variant of the single-encoder network, we demonstrate state-of-the-art performance on the NYU Depth dataset. For the small variant, we maintain competitive performance on the super-resolution O/I-HAZE datasets without the need for image cropping. Finally, we examine some challenges presented by the Dense-Haze dataset when leveraging CNN architectures for dehazing of dense haze imagery and examine the impact of loss function selection on image quality. Benchmarks are included to show the feasibility of introducing this approach into real-time systems. | Generative adversarial networks (GANs) for image style transfer have become increasingly popular in recent years with algorithms such as Pix2Pix @cite_10 and CycleGAN @cite_8 . Haze removal can also be thought of from a style transfer perspective: transferring images from the hazy domain to the haze-free domain. This approach was attempted by Engin et al. @cite_20 , in which cycle consistency and perceptual losses were combined in a CycleGAN framework. Additionally, approaches from semantic image segmentation, such as feature pyramid networks, have proven to be effective in image dehazing applications. 
Image segmentation networks often utilize encoder-decoder pairs to learn embedded representations of inputs that take into account multi-scale features. Chaurasia and Culurciello @cite_29 proposed an efficient semantic segmentation architecture based on a fully convolutional encoder-decoder framework. Their encoder uses a ResNet18 model @cite_5 for feature encoding and avoids a loss of spatial information by reintroducing residuals from each encoder to the output of its corresponding decoder. | {
"cite_N": [
"@cite_8",
"@cite_29",
"@cite_5",
"@cite_10",
"@cite_20"
],
"mid": [
"2962793481",
"2735039185",
"2194775991",
"2552465644",
""
],
"abstract": [
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"Pixel-wise semantic segmentation for visual scene understanding not only needs to be accurate, but also efficient in order to find any use in real-time application. Existing algorithms even though are accurate but they do not focus on utilizing the parameters of neural network efficiently. As a result they are huge in terms of parameters and number of operations; hence slow too. In this paper, we propose a novel deep neural network architecture which allows it to learn without any significant increase in number of parameters. Our network uses only 11.5 million parameters and 21.2 GFLOPs for processing an image of resolution 3x640x360. It gives state-of-the-art performance on CamVid and comparable results on Cityscapes dataset. We also compare our networks processing time on NVIDIA GPU and embedded system device with existing state-of-the-art architectures for different image resolutions.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.",
""
]
} |
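The cycle-consistency idea named in the related-work text of the row above (CycleGAN-based hazy-to-haze-free translation) can be sketched minimally as follows. This is an illustrative sketch only, not the cited paper's implementation: the per-pixel "generators" `G` and `F` and the flattened 1-D pixel lists are made-up assumptions standing in for trained CNNs.

```python
# Sketch of the CycleGAN cycle-consistency loss: G maps hazy -> haze-free,
# F maps haze-free -> hazy, and the loss penalizes reconstruction error in
# both directions so the two mappings stay approximately mutually inverse.

def l1_loss(a, b):
    """Mean absolute error between two equal-length pixel vectors."""
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

def cycle_consistency_loss(G, F, hazy, clean):
    """L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1."""
    return l1_loss(F(G(hazy)), hazy) + l1_loss(G(F(clean)), clean)

# Toy stand-in generators: doubling/halving brightness is only a proxy for
# dehazing/re-hazing; a real system would use trained networks here.
G = lambda img: [p * 2 for p in img]
F = lambda img: [p / 2 for p in img]

loss = cycle_consistency_loss(G, F, [0.2, 0.4], [0.8, 0.6])
print(loss)  # 0.0 (the toy mappings are exact inverses)
```

In the actual CycleGAN framework this term is added to the adversarial losses of the two discriminators; the sketch isolates only the cycle term.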
1904.09059 | 2939440064 | Haze degrades content and obscures information of images, which can negatively impact vision-based decision-making in real-time systems. In this paper, we propose an efficient fully convolutional neural network (CNN) image dehazing method designed to run on edge graphical processing units (GPUs). We utilize three variants of our architecture to explore the dependency of dehazed image quality on parameter count and model design. The first two variants presented, a small and big version, make use of a single efficient encoder-decoder convolutional feature extractor. The final variant utilizes a pair of encoder-decoders for atmospheric light and transmission map estimation. Each variant ends with an image refinement pyramid pooling network to form the final dehazed image. For the big variant of the single-encoder network, we demonstrate state-of-the-art performance on the NYU Depth dataset. For the small variant, we maintain competitive performance on the super-resolution O/I-HAZE datasets without the need for image cropping. Finally, we examine some challenges presented by the Dense-Haze dataset when leveraging CNN architectures for dehazing of dense haze imagery and examine the impact of loss function selection on image quality. Benchmarks are included to show the feasibility of introducing this approach into real-time systems. | One challenge in using neural networks for single image dehazing is processing high-resolution input. Several techniques in the 2018 NTIRE Image Dehazing Challenge handled the relatively high input resolution of the I-HAZE @cite_30 and O-HAZE @cite_0 datasets by cropping input imagery into many smaller frames or downsampling the input imagery and resizing the final outputs @cite_11 . These approaches are limited by total GPU memory and not GPU processing power; therefore, models with fewer parameters are capable of accepting higher-resolution input imagery. | {
"cite_N": [
"@cite_30",
"@cite_0",
"@cite_11"
],
"mid": [
"2962782447",
"",
"2899026108"
],
"abstract": [
"Haze removal or dehazing is a challenging ill-posed problem that has drawn a significant attention in the last few years. Despite this growing interest, the scientific community is still lacking a reference dataset to evaluate objectively and quantitatively the performance of proposed dehazing methods. The few datasets that are currently considered, both for assessment and training of learning-based dehazing techniques, exclusively rely on synthetic hazy images. To address this limitation, we introduce the first outdoor scenes database (named O-HAZE) composed of pairs of real hazy and corresponding haze-free images. In practice, hazy images have been captured in presence of real haze, generated by professional haze machines, and O-HAZE contains 45 different outdoor scenes depicting the same visual content recorded in haze-free and hazy conditions, under the same illumination parameters. To illustrate its usefulness, O-HAZE is used to compare a representative set of state-of-the-art dehazing techniques, using traditional image quality metrics such as PSNR, SSIM and CIEDE2000. This reveals the limitations of current techniques, and questions some of their underlying assumptions.",
"",
"This paper reviews the first challenge on image dehazing (restoration of rich details in hazy image) with focus on proposed solutions and results. The challenge had 2 tracks. Track 1 employed the indoor images (using I-HAZE dataset), while Track 2 outdoor images (using O-HAZE dataset). The hazy images have been captured in presence of real haze, generated by professional haze machines. I-HAZE dataset contains 35 scenes that correspond to indoor domestic environments, with objects with different colors and specularities. O-HAZE contains 45 different outdoor scenes depicting the same visual content recorded in haze-free and hazy conditions, under the same illumination parameters. The dehazing process was learnable through provided pairs of haze-free and hazy train images. Each track had 120 registered participants and 21 teams competed in the final testing phase. They gauge the state-of-the-art in image dehazing."
]
} |
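The crop-and-reassemble strategy described in the related-work text above (splitting high-resolution input into tiles that fit in GPU memory, processing each tile, then stitching the results) can be sketched as follows. The 2x2 tile size, the nested-list image representation, and the identity "model" are illustrative assumptions, not any challenge entry's actual pipeline.

```python
# Sketch of tile-based inference for memory-limited high-resolution input:
# split the image into non-overlapping tiles, run the model per tile, then
# reassemble the processed tiles into a full-resolution output.

def tile_image(img, th, tw):
    """Return (row, col, tile) triples for non-overlapping th x tw crops."""
    tiles = []
    for r in range(0, len(img), th):
        for c in range(0, len(img[0]), tw):
            tiles.append((r, c, [row[c:c + tw] for row in img[r:r + th]]))
    return tiles

def stitch(tiles, height, width):
    """Reassemble processed (row, col, tile) triples into one output grid."""
    out = [[0] * width for _ in range(height)]
    for r, c, tile in tiles:
        for i, row in enumerate(tile):
            out[r + i][c:c + len(row)] = row
    return out

image = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
model = lambda tile: tile  # placeholder for a per-tile dehazing network
processed = [(r, c, model(t)) for r, c, t in tile_image(image, 2, 2)]
print(stitch(processed, 4, 4) == image)  # True
```

Real pipelines typically use overlapping tiles with blending at the seams to hide boundary artifacts; the non-overlapping version above keeps the memory trade-off visible with minimal code.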
1904.09078 | 2936150952 | Abstract Classification using multimodal data arises in many machine learning applications. It is crucial not only to model cross-modal relationship effectively but also to ensure robustness against loss of part of data or modalities. In this paper, we propose a novel deep learning-based multimodal fusion architecture for classification tasks, which guarantees compatibility with any kind of learning models, deals with cross-modal information carefully, and prevents performance degradation due to partial absence of data. We employ two datasets for multimodal classification tasks, build models based on our architecture and other state-of-the-art models, and analyze their performance on various situations. The results show that our architecture outperforms the other multimodal fusion architectures when some parts of data are not available. | Multimodal data have been widely employed in recent decades @cite_5 . One of the most prominent types of multimodal data is video, which consists of image frames and audio signals @cite_22 @cite_24 @cite_42 . In addition, human activity recognition systems usually employ data obtained from multiple sensors, including motion capture systems, depth cameras, accelerometers, and microphones @cite_44 @cite_20 @cite_31 . There also exist several multimodal datasets in biological @cite_25 , chemical @cite_49 , or medical @cite_50 research. | {
"cite_N": [
"@cite_22",
"@cite_42",
"@cite_24",
"@cite_44",
"@cite_49",
"@cite_50",
"@cite_5",
"@cite_31",
"@cite_25",
"@cite_20"
],
"mid": [
"2113814270",
"2250384498",
"1830038491",
"2086509056",
"2086064090",
"1976272929",
"2584561145",
"2073401630",
"1604956938",
"2296311849"
],
"abstract": [
"The multimodal nature of speech is often ignored in human-computer interaction, but lip deformations and other body motion, such as those of the head, convey additional information. We integrate speech cues from many sources and this improves intelligibility, especially when the acoustic signal is degraded. The paper shows how this additional, often complementary, visual speech information can be used for speech recognition. Three methods for parameterizing lip image sequences for recognition using hidden Markov models are compared. Two of these are top-down approaches that fit a model of the inner and outer lip contours and derive lipreading features from a principal component analysis of shape or shape and appearance, respectively. The third, bottom-up, method uses a nonlinear scale-space analysis to form features directly from the pixel intensity. All methods are compared on a multitalker visual speech recognition task of isolated letters.",
"This publicly available curated dataset of almost 100 million photos and videos is free and legal for all.",
"This paper proposes a new method for bimodal information fusion in audio-visual speech recognition, where cross-modal association is considered in two levels. First, the acoustic and the visual data streams are combined at the feature level by using the canonical correlation analysis, which deals with the problems of audio-visual synchronization and utilizing the cross-modal correlation. Second, information streams are integrated at the decision level for adaptive fusion of the streams according to the noise condition of the given speech datum. Experimental results demonstrate that the proposed method is effective for producing noise-robust recognition performance without a priori knowledge about the noise conditions of the speech data.",
"Over the years, a large number of methods have been proposed to analyze human pose and motion information from images, videos, and recently from depth data. Most methods, however, have been evaluated on datasets that were too specific to each application, limited to a particular modality, and more importantly, captured under unknown conditions. To address these issues, we introduce the Berkeley Multimodal Human Action Database (MHAD) consisting of temporally synchronized and geometrically calibrated data from an optical motion capture system, multi-baseline stereo cameras from multiple views, depth sensors, accelerometers and microphones. This controlled multimodal dataset provides researchers an inclusive testbed to develop and benchmark new algorithms across multiple modalities under known capture conditions in various research domains. To demonstrate possible use of MHAD for action recognition, we compare results using the popular Bag-of-Words algorithm adapted to each modality independently with the results of various combinations of modalities using the Multiple Kernel Learning. Our comparative results show that multimodal analysis of human motion yields better action recognition rates than unimodal analysis.",
"Abstract Chemo-resistive transduction presents practical advantages for capturing the spatio-temporal and structural organization of chemical compounds dispersed in different human habitats. In an open sampling system, however, where the chemo-sensory elements are directly exposed to the environment being monitored, the identification and monitoring of chemical substances present a more difficult challenge due to the dispersion mechanisms of gaseous chemical analytes, namely diffusion, turbulence, and advection. The success of such actively changeable practice is influenced by the adequate implementation of algorithmically driven formalisms combined with the appropriate design of experimental protocols. On the basis of this functional joint-formulation, in this study we examine an innovative methodology based on the inhibitory processing mechanisms encountered in the structural assembly of the insect's brain, namely Inhibitory Support Vector Machine (ISVM) applied to training a sensor array platform and evaluate its capabilities relevant to odor detection and identification under complex environmental conditions. We generated — and made publicly available — an extensive and unique dataset with a chemical detection platform consisting of 72 conductometric met al-oxide based chemical sensors in a custom-designed wind tunnel test-bed facility to test our methodology. Our findings suggest that the aforementioned methodology can be a valuable tool to guide the decision of choosing the training conditions for a cost-efficient system calibration as well as an important step toward the understanding of the degradation level of the sensory system when the environmental conditions change.",
"Multimodal medical image fusion is an important task for the retrieval of complementary information from medical images. Shift sensitivity, lack of phase information and poor directionality of real valued wavelet transforms motivated us to use complex wavelet transform for fusion. We have used Daubechies complex wavelet transform (DCxWT) for image fusion which is approximately shift invariant and provides phase information. In the present work, we have proposed a new multimodal medical image fusion using DCxWT at multiple levels which is based on multiresolution principle. The proposed method fuses the complex wavelet coefficients of source images using maximum selection rule. Experiments have been performed over three different sets of multimodal medical images. The proposed fusion method is visually and quantitatively compared with wavelet domain (Dual tree complex wavelet transform (DTCWT), Lifting wavelet transform (LWT), Multiwavelet transform (MWT), Stationary wavelet transform (SWT)) and spatial domain (Principal component analysis (PCA), linear and sharp) image fusion methods. The proposed method is further compared with Contourlet transform (CT) and Nonsubsampled contourlet transform (NSCT) based image fusion methods. For comparison of the proposed method, we have used five fusion metrics, namely entropy, edge strength, standard deviation, fusion factor and fusion symmetry. Comparison results prove that performance of the proposed fusion method is better than any of the above existing fusion methods. Robustness of the proposed method is tested against Gaussian, salt & pepper and speckle noise and the plots of fusion metrics for different noise cases established the superiority of the proposed fusion method.",
"First review on affective computing that is dealing with both unimodal and multimodal analysis.The survey takes into account recent approaches, e.g., embeddings, which are missing from previous reviews.It covers and compares all state-of-the-art methods in details, while most available surveys just quickly describes them. Affective computing is an emerging interdisciplinary research field bringing together researchers and practitioners from various fields, ranging from artificial intelligence, natural language processing, to cognitive and social sciences. With the proliferation of videos posted online (e.g., on YouTube, Facebook, Twitter) for product reviews, movie reviews, political views, and more, affective computing research has increasingly evolved from conventional unimodal analysis to more complex forms of multimodal analysis. This is the primary motivation behind our first of its kind, comprehensive literature review of the diverse field of affective computing. Furthermore, existing literature surveys lack a detailed discussion of state of the art in multimodal affect analysis frameworks, which this review aims to address. Multimodality is defined by the presence of more than one modality or channel, e.g., visual, audio, text, gestures, and eye gage. In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90 of the relevant literature appears to cover these three modalities. Following an overview of different techniques for unimodal affect analysis, we outline existing methods for fusing information from different modalities. As part of this review, we carry out an extensive study of different categories of state-of-the-art fusion techniques, followed by a critical analysis of potential performance improvements with multimodal analysis compared to unimodal analysis. 
A comprehensive overview of these two complementary fields aims to form the building blocks for readers, to better understand this challenging and exciting research field.",
"There is a growing interest on using ambient and wearable sensors for human activity recognition, fostered by several application domains and wider availability of sensing technologies. This has triggered increasing attention on the development of robust machine learning techniques that exploits multimodal sensor setups. However, unlike other applications, there are no established benchmarking problems for this field. As a matter of fact, methods are usually tested on custom datasets acquired in very specific experimental setups. Furthermore, data is seldom shared between different groups. Our goal is to address this issue by introducing a versatile human activity dataset recorded in a sensor-rich environment. This database was the basis of an open challenge on activity recognition. We report here the outcome of this challenge, as well as baseline performance using different classification techniques. We expect this benchmarking database will motivate other researchers to replicate and outperform the presented results, thus contributing to further advances in the state-of-the-art of activity recognition methods.",
"Down syndrome (DS) is a chromosomal abnormality (trisomy of human chromosome 21) associated with intellectual disability and affecting approximately one in 1000 live births worldwide. The overexpression of genes encoded by the extra copy of a normal chromosome in DS is believed to be sufficient to perturb normal pathways and normal responses to stimulation, causing learning and memory deficits. In this work, we have designed a strategy based on the unsupervised clustering method, Self Organizing Maps (SOM), to identify biologically important differences in protein levels in mice exposed to context fear conditioning (CFC). We analyzed expression levels of 77 proteins obtained from normal genotype control mice and from their trisomic littermates (Ts65Dn) both with and without treatment with the drug memantine. Control mice learn successfully while the trisomic mice fail, unless they are first treated with the drug, which rescues their learning ability. The SOM approach identified reduced subsets of proteins predicted to make the most critical contributions to normal learning, to failed learning and rescued learning, and provides a visual representation of the data that allows the user to extract patterns that may underlie novel biological responses to the different kinds of learning and the response to memantine. Results suggest that the application of SOM to new experimental data sets of complex protein profiles can be used to identify common critical protein responses, which in turn may aid in identifying potentially more effective drug targets.",
"Human action recognition has a wide range of applications including biometrics, surveillance, and human computer interaction. The use of multimodal sensors for human action recognition is steadily increasing. However, there are limited publicly available datasets where depth camera and inertial sensor data are captured at the same time. This paper describes a freely available dataset, named UTD-MHAD, which consists of four temporally synchronized data modalities. These modalities include RGB videos, depth videos, skeleton positions, and inertial signals from a Kinect camera and a wearable inertial sensor for a comprehensive set of 27 human actions. Experimental results are provided to show how this database can be used to study fusion approaches that involve using both depth camera data and inertial sensor data. This public domain dataset is of benefit to multimodality research activities being conducted for human action recognition by various research groups."
]
} |
1904.09078 | 2936150952 | Abstract Classification using multimodal data arises in many machine learning applications. It is crucial not only to model cross-modal relationship effectively but also to ensure robustness against loss of part of data or modalities. In this paper, we propose a novel deep learning-based multimodal fusion architecture for classification tasks, which guarantees compatibility with any kind of learning models, deals with cross-modal information carefully, and prevents performance degradation due to partial absence of data. We employ two datasets for multimodal classification tasks, build models based on our architecture and other state-of-the-art models, and analyze their performance on various situations. The results show that our architecture outperforms the other multimodal fusion architectures when some parts of data are not available. | Many machine learning-based fusion approaches have been proposed to handle multimodal information for classification tasks. The most widely used conventional techniques are early integration (or data fusion) and late integration (or decision fusion) @cite_45 @cite_41 @cite_21 @cite_14 . Some studies employed both methods to take their benefits simultaneously @cite_41 . | {
"cite_N": [
"@cite_41",
"@cite_14",
"@cite_45",
"@cite_21"
],
"mid": [
"2053101950",
"1959353080",
"2134867751",
"1963753144"
],
"abstract": [
"This survey aims at providing multimedia researchers with a state-of-the-art overview of fusion strategies, which are used for combining multiple modalities in order to accomplish various multimedia analysis tasks. The existing literature on multimodal fusion research is presented through several classifications based on the fusion methodology and the level of fusion (feature, decision, and hybrid). The fusion methods are described from the perspective of the basic concept, advantages, weaknesses, and their usage in various analysis tasks as reported in the literature. Moreover, several distinctive issues that influence a multimodal fusion process such as, the use of correlation and independence, confidence level, contextual information, synchronization between different modalities, and the optimal modality selection are also highlighted. Finally, we present the open issues for further research in the area of multimodal fusion.",
"High dynamic range (HDR) imaging has been attracting much attention as a technology that can provide immersive experience. Its ultimate goal is to provide better quality of experience (QoE) via enhanced contrast. In this paper, we analyze perceptual experience of tone-mapped HDR videos both explicitly by conducting a subjective questionnaire assessment and implicitly by using EEG and peripheral physiological signals. From the results of the subjective assessment, it is revealed that tone-mapped HDR videos are more interesting and more natural, and give better quality than low dynamic range (LDR) videos. Physiological signals were recorded during watching tone-mapped HDR and LDR videos, and classification systems are constructed to explore perceptual difference captured by the physiological signals. Significant difference in the physiological signals is observed between tone-mapped HDR and LDR videos in the classification under both a subject-dependent and a subject-independent scenarios. Also, significant difference in the signals between high versus low perceived contrast and overall quality is detected via classification under the subject-dependent scenario. Moreover, it is shown that features extracted from the gamma frequency band are effective for classification.",
"Audio-visual speech recognition (AVSR) using acoustic and visual signals of speech has received attention because of its robustness in noisy environments. In this paper, we present a late integration scheme-based AVSR system whose robustness under various noise conditions is improved by enhancing the performance of the three parts composing the system. First, we improve the performance of the visual subsystem by using the stochastic optimization method for the hidden Markov models as the speech recognizer. Second, we propose a new method of considering dynamic characteristics of speech for improved robustness of the acoustic subsystem. Third, the acoustic and the visual subsystems are effectively integrated to produce final robust recognition results by using neural networks. We demonstrate the performance of the proposed methods via speaker-independent isolated word recognition experiments. The results show that the proposed system improves robustness over the conventional system under various noise conditions without a priori knowledge about the noise contained in the speech.",
"Abstract The purpose of this paper is twofold: (i) to investigate the emotion representation models and find out the possibility of a model with minimum number of continuous dimensions and (ii) to recognize and predict emotion from the measured physiological signals using multiresolution approach. The multimodal physiological signals are: Electroencephalogram (EEG) (32 channels) and peripheral (8 channels: Galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)) as given in the DEAP database. We have discussed the theories of emotion modeling based on i) basic emotions, ii) cognitive appraisal and physiological response approach and iii) the dimensional approach and proposed a three continuous dimensional representation model for emotions. The clustering experiment on the given valence, arousal and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions has also been proposed. Discrete Wavelet Transform, a classical transform for multiresolution analysis of signal has been used in this study. The experiments are performed to classify different emotions from four classifiers. The average accuracies are 81.45 , 74.37 , 57.74 and 75.94 for SVM, MLP, KNN and MMC classifiers respectively. The best accuracy is for ‘Depressing’ with 85.46 using SVM. The 32 EEG channels are considered as independent modes and features from each channel are considered with equal importance. May be some of the channel data are correlated but they may contain supplementary information. In comparison with the results given by others, the high accuracy of 85 with 13 emotions and 32 subjects from our proposed method clearly proves the potential of our multimodal fusion approach."
]
} |
1904.09078 | 2936150952 | Abstract Classification using multimodal data arises in many machine learning applications. It is crucial not only to model cross-modal relationship effectively but also to ensure robustness against loss of part of data or modalities. In this paper, we propose a novel deep learning-based multimodal fusion architecture for classification tasks, which guarantees compatibility with any kind of learning models, deals with cross-modal information carefully, and prevents performance degradation due to partial absence of data. We employ two datasets for multimodal classification tasks, build models based on our architecture and other state-of-the-art models, and analyze their performance on various situations. The results show that our architecture outperforms the other multimodal fusion architectures when some parts of data are not available. | In the early integration method, data of all modalities are concatenated at the initial stage and serve as a single input of the classifier @cite_2 . The data of each modality is usually converted to a feature vector before concatenation via, e.g., principal component analysis @cite_37 and linear discriminant analysis @cite_27 . Since the multimodal data serve as a single vector, any classification models that treat unimodal data can be easily adopted. In addition, the early integration approach considers the cross-modal correlations from the initial stages. However, it assumes perfect synchronization of different modalities, which may not provide the best performance for tasks requiring flexible synchrony modeling, e.g., audio-visual speech recognition @cite_24 @cite_9 . | {
"cite_N": [
"@cite_37",
"@cite_9",
"@cite_24",
"@cite_27",
"@cite_2"
],
"mid": [
"1998481243",
"2153219412",
"1830038491",
"1588662772",
"1989085630"
],
"abstract": [
"Most of the existing approaches of multimodal 2D+3D face recognition exploit the 2D and 3D information at the feature or score level. They do not fully benefit from the dependency between modalities. Exploiting this dependency at the early stage is more effective than the later stage. Early fusion data contains richer information about the input biometric than the compressed features or matching scores. We propose an image recombination for face recognition that explores the dependency between modalities at the image level. Facial cues from the 2D and 3D images are recombined into a more independent and discriminating data by finding transformation axes that account for the maximal amount of variances in the images. We also introduce a complete framework of multimodal 2D+3D face recognition that utilizes the 2D and 3D facial information at the enrollment, image and score levels. Experimental results based on NTU-CSP and Bosphorus 3D face databases show that our face recognition system using image recombination outperforms other face recognition systems based on the pixel- or score-level fusion.",
"Abstract This paper advocates that for some multimodal tasks involving more than one stream of data representing the same sequence of events, it might sometimes be a good idea to be able to desynchronize the streams in order to maximize their joint likelihood. We thus present a novel Hidden Markov Model architecture to model the joint probability of pairs of asynchronous sequences describing the same sequence of events. An Expectation–Maximization algorithm to train the model is presented, as well as a Viterbi decoding algorithm, which can be used to obtain the optimal state sequence as well as the alignment between the two sequences. The model was tested on two audio–visual speech processing tasks, namely speech recognition and text-dependent speaker verification, both using the M2VTS database. Robust performances under various noise conditions were obtained in both cases.",
"This paper proposes a new method for bimodal information fusion in audio-visual speech recognition, where cross-modal association is considered in two levels. First, the acoustic and the visual data streams are combined at the feature level by using the canonical correlation analysis, which deals with the problems of audio-visual synchronization and utilizing the cross-modal correlation. Second, information streams are integrated at the decision level for adaptive fusion of the streams according to the noise condition of the given speech datum. Experimental results demonstrate that the proposed method is effective for producing noise-robust recognition performance without a priori knowledge about the noise conditions of the speech data.",
"Multimodal biometrics has recently attracted substantial interest for its high performance in biometric recognition system. In this paper we introduce multimodal biometrics for face and palmprint images using fusion techniques at the feature level. Gabor based image processing is utilized to extract discriminant features, while principal component analysis (PCA) and linear discriminant analysis (LDA) are used to reduce the dimension of each modality. The output features of LDA are serially combined and classified by a Euclidean distance classifier. The experimental results based on ORL face and Poly-U palmprint databases proved that this fusion technique is able to increase biometric recognition rates compared to that produced by single modal biometrics.",
"Semantic analysis of multimodal video aims to index segments of interest at a conceptual level. In reaching this goal, it requires an analysis of several information streams. At some point in the analysis these streams need to be fused. In this paper, we consider two classes of fusion schemes, namely early fusion and late fusion. The former fuses modalities in feature space, the latter fuses modalities in semantic space. We show by experiment on 184 hours of broadcast video data and for 20 semantic concepts, that late fusion tends to give slightly better performance for most concepts. However, for those concepts where early fusion performs better the difference is more significant."
]
} |
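The early-integration recipe described in this row (per-modality dimensionality reduction, e.g. via PCA, followed by concatenation into a single input vector) can be sketched as follows. This is a minimal illustration, not code from the cited papers: the `pca_project` helper and the toy audio/video feature matrices are assumptions made for the example.

```python
import numpy as np

def pca_project(X, k):
    """Project the rows of X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions (right singular vectors).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def early_fuse(modalities, k=2):
    """Early integration: reduce each modality independently, then
    concatenate the reduced features into one vector per sample."""
    return np.hstack([pca_project(X, k) for X in modalities])

rng = np.random.default_rng(0)
audio = rng.normal(size=(10, 6))   # 10 samples, 6 acoustic features
video = rng.normal(size=(10, 8))   # 10 samples, 8 visual features
fused = early_fuse([audio, video], k=2)
print(fused.shape)  # (10, 4) -- one concatenated feature vector per sample
```

Because the fused output is an ordinary feature matrix, any unimodal classifier can consume it directly, which is the compatibility advantage the row mentions; the implicit assumption is that the two modalities are sample-aligned (perfectly synchronized).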
1904.09078 | 2936150952 | Abstract Classification using multimodal data arises in many machine learning applications. It is crucial not only to model cross-modal relationship effectively but also to ensure robustness against loss of part of data or modalities. In this paper, we propose a novel deep learning-based multimodal fusion architecture for classification tasks, which guarantees compatibility with any kind of learning models, deals with cross-modal information carefully, and prevents performance degradation due to partial absence of data. We employ two datasets for multimodal classification tasks, build models based on our architecture and other state-of-the-art models, and analyze their performance on various situations. The results show that our architecture outperforms the other multimodal fusion architectures when some parts of data are not available. | Unlike the early integration method, the late integration method constructs a separate classifier for each modality, trains the classifiers independently, and draws a final decision by combining outputs of the classifiers @cite_21 . Each separate classifier is optimized to the corresponding modality. There are various ways to make a decision in the late integration method, and it is known that combining with the weighted sum rule usually outperforms the multiplication rule @cite_33 . The late integration approach does not share any representations across different modalities, which results in ignoring the correlated characteristics among the modalities. | {
"cite_N": [
"@cite_21",
"@cite_33"
],
"mid": [
"1963753144",
"2158275940"
],
"abstract": [
"Abstract The purpose of this paper is twofold: (i) to investigate the emotion representation models and find out the possibility of a model with minimum number of continuous dimensions and (ii) to recognize and predict emotion from the measured physiological signals using multiresolution approach. The multimodal physiological signals are: Electroencephalogram (EEG) (32 channels) and peripheral (8 channels: Galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)) as given in the DEAP database. We have discussed the theories of emotion modeling based on i) basic emotions, ii) cognitive appraisal and physiological response approach and iii) the dimensional approach and proposed a three continuous dimensional representation model for emotions. The clustering experiment on the given valence, arousal and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions has also been proposed. Discrete Wavelet Transform, a classical transform for multiresolution analysis of signal has been used in this study. The experiments are performed to classify different emotions from four classifiers. The average accuracies are 81.45 , 74.37 , 57.74 and 75.94 for SVM, MLP, KNN and MMC classifiers respectively. The best accuracy is for ‘Depressing’ with 85.46 using SVM. The 32 EEG channels are considered as independent modes and features from each channel are considered with equal importance. May be some of the channel data are correlated but they may contain supplementary information. In comparison with the results given by others, the high accuracy of 85 with 13 emotions and 32 subjects from our proposed method clearly proves the potential of our multimodal fusion approach.",
"We develop a common theoretical framework for combining classifiers which use distinct pattern representations and show that many existing schemes can be considered as special cases of compound classification where all the pattern representations are used jointly to make a decision. An experimental comparison of various classifier combination schemes demonstrates that the combination rule developed under the most restrictive assumptions-the sum rule-outperforms other classifier combinations schemes. A sensitivity analysis of the various schemes to estimation errors is carried out to show that this finding can be justified theoretically."
]
} |
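The two decision-level combination rules this row compares — the weighted-sum rule and the multiplication rule — can be illustrated with a small sketch. The `late_fuse` helper and the toy per-modality class posteriors are assumptions made for the example, not an implementation from the cited works.

```python
import numpy as np

def late_fuse(probs_per_modality, weights=None, rule="sum"):
    """Combine per-modality class-posterior vectors at the decision level."""
    P = np.stack(probs_per_modality)      # (n_modalities, n_classes)
    if weights is None:
        weights = np.full(len(P), 1.0 / len(P))
    w = np.asarray(weights, dtype=float)[:, None]
    if rule == "sum":                     # weighted-sum rule
        fused = (w * P).sum(axis=0)
    elif rule == "product":               # multiplication (weighted product) rule
        fused = np.prod(P ** w, axis=0)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return fused / fused.sum()            # renormalize to a distribution

audio = np.array([0.6, 0.3, 0.1])         # posterior from the audio classifier
video = np.array([0.2, 0.5, 0.3])         # posterior from the video classifier
print(late_fuse([audio, video], weights=[0.7, 0.3], rule="sum"))
print(late_fuse([audio, video], rule="product"))
```

Note that each classifier is trained on its own modality in isolation; the fusion step sees only their output distributions, which is exactly why this scheme ignores cross-modal correlations.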
1904.09078 | 2936150952 | Abstract Classification using multimodal data arises in many machine learning applications. It is crucial not only to model cross-modal relationship effectively but also to ensure robustness against loss of part of data or modalities. In this paper, we propose a novel deep learning-based multimodal fusion architecture for classification tasks, which guarantees compatibility with any kind of learning models, deals with cross-modal information carefully, and prevents performance degradation due to partial absence of data. We employ two datasets for multimodal classification tasks, build models based on our architecture and other state-of-the-art models, and analyze their performance on various situations. The results show that our architecture outperforms the other multimodal fusion architectures when some parts of data are not available. | Some researchers developed more sophisticated multimodal integration approaches that regulate the degree of contribution of each modality for classification tasks. Bharadwaj @cite_38 proposed a context switching-based person identification system, which prioritizes the multimodal data by measuring their quality and chooses an appropriate classifier. Goswami @cite_17 employed the pool adjacent violators algorithm for combining multiple classifiers, which calibrates the outputs of the classifiers with respect to their confidence values. Choi and Lee @cite_11 developed an activity recognition model, which controls the amount of information for each modality by measuring its reliability. These imply that considering the significance of each modality's data is beneficial to improving the classification performance. | {
"cite_N": [
"@cite_38",
"@cite_11",
"@cite_17"
],
"mid": [
"308789809",
"2899475803",
"2607930008"
],
"abstract": [
"Abstract Biometrics, the science of verifying the identity of individuals, is increasingly being used in several applications such as assisting law enforcement agencies to control crime and fraud. Existing techniques are unable to provide significant levels of accuracy in uncontrolled noisy environments. Further, scalability is another challenge due to variations in data distribution with changing conditions. This paper presents an adaptive context switching algorithm coupled with online learning to address both these challenges. The proposed framework, termed as QFuse , uses the quality of input images to dynamically select the best biometric matcher or fusion algorithm to verify the identity of an individual. The proposed algorithm continuously updates the selection process using online learning to address the scalability and accommodate the variations in data distribution. The results on the WVU multimodal database and a large real world multimodal database obtained from a law enforcement agency show the efficacy of the proposed framework.",
"Human activity recognition using multimodal sensors is widely studied in recent days. In this paper, we propose an end-to-end deep learning model for activity recognition, which fuses features of multiple modalities based on their confidence scores that are automatically determined. The confidence scores efficiently regulate the level of contribution of each sensor. We conduct an experiment on the latest activity recognition dataset. The results confirm that our model outperforms existing methods. We submit the proposed model to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge [23] with the team name \"Yonsei-MCML.\"",
"Classifier fusion is a well-studied problem in which decisions from multiple classifiers are combined at the score, rank, or decision level to obtain better results than a single classifier. Subsequently, various techniques for combining classifiers at each of these levels have been proposed in the literature. Many popular methods entail scaling and normalizing the scores obtained by each classifier to a common numerical range before combining the normalized scores using the sum rule or another classifier. In this research, we explore an alternative method to combine classifiers at the score level. The Pool Adjacent Violators (PAV) algorithm has traditionally been utilized to convert classifier match scores to confidence values that model posterior probabilities for test data. The PAV algorithm and other score normalization techniques have studied the same problem without being aware of each other. In this first ever study to combine the two, we propose the PAV algorithm for classifier fusion on publicly available NIST multi-modal biometrics score dataset. We observe that it provides several advantages over existing techniques and find that the interpretation learned by the PAV algorithm is more robust than the scaling learned by other popular normalization algorithms such as min-max. Moreover, the PAV algorithm enables the combined score to be interpreted as confidence and is able to further improve the results obtained by other approaches. We also observe that utilizing traditional normalization techniques first for individual classifiers and then normalizing the fused score using PAV offers a performance boost compared to only using the PAV algorithm."
]
} |
1904.09078 | 2936150952 | Abstract Classification using multimodal data arises in many machine learning applications. It is crucial not only to model cross-modal relationship effectively but also to ensure robustness against loss of part of data or modalities. In this paper, we propose a novel deep learning-based multimodal fusion architecture for classification tasks, which guarantees compatibility with any kind of learning models, deals with cross-modal information carefully, and prevents performance degradation due to partial absence of data. We employ two datasets for multimodal classification tasks, build models based on our architecture and other state-of-the-art models, and analyze their performance on various situations. The results show that our architecture outperforms the other multimodal fusion architectures when some parts of data are not available. | Some researchers employed data processing to handle missing data in deep learning architectures. Ordóñez and Roggen used a linear interpolation method to handle missing data for human activity recognition tasks @cite_52 . Ngiam trained their audio-visual bimodal deep network by feeding zero values in the case of missing data @cite_4 . Eitel analyzed the pattern of noise existing in the depth data and augmented the training data by the observed noise pattern for object recognition using color and depth images @cite_18 . These methods may partially improve the robustness against missing data; however, they only provide simple workarounds and do not solve the issue fundamentally via learning or modeling. Jaques introduced an approach where a pre-trained autoencoder estimates the original values for the missing part of the input data @cite_30 . Nevertheless, the multimodal integration may fail if the estimation is not sufficiently accurate. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_4",
"@cite_52"
],
"mid": [
"2787801451",
"2963956866",
"2184188583",
"2270470215"
],
"abstract": [
"To accomplish forecasting of mood in real-world situations, affective computing systems need to collect and learn from multimodal data collected over weeks or months of daily use. Such systems are likely to encounter frequent data loss, e.g. when a phone loses location access, or when a sensor is recharging. Lost data can handicap classifiers trained with all modalities present in the data. This paper describes a new technique for handling missing multimodal data using a specialized denoising autoencoder: the Multimodal Autoencoder (MMAE). Empirical results from over 200 participants and 5500 days of data demonstrate that the MMAE is able to predict the feature values from multiple missing modalities more accurately than reconstruction methods such as principal components analysis (PCA). We discuss several practical benefits of the MMAE's encoding and show that it can provide robust mood prediction even when up to three quarters of the data sources are lost.",
"Robust object recognition is a crucial ingredient of many, if not all, real-world robotics applications. This paper leverages recent progress on Convolutional Neural Networks (CNNs) and proposes a novel RGB-D architecture for object recognition. Our architecture is composed of two separate CNN processing streams - one for each modality - which are consecutively combined with a late fusion network. We focus on learning with imperfect sensor data, a typical problem in real-world robotics tasks. For accurate learning, we introduce a multi-stage training methodology and two crucial ingredients for handling depth data with CNNs. The first, an effective encoding of depth information for CNNs that enables learning without the need for large depth datasets. The second, a data augmentation scheme for robust learning with depth images by corrupting them with realistic noise patterns. We present state-of-the-art results on the RGB-D object dataset [15] and show recognition in challenging RGB-D real-world noisy settings.",
"Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.",
"Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4 on average; outperforming some of the previous reported results by up to 9 . Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation."
]
} |
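Two of the simple workarounds this row discusses — feeding zeros for an absent modality and linearly interpolating gaps in a sensor stream — can be sketched as below. The NaN-coded toy stream and the two helper functions are assumptions made for the illustration, not the cited authors' implementations.

```python
import numpy as np

def zero_fill(x):
    """Feed zeros where a modality's values are missing (NaN-coded)."""
    return np.where(np.isnan(x), 0.0, x)

def linear_interpolate(x):
    """Fill NaN gaps in a 1-D sensor stream by linear interpolation
    between the nearest observed samples."""
    x = np.asarray(x, dtype=float).copy()
    idx = np.arange(len(x))
    missing = np.isnan(x)
    x[missing] = np.interp(idx[missing], idx[~missing], x[~missing])
    return x

stream = np.array([1.0, np.nan, np.nan, 4.0])
print(zero_fill(stream))           # [1. 0. 0. 4.]
print(linear_interpolate(stream))  # [1. 2. 3. 4.]
```

As the row notes, both are workarounds rather than learned solutions: zero-filling injects a value the network must learn to ignore, and interpolation assumes the signal varies smoothly across the gap.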
1904.09120 | 2936801852 | The segmentation of pancreas is important for medical image analysis, yet it faces great challenges of class imbalance, background distractions, and non-rigid geometrical features. To address these difficulties, we introduce a deep Q network (DQN) driven approach with deformable U-Net to accurately segment the pancreas by explicitly interacting with contextual information and extract anisotropic features from pancreas. The DQN-based model learns a context-adaptive localization policy to produce a visually tightened and precise localization bounding box of the pancreas. Furthermore, deformable U-Net captures geometry-aware information of pancreas by learning geometrically deformable filters for feature extraction. The experiments on NIH dataset validate the effectiveness of the proposed framework in pancreas segmentation. | Pancreas segmentation is one of the hot topics in medical image segmentation, which also covers substructures such as vessels @cite_28 and lesions @cite_29 @cite_21 . Deep learning has been widely used in the medical segmentation domain. @cite_18 proposes a registration-free deep learning based segmentation algorithm to segment eight organs including the pancreas. @cite_6 proposes a hybrid densely connected UNet, H-DenseUNet, to segment tumors and livers. @cite_16 presents the Dense-Res-Inception Net to address the general challenges of medical image segmentation. The pancreas has distinctive features of its own; to deal with its great anatomical variability, multi-pass structures have been proposed for more accurate segmentation. @cite_8 applied a pancreas proposal generation algorithm followed by a proposal refinement convolutional network. This framework is further improved in @cite_1 by a holistically-nested segmentation network. @cite_26 @cite_25 designed convolutional network models to localize and segment the pancreas in a cyclic manner. In the proposed models, every segmentation stage takes its last segmented zone as its input and generates a new segmentation map. @cite_2 proposes a successive 3D coarse-to-fine segmentation model, consisting of a coarse segmentation network and a fine segmentation network. The coarse-to-fine model utilizes the by-pass structure of ResNet @cite_10 and reaches a state-of-the-art mean DSC of $84.59 \pm 4.86\%$. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_2",
"@cite_16",
"@cite_10",
"@cite_25"
],
"mid": [
"2791680898",
"2618237340",
"855272188",
"2327793514",
"2589409328",
"2771088639",
"2469107318",
"2964227007",
"",
"2799597343",
"2194775991",
"2771252144"
],
"abstract": [
"Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.",
"Deep neural networks have been widely adopted for automatic organ segmentation from abdominal CT scans. However, the segmentation accuracy of some small organs (e.g., the pancreas) is sometimes below satisfaction, arguably because deep networks are easily disrupted by the complex and variable background regions which occupies a large fraction of the input volume. In this paper, we formulate this problem into a fixed-point model which uses a predicted segmentation mask to shrink the input region. This is motivated by the fact that a smaller input region often leads to more accurate segmentation. In the training process, we use the ground-truth annotation to generate accurate input regions and optimize network weights. On the testing stage, we fix the network parameters and update the segmentation results in an iterative manner. We evaluate our approach on the NIH pancreas segmentation dataset, and outperform the state-of-the-art by more than (4 ), measured by the average Dice-Sorensen Coefficient (DSC). In addition, we report (62.43 ) DSC in the worst case, which guarantees the reliability of our approach in clinical applications.",
"Automatic organ segmentation is an important yet challenging problem for medical image analysis. The pancreas is an abdominal organ with very high anatomical variability. This inhibits previous segmentation methods from achieving high accuracies, especially compared to other organs such as the liver, heart or kidneys. In this paper, we present a probabilistic bottom-up approach for pancreas segmentation in abdominal computed tomography CT scans, using multi-level deep convolutional networks ConvNets. We propose and evaluate several variations of deep ConvNets in the context of hierarchical, coarse-to-fine classification on image patches and regions, i.e. superpixels. We first present a dense labeling of local image patches via P-ConvNet and nearest neighbor fusion. Then we describe a regional ConvNet R1-ConvNet that samples a set of bounding boxes around each image superpixel at different scales of contexts in a \"zoom-out\" fashion. Our ConvNets learn to assign class probabilities for each superpixel region of being pancreas. Last, we study a stacked R2-ConvNet leveraging the joint space of CT intensities and the P-ConvNet dense probability maps. Both 3D Gaussian smoothing and 2D conditional random fields are exploited as structured predictions for post-processing. We evaluate on CT images of [InlineEquation not available: see fulltext.] patients in 4-fold cross-validation. We achieve a Dice Similarity Coefficient of 83.6±6.3 in training and 71.8±10.7 in testing.",
"The condition of the vascular network of human eye is an important diagnostic factor in ophthalmology. Its segmentation in fundus imaging is a nontrivial task due to variable size of vessels, relatively low contrast, and potential presence of pathologies like microaneurysms and hemorrhages. Many algorithms, both unsupervised and supervised, have been proposed for this purpose in the past. We propose a supervised segmentation technique that uses a deep neural network trained on a large (up to 400 @math 000) sample of examples preprocessed with global contrast normalization, zero-phase whitening, and augmented using geometric transformations and gamma corrections. Several variants of the method are considered, including structured prediction, where a network classifies multiple pixels simultaneously. When applied to standard benchmarks of fundus imaging, the DRIVE, STARE, and CHASE databases, the networks significantly outperform the previous algorithms on the area under ROC curve measure (up to @math ) and accuracy of classification (up to @math ). The method is also resistant to the phenomenon of central vessel reflex, sensitive in detection of fine vessels ( @math ), and fares well on pathological cases.",
"Abstract In this paper, we present a novel automated method for White Matter (WM) lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNN). The first network is trained to be more sensitive revealing possible candidate lesion voxels while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small ( n ≤ 35 ) set of labeled data of the same MRI contrast, which can be very interesting in practice, given the difficulty to obtain manual label annotations and the large amount of available unlabeled Magnetic Resonance Imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with respect to other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where the performance of our method is also compared with different recent public available state-of-the-art MS lesion segmentation methods. At the time of writing this paper, our method is the best ranked approach on the MICCAI2008 challenge, outperforming the rest of 60 participant methods when using all the available input modalities (T1-w, T2-w and FLAIR), while still in the top-rank (3rd position) when using only T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in the accuracy segmenting of WM lesions when compared with the rest of evaluated methods, highly correlating ( r ≥ 0.97 ) also with the expected lesion volume.",
"A fully automatic technique for segmenting the liver and localizing its unhealthy tissues is a convenient tool in order to diagnose hepatic diseases and assess the response to the according treatments. In this work we propose a method to segment the liver and its lesions from Computed Tomography (CT) scans using Convolutional Neural Networks (CNNs), that have proven good results in a variety of computer vision tasks, including medical imaging. The network that segments the lesions consists of a cascaded architecture, which first focuses on the region of the liver in order to segment the lesions on it. Moreover, we train a detector to localize the lesions, and mask the results of the segmentation network with the positive detections. The segmentation architecture is based on DRIU, a Fully Convolutional Network (FCN) with side outputs that work on feature maps of different resolutions, to finally benefit from the multi-scale information learned by different stages of the network. The main contribution of this work is the use of a detector to localize the lesions, which we show to be beneficial to remove false positives triggered by the segmentation network. Source code and models are available at this https URL .",
"Accurate automatic organ segmentation is an important yet challenging problem for medical image analysis. The pancreas is an abdominal organ with very high anatomical variability. This inhibits traditional segmentation methods from achieving high accuracies, especially compared to other organs such as the liver, heart or kidneys. In this paper, we present a holistic learning approach that integrates semantic mid-level cues of deeply-learned organ interior and boundary maps via robust spatial aggregation using random forest. Our method generates boundary preserving pixel-wise class labels for pancreas segmentation. Quantitative evaluation is performed on CT scans of 82 patients in 4-fold cross-validation. We achieve a (mean ± std. dev.) Dice Similarity Coefficient of 78.01% ± 8.2% in testing which significantly outperforms the previous state-of-the-art approach of 71.8% ± 10.7% under the same evaluation criterion.",
"Liver cancer is one of the leading causes of cancer death. To assist doctors in hepatocellular carcinoma diagnosis and treatment planning, an accurate and automatic liver and tumor segmentation method is highly demanded in the clinical practice. Recently, fully convolutional neural networks (FCNs), including 2-D and 3-D FCNs, serve as the backbone in many volumetric image segmentation. However, 2-D convolutions cannot fully leverage the spatial information along the third dimension while 3-D convolutions suffer from high computational cost and GPU memory consumption. To address these issues, we propose a novel hybrid densely connected UNet (H-DenseUNet), which consists of a 2-D DenseUNet for efficiently extracting intra-slice features and a 3-D counterpart for hierarchically aggregating volumetric contexts under the spirit of the auto-context algorithm for liver and tumor segmentation. We formulate the learning process of the H-DenseUNet in an end-to-end manner, where the intra-slice representations and inter-slice features can be jointly optimized through a hybrid feature fusion layer. We extensively evaluated our method on the data set of the MICCAI 2017 Liver Tumor Segmentation Challenge and 3DIRCADb data set. Our method outperformed other state-of-the-arts on the segmentation results of tumors and achieved very competitive performance for liver segmentation even with a single model.",
"",
"Convolutional neural networks (CNNs) have revolutionized medical image analysis over the past few years. The U-Net architecture is one of the most well-known CNN architectures for semantic segmentation and has achieved remarkable successes in many different medical image segmentation applications. The U-Net architecture consists of standard convolution layers, pooling layers, and upsampling layers. These convolution layers learn representative features of input images and construct segmentations based on the features. However, the features learned by standard convolution layers are not distinctive when the differences among different categories are subtle in terms of intensity, location, shape, and size. In this paper, we propose a novel CNN architecture, called Dense-Res-Inception Net (DRINet), which addresses this challenging problem. The proposed DRINet consists of three blocks, namely a convolutional block with dense connections, a deconvolutional block with residual inception modules, and an unpooling block. Our proposed architecture outperforms the U-Net in three different challenging applications, namely multi-class segmentation of cerebrospinal fluid on brain CT images, multi-organ segmentation on abdominal CT images, and multi-class brain tumor segmentation on MR images.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region in the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach [46], which used prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm dealt with two stages individually, which lacked optimizing a global energy function, and limited its ability to incorporate multi-stage visual cues. Missing contextual information led to unsatisfying convergence in iterations, and that the fine stage sometimes produced even lower segmentation accuracy than the coarse stage. This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration as spatial weights and applies these weights to the current iteration. This brings us two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout iterations to improve segmentation accuracy. Experiments in the NIH pancreas segmentation dataset demonstrate the state-of-the-art accuracy, which outperforms the previous best by an average of over 2%. Much higher accuracies are also reported on several small organs in a larger dataset collected by ourselves. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice."
]
} |
1904.09120 | 2936801852 | The segmentation of pancreas is important for medical image analysis, yet it faces great challenges of class imbalance, background distractions, and non-rigid geometrical features. To address these difficulties, we introduce a deep Q network (DQN) driven approach with deformable U-Net to accurately segment the pancreas by explicitly interacting with contextual information and extract anisotropic features from pancreas. The DQN-based model learns a context-adaptive localization policy to produce a visually tightened and precise localization bounding box of the pancreas. Furthermore, deformable U-Net captures geometry-aware information of pancreas by learning geometrically deformable filters for feature extraction. The experiments on NIH dataset validate the effectiveness of the proposed framework in pancreas segmentation. | Image segmentation has always been a fundamental and widely discussed problem in computer vision @cite_11 @cite_24 . After Fully convolutional network (FCN) @cite_27 was proposed, numerous deep convolutional networks have been designed to solve pixel-wise segmentation problems. @cite_20 and @cite_9 presented deep encoder-decoder structures to extract features from input image and generate dense segmentation map from feature maps. @cite_31 proposed an elegant network, which consists of multiple cross-layer concatenations, to learn from small amount of data, especially in medical image analysis. However, the geometric transformations are assumed to be fixed and known within these networks. To deal with this problem, deformable convolution is brought up in @cite_22 as an alternative to standard convolution, allowing adaptive deformation in scale and shape of receptive field. | {
"cite_N": [
"@cite_22",
"@cite_9",
"@cite_24",
"@cite_27",
"@cite_31",
"@cite_20",
"@cite_11"
],
"mid": [
"2601564443",
"1745334888",
"1909952827",
"1903029394",
"1901129140",
"2963881378",
"2480078828"
],
"abstract": [
"Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in their building modules. In this work, we introduce two new modules to enhance the transformation modeling capability of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from the target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the performance of our approach. For the first time, we show that learning dense spatial transformation in deep CNNs is effective for sophisticated vision tasks such as object detection and semantic segmentation. The code is released at https://github.com/msracver/Deformable-ConvNets.",
"We propose a novel semantic segmentation algorithm by learning a deep deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixelwise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction, our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5%) among the methods trained without using Microsoft COCO dataset through ensemble with the fully convolutional network.",
"Research in texture recognition often concentrates on the problem of material recognition in uncluttered conditions, an assumption rarely met by applications. In this work we conduct a first study of material and describable texture attributes recognition in clutter, using a new dataset derived from the OpenSurface texture repository. Motivated by the challenge posed by this problem, we propose a new texture descriptor, FV-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank. FV-CNN substantially improves the state-of-the-art in texture, material and scene recognition. Our approach achieves 79.8% accuracy on Flickr material dataset and 81% accuracy on MIT indoor scenes, providing absolute gains of more than 10% over existing approaches. FV-CNN easily transfers across domains without requiring feature adaptation as for methods that build on the fully-connected layers of CNNs. Furthermore, FV-CNN can seamlessly incorporate multi-scale information and describe regions of arbitrary shapes and sizes. Our approach is particularly suited at localizing “stuff” categories and obtains state-of-the-art results on MSRC segmentation dataset, as well as promising results on recognizing materials and surface attributes in clutter on the OpenSurfaces dataset.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.",
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. 
We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/.",
"We propose a deep Convolutional Neural Network (CNN) for land cover mapping in remote sensing images, with a focus on urban areas. In remote sensing, class imbalance represents often a problem for tasks like land cover mapping, as small objects get less prioritised in an effort to achieve the best overall accuracy. We propose a novel approach to achieve high overall accuracy, while still achieving good accuracy for small objects. Quantifying the uncertainty on a pixel scale is another challenge in remote sensing, especially when using CNNs. In this paper we use recent advances in measuring uncertainty for CNNs and evaluate their quality both qualitatively and quantitatively in a remote sensing context. We demonstrate our ideas on different deep architectures including patch-based and so-called pixel-to-pixel approaches, as well as their combination, by classifying each pixel in a set of aerial images covering Vaihingen, Germany. The results show that we obtain an overall classification accuracy of 87%. The corresponding F1-score for the small object class \"car\" is 80.6%, which is higher than state-of-the-art for this dataset."
]
} |
1904.09117 | 2938174633 | We present a self-supervised learning approach for optical flow. Our method distills reliable flow estimations from non-occluded pixels, and uses these predictions as ground truth to learn optical flow for hallucinated occlusions. We further design a simple CNN to utilize temporal information from multiple frames for better flow estimation. These two principles lead to an approach that yields the best performance for unsupervised optical flow learning on the challenging benchmarks including MPI Sintel, KITTI 2012 and 2015. More notably, our self-supervised pre-trained model provides an excellent initialization for supervised fine-tuning. Our fine-tuned models achieve state-of-the-art results on all three datasets. At the time of writing, we achieve EPE=4.26 on the Sintel benchmark, outperforming all submitted methods. | Classical Optical Flow Estimation. Classical variational approaches model optical flow estimation as an energy minimization problem based on brightness constancy and spatial smoothness @cite_17 . Such methods are effective for small motion, but tend to fail when displacements are large. Later works integrate feature matching to initialize sparse matching, and then interpolate into dense flow maps in a pyramidal coarse-to-fine manner @cite_47 @cite_26 @cite_36 . Recent works use convolutional neural networks (CNNs) to improve sparse matching by learning an effective feature embedding @cite_29 @cite_24 . However, these methods are often computationally expensive and can not be trained end-to-end. One natural extension to improve robustness and accuracy for flow estimation is to incorporate temporal information over multiple frames. A straightforward way is to add temporal constraints such as constant velocity @cite_41 @cite_3 @cite_12 , constant acceleration @cite_37 @cite_28 , low-dimensional linear subspace @cite_46 , or rigid non-rigid segmentation @cite_31 . 
While these formulations are elegant and well-motivated, our method is much simpler and does not rely on any assumption of the data. Instead, our approach directly learns optical flow for a much wider range of challenging cases existing in the data. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_31",
"@cite_36",
"@cite_41",
"@cite_29",
"@cite_28",
"@cite_46",
"@cite_3",
"@cite_24",
"@cite_47",
"@cite_12",
"@cite_17"
],
"mid": [
"2145467074",
"2113221323",
"2611683220",
"1951289974",
"2737529999",
"2609338489",
"2157719643",
"2088871573",
"988642845",
"2963219046",
"2131747574",
"2154220963",
"1578285471"
],
"abstract": [
"Despite the fact that temporal coherence is undeniably one of the key aspects when processing video data, this concept has hardly been exploited in recent optical flow methods. In this paper, we will present a novel parametrization for multi-frame optical flow computation that naturally enables us to embed the assumption of a temporally coherent spatial flow structure, as well as the assumption that the optical flow is smooth along motion trajectories. While the first assumption is realized by expanding spatial regularization over multiple frames, the second assumption is imposed by two novel first- and second-order trajectorial smoothness terms. With respect to the latter, we investigate an adaptive decision scheme that makes a local (per pixel) or global (per sequence) selection of the most appropriate model possible. Experiments show the clear superiority of our approach when compared to existing strategies for imposing temporal coherence. Moreover, we demonstrate the state-of-the-art performance of our method by achieving Top 3 results at the widely used Middlebury benchmark.",
"Optical flow computation is a key component in many computer vision systems designed for tasks such as action detection or activity recognition. However, despite several major advances over the last decade, handling large displacement in optical flow remains an open problem. Inspired by the large displacement optical flow of Brox and Malik, our approach, termed DeepFlow, blends a matching algorithm with a variational approach for optical flow. We propose a descriptor matching algorithm, tailored to the optical flow problem, that allows to boost performance on fast motions. The matching algorithm builds upon a multi-stage architecture with 6 layers, interleaving convolutions and max-pooling, a construction akin to deep convolutional nets. Using dense sampling, it allows to efficiently retrieve quasi-dense correspondences, and enjoys a built-in smoothing effect on descriptors matches, a valuable asset for integration into an energy minimization framework for optical flow estimation. DeepFlow efficiently handles large displacements occurring in realistic videos, and shows competitive performance on optical flow benchmarks. Furthermore, it sets a new state-of-the-art on the MPI-Sintel dataset.",
"The optical flow of natural scenes is a combination of the motion of the observer and the independent motion of objects. Existing algorithms typically focus on either recovering motion and structure under the assumption of a purely static world or optical flow for general unconstrained scenes. We combine these approaches in an optical flow algorithm that estimates an explicit segmentation of moving objects from appearance and physical constraints. In static regions we take advantage of strong constraints to jointly estimate the camera motion and the 3D structure of the scene over multiple frames. This allows us to also regularize the structure instead of the motion. Our formulation uses a Plane+Parallax framework, which works even under small baselines, and reduces the motion estimation to a one-dimensional search problem, resulting in more accurate estimation. In moving regions the flow is treated as unconstrained, and computed with an existing optical flow method. The resulting Mostly-Rigid Flow (MR-Flow) method achieves state-of-the-art results on both the MPI-Sintel and KITTI-2015 benchmarks.",
"We propose a novel approach for optical flow estimation, targeted at large displacements with significant occlusions. It consists of two steps: i) dense matching by edge-preserving interpolation from a sparse set of matches; ii) variational energy minimization initialized with the dense matches. The sparse-to-dense interpolation relies on an appropriate choice of the distance, namely an edge-aware geodesic distance. This distance is tailored to handle occlusions and motion boundaries - two common and difficult issues for optical flow computation. We also propose an approximation scheme for the geodesic distance to allow fast computation without loss of performance. Subsequent to the dense interpolation step, standard one-level variational energy minimization is carried out on the dense matches to obtain the final flow estimation. The proposed approach, called Edge-Preserving Interpolation of Correspondences (EpicFlow) is fast and robust to large displacements. It significantly outperforms the state of the art on MPI-Sintel and performs on par on Kitti and Middlebury.",
"Existing optical flow datasets are limited in size and variability due to the difficulty of capturing dense ground truth. In this paper, we tackle this problem by tracking pixels through densely sampled space-time volumes recorded with a high-speed video camera. Our model exploits the linearity of small motions and reasons about occlusions from multiple frames. Using our technique, we are able to establish accurate reference flow fields outside the laboratory in natural environments. Besides, we show how our predictions can be used to augment the input images with realistic motion blur. We demonstrate the quality of the produced flow fields on synthetic and real-world datasets. Finally, we collect a novel challenging optical flow dataset by applying our technique on data from a high-speed camera and analyze the performance of the state-of-the-art in optical flow under various levels of motion blur.",
"We present an optical flow estimation approach that operates on the full four-dimensional cost volume. This direct approach shares the structural benefits of leading stereo matching pipelines, which are known to yield high accuracy. To this day, such approaches have been considered impractical due to the size of the cost volume. We show that the full four-dimensional cost volume can be constructed in a fraction of a second due to its regularity. We then exploit this regularity further by adapting semi-global matching to the four-dimensional setting. This yields a pipeline that achieves significantly higher accuracy than state-of-the-art optical flow methods while being faster than most. Our approach outperforms all published general-purpose optical flow methods on both Sintel and KITTI 2015 benchmarks.",
"A novel approach to incrementally estimating visual motion over a sequence of images is presented. The authors start by formulating constraints on image motion to account for the possibility of multiple motions. This is achieved by exploiting the notions of weak continuity and robust statistics in the formulation of a minimization problem. The resulting objective function is non-convex. Traditional stochastic relaxation techniques for minimizing such functions prove inappropriate for the task. A highly parallel incremental stochastic minimization algorithm is presented which has a number of advantages over previous approaches. The incremental nature of the scheme makes it dynamic and permits the detection of occlusion and disocclusion boundaries.",
"Shows that the set of all flow fields in a sequence of frames imaging a rigid scene resides in a low-dimensional linear subspace. Based on this observation, we develop a method for simultaneous estimation of optical flow across multiple frames, which uses these subspace constraints. The multi-frame subspace constraints are strong constraints, and they replace commonly used heuristic constraints, such as spatial or temporal smoothness. The subspace constraints are geometrically meaningful and are not violated at depth discontinuities or when the camera motion changes abruptly. Furthermore, we show that the subspace constraints on flow fields apply for a variety of imaging models, scene models and motion models. Hence, the presented approach for constrained multi-frame flow estimation is general. However, our approach does not require prior knowledge of the underlying world or camera model. Although linear subspace constraints have been used successfully in the past for recovering 3D information, it has been assumed that 2D correspondences are given. However, correspondence estimation is a fundamental problem in motion analysis. In this paper, we use multi-frame subspace constraints to constrain the 2D correspondence estimation process itself, and not for 3D recovery.",
"Optical flow research has made significant progress in recent years and it can now be computed efficiently and accurately for many images. However, complex motions, large displacements, and difficult imaging conditions are still problematic. In this paper, we present a framework for estimating optical flow which leads to improvements on these difficult cases by 1) estimating occlusions and 2) using additional temporal information. First, we divide the image into discrete triangles and show how this allows for occluded regions to be naturally estimated and directly incorporated into the optimization algorithm. We additionally propose a novel method of dealing with temporal information in image sequences by using “inertial estimates” of the flow. These estimates are combined using a classifier-based fusion scheme, which significantly improves results. These contributions are evaluated on three different optical flow datasets, and we achieve state-of-the-art results on MPI-Sintel.",
"Learning based approaches have not yet achieved their full potential in optical flow estimation, where their performance still trails heuristic approaches. In this paper, we present a CNN based patch matching approach for optical flow estimation. An important contribution of our approach is a novel thresholded loss for Siamese networks. We demonstrate that our loss performs clearly better than existing losses. It also allows to speed up training by a factor of 2 in our tests. Furthermore, we present a novel way for calculating CNN based features for different image scales, which performs better than existing methods. We also discuss new ways of evaluating the robustness of trained features for the application of patch matching for optical flow. An interesting discovery in our paper is that low-pass filtering of feature maps can increase the robustness of features created by CNNs. We proved the competitive performance of our approach by submitting it to the KITTI 2012, KITTI 2015 and MPI-Sintel evaluation portals where we obtained state-of-the-art results on all three datasets.",
"Optical flow estimation is classically marked by the requirement of dense sampling in time. While coarse-to-fine warping schemes have somehow relaxed this constraint, there is an inherent dependency between the scale of structures and the velocity that can be estimated. This particularly renders the estimation of detailed human motion problematic, as small body parts can move very fast. In this paper, we present a way to approach this problem by integrating rich descriptors into the variational optical flow setting. This way we can estimate a dense optical flow field with almost the same high accuracy as known from variational optical flow, while reaching out to new domains of motion analysis where the requirement of dense sampling in time is no longer satisfied.",
"Layered models are a powerful way of describing natural scenes containing smooth surfaces that may overlap and occlude each other. For image motion estimation, such models have a long history but have not achieved the wide use or accuracy of non-layered methods. We present a new probabilistic model of optical flow in layers that addresses many of the shortcomings of previous approaches. In particular, we define a probabilistic graphical model that explicitly captures: 1) occlusions and disocclusions; 2) depth ordering of the layers; 3) temporal consistency of the layer segmentation. Additionally the optical flow in each layer is modeled by a combination of a parametric model and a smooth deviation based on an MRF with a robust spatial prior; the resulting model allows roughness in layers. Finally, a key contribution is the formulation of the layers using an image-dependent hidden field prior based on recent models for static scene segmentation. The method achieves state-of-the-art results on the Middlebury benchmark and produces meaningful scene segmentations as well as detected occlusion regions.",
"Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantified rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image."
]
} |
1904.09117 | 2938174633 | We present a self-supervised learning approach for optical flow. Our method distills reliable flow estimations from non-occluded pixels, and uses these predictions as ground truth to learn optical flow for hallucinated occlusions. We further design a simple CNN to utilize temporal information from multiple frames for better flow estimation. These two principles lead to an approach that yields the best performance for unsupervised optical flow learning on the challenging benchmarks including MPI Sintel, KITTI 2012 and 2015. More notably, our self-supervised pre-trained model provides an excellent initialization for supervised fine-tuning. Our fine-tuned models achieve state-of-the-art results on all three datasets. At the time of writing, we achieve EPE=4.26 on the Sintel benchmark, outperforming all submitted methods. | Supervised Learning of Optical Flow. One promising direction is to learn optical flow with CNNs. FlowNet @cite_20 is the first end-to-end optical flow learning framework. It takes two consecutive images as input and outputs a dense flow map. The following work FlowNet 2.0 @cite_8 stacks several basic FlowNet models for iterative refinement, and significantly improves the accuracy. SpyNet @cite_33 proposes to warp images at multiple scales to cope with large displacements, resulting in a compact spatial pyramid network. Recently, PWC-Net @cite_18 and LiteFlowNet @cite_13 propose to warp features extracted from CNNs and achieve state-of-the-art results with lightweight frameworks. However, obtaining high accuracy with these CNNs requires pre-training on multiple synthetic datasets and following specific training schedules @cite_20 @cite_50 . In this paper, we reduce the reliance on pre-training with synthetic data, and propose an effective self-supervised training method with unlabeled data. | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_8",
"@cite_50",
"@cite_13",
"@cite_20"
],
"mid": [
"2963782415",
"2548527721",
"2953296820",
"2259424905",
"2798976292",
"764651262"
],
"abstract": [
"We present a compact but effective CNN model for optical flow, called PWC-Net. PWC-Net has been designed according to simple and well-established principles: pyramidal processing, warping, and the use of a cost volume. Cast in a learnable feature pyramid, PWC-Net uses the current optical flow estimate to warp the CNN features of the second image. It then uses the warped features and features of the first image to construct a cost volume, which is processed by a CNN to estimate the optical flow. PWC-Net is 17 times smaller in size and easier to train than the recent FlowNet2 model. Moreover, it outperforms all published optical flow methods on the MPI Sintel final pass and KITTI 2015 benchmarks, running at about 35 fps on Sintel resolution (1024 A— 436) images. Our models are available on our project website.",
"We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions, these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96 smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (",
"The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a sub-network specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50 . It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.",
"Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.",
"FlowNet2, the state-of-the-art convolutional neural network (CNN) for optical flow estimation, requires over 160M parameters to achieve accurate flow estimation. In this paper we present an alternative network that outperforms FlowNet2 on the challenging Sintel final pass and KITTI benchmarks, while being 30 times smaller in the model size and 1.36 times faster in the running speed. This is made possible by drilling down to architectural details that might have been missed in the current frameworks: (1) We present a more effective flow inference approach at each pyramid level through a lightweight cascaded network. It not only improves flow estimation accuracy through early correction, but also permits seamless incorporation of descriptor matching in our network. (2) We present a novel flow regularization layer to ameliorate the issue of outliers and vague flow boundaries by using a feature-driven local convolution. (3) Our network owns an effective structure for pyramidal feature extraction and embraces feature warping rather than image warping as practiced in FlowNet2. Our code and trained models are available at this https URL .",
"Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps."
]
} |
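The warping operation at the heart of SpyNet, PWC-Net, and LiteFlowNet (described in the related-work passage above) boils down to backward warping with bilinear sampling. Below is a minimal NumPy sketch, not any network's actual sampler; the function name `backward_warp`, the single-channel image, and the `(dx, dy)` channel order of the flow field are assumptions for illustration.

```python
import numpy as np

def backward_warp(img, flow):
    """Warp img (H, W) using a dense flow field of shape (H, W, 2).

    The warped value at (y, x) is sampled from img at (y + dy, x + dx)
    with bilinear interpolation and border clamping, where
    flow[..., 0] = dx and flow[..., 1] = dy (an assumed convention).
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Sampling coordinates, clamped to the image borders.
    sx = np.clip(xs + flow[..., 0], 0, w - 1)
    sy = np.clip(ys + flow[..., 1], 0, h - 1)
    x0 = np.floor(sx).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    y0 = np.floor(sy).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    wx = sx - x0; wy = sy - y0
    # Bilinear interpolation between the four neighbouring pixels.
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

A constant flow of (1, 0) simply shifts the image one pixel to the left (with border clamping); in the learned pipelines the same operation is applied to CNN feature maps rather than raw pixels.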
1904.09117 | 2938174633 | We present a self-supervised learning approach for optical flow. Our method distills reliable flow estimations from non-occluded pixels, and uses these predictions as ground truth to learn optical flow for hallucinated occlusions. We further design a simple CNN to utilize temporal information from multiple frames for better flow estimation. These two principles lead to an approach that yields the best performance for unsupervised optical flow learning on the challenging benchmarks including MPI Sintel, KITTI 2012 and 2015. More notably, our self-supervised pre-trained model provides an excellent initialization for supervised fine-tuning. Our fine-tuned models achieve state-of-the-art results on all three datasets. At the time of writing, we achieve EPE=4.26 on the Sintel benchmark, outperforming all submitted methods. | Unsupervised Learning of Optical Flow. Another interesting line of work is unsupervised optical flow learning. The basic principles are brightness constancy and spatial smoothness @cite_39 @cite_27 . This leads to the most popular photometric loss, which measures the difference between the reference image and the warped image. Unfortunately, this loss does not hold for occluded pixels. Recent studies propose to first obtain an occlusion map and then exclude those occluded pixels when computing the photometric difference @cite_48 @cite_34 . Janai et al. @cite_11 estimate optical flow with a multi-frame formulation and more advanced occlusion reasoning, achieving state-of-the-art unsupervised results. Very recently, DDFlow @cite_23 proposes a data distillation approach to learning the optical flow of occluded pixels, which works particularly well for pixels near image boundaries. Nonetheless, all these unsupervised learning methods only handle specific cases of occluded pixels. They lack the ability to reason about the optical flow of all possible occluded pixels. 
In this work, we address this issue by a superpixel-based occlusion hallucination technique. | {
"cite_N": [
"@cite_48",
"@cite_39",
"@cite_27",
"@cite_23",
"@cite_34",
"@cite_11"
],
"mid": [
"2963891416",
"2507953016",
"2604909019",
"2904340070",
"2770424797",
"2894983388"
],
"abstract": [
"",
"Recently, convolutional networks (convnets) have proven useful for predicting optical flow. Much of this success is predicated on the availability of large datasets that require expensive and involved data acquisition and laborious labeling. To bypass these challenges, we propose an unsupervised approach (i.e., without leveraging groundtruth flow) to train a convnet end-to-end for predicting optical flow between two images. We use a loss function that combines a data term that measures photometric constancy over time with a spatial term that models the expected variation of flow across the image. Together these losses form a proxy measure for losses based on the groundtruth flow. Empirically, we show that a strong convnet baseline trained with the proposed unsupervised approach outperforms the same network trained with supervision on the KITTI dataset.",
"",
"We present DDFlow, a data distillation approach to learning optical flow estimation from unlabeled data. The approach distills reliable predictions from a teacher network, and uses these predictions as annotations to guide a student network to learn optical flow. Unlike existing work relying on handcrafted energy terms to handle occlusion, our approach is data-driven, and learns optical flow for occluded pixels. This enables us to train our model with a much simpler loss function, and achieve a much higher accuracy. We conduct a rigorous evaluation on the challenging Flying Chairs, MPI Sintel, KITTI 2012 and 2015 benchmarks, and show that our approach significantly outperforms all existing unsupervised learning methods, while running at real time.",
"It has been recently shown that a convolutional neural network can learn optical flow estimation with unsupervised learning. However, the performance of the unsupervised methods still has a relatively large gap compared to its supervised counterpart. Occlusion and large motion are some of the major factors that limit the current unsupervised learning of optical flow methods. In this work we introduce a new method which models occlusion explicitly and a new warping way that facilitates the learning of large motion. Our method shows promising results on Flying Chairs, MPI-Sintel and KITTI benchmark datasets. Especially on KITTI dataset where abundant unlabeled samples exist, our unsupervised method outperforms its counterpart trained with supervised learning.",
"Learning optical flow with neural networks is hampered by the need for obtaining training data with associated ground truth. Unsupervised learning is a promising direction, yet the performance of current unsupervised methods is still limited. In particular, the lack of proper occlusion handling in commonly used data terms constitutes a major source of error. While most optical flow methods process pairs of consecutive frames, more advanced occlusion reasoning can be realized when considering multiple frames. In this paper, we propose a framework for unsupervised learning of optical flow and occlusions over multiple frames. More specifically, we exploit the minimal configuration of three frames to strengthen the photometric loss and explicitly reason about occlusions. We demonstrate that our multi-frame, occlusion-sensitive formulation outperforms existing unsupervised two-frame methods and even produces results on par with some fully supervised methods."
]
} |
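The occlusion-aware photometric loss described in the related-work passage above can be sketched in a few lines. This is a minimal illustration, not the exact loss of any cited paper: the L1 difference, the forward-backward consistency heuristic, and the thresholds `alpha1`, `alpha2` are all assumptions chosen for clarity.

```python
import numpy as np

def photometric_loss(ref, warped, occ_mask, eps=1e-8):
    """Average brightness-constancy error over non-occluded pixels only.

    ref, warped: (H, W) float images; occ_mask: (H, W) bool, True where
    the pixel is occluded and hence excluded from the loss.
    """
    valid = ~occ_mask
    diff = np.abs(ref - warped)  # simple L1 photometric difference
    return diff[valid].sum() / (valid.sum() + eps)

def occlusion_from_fb(flow_fw, flow_bw_warped, alpha1=0.01, alpha2=0.5):
    """Forward-backward consistency check (a common heuristic).

    A pixel is marked occluded when the forward flow and the backward
    flow warped into the first frame do not (approximately) cancel out.
    Inputs have shape (H, W, 2); thresholds are illustrative.
    """
    sq_sum = (flow_fw ** 2).sum(-1) + (flow_bw_warped ** 2).sum(-1)
    mismatch = ((flow_fw + flow_bw_warped) ** 2).sum(-1)
    return mismatch > alpha1 * sq_sum + alpha2
```

In a training loop, the mask produced by `occlusion_from_fb` would gate the photometric term, which is exactly the failure mode DDFlow-style distillation tries to compensate for on the excluded pixels.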
1904.09117 | 2938174633 | We present a self-supervised learning approach for optical flow. Our method distills reliable flow estimations from non-occluded pixels, and uses these predictions as ground truth to learn optical flow for hallucinated occlusions. We further design a simple CNN to utilize temporal information from multiple frames for better flow estimation. These two principles lead to an approach that yields the best performance for unsupervised optical flow learning on the challenging benchmarks including MPI Sintel, KITTI 2012 and 2015. More notably, our self-supervised pre-trained model provides an excellent initialization for supervised fine-tuning. Our fine-tuned models achieve state-of-the-art results on all three datasets. At the time of writing, we achieve EPE=4.26 on the Sintel benchmark, outperforming all submitted methods. | Self-Supervised Learning. Our work is closely related to the family of self-supervised learning methods, where the supervision signal is purely generated from the data itself. It is widely used for learning feature representations from unlabeled data @cite_6 . A pretext task is usually employed, such as image inpainting @cite_42 , image colorization @cite_25 , or solving Jigsaw puzzles @cite_16 . Pathak et al. @cite_44 propose to explore low-level motion-based cues to learn feature representations without manual supervision. Doersch et al. @cite_5 combine multiple self-supervised learning tasks to train a single visual representation. In this paper, we make use of the domain knowledge of optical flow, and take reliable predictions of non-occluded pixels as the self-supervision signal to guide our optical flow learning of occluded pixels. | {
"cite_N": [
"@cite_42",
"@cite_6",
"@cite_44",
"@cite_5",
"@cite_16",
"@cite_25"
],
"mid": [
"2342877626",
"2911779594",
"2575671312",
"",
"2321533354",
"2949891561"
],
"abstract": [
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.",
"Large-scale labeled data are generally required to train deep neural networks in order to obtain better performance in visual feature learning from images or videos for computer vision applications. To avoid extensive cost of collecting and annotating large-scale datasets, as a subset of unsupervised learning methods, self-supervised learning methods are proposed to learn general image and video features from large-scale unlabeled data without using any human-annotated labels. This paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos. First, the motivation, general pipeline, and terminologies of this field are described. Then the common deep neural network architectures that used for self-supervised learning are summarized. Next, the main components and evaluation metrics of self-supervised learning methods are reviewed followed by the commonly used image and video datasets and the existing self-supervised visual feature learning methods. Finally, quantitative performance comparisons of the reviewed methods on benchmark datasets are summarized and discussed for both image and video feature learning. At last, this paper is concluded and lists a set of promising future directions for self-supervised visual feature learning.",
"This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce.",
"",
"We propose a novel unsupervised learning approach to build features suitable for object detection and classification. The features are pre-trained on a large dataset without human annotation and later transferred via fine-tuning on a different, smaller and labeled dataset. The pre-training consists of solving jigsaw puzzles of natural images. To facilitate the transfer of features to other tasks, we introduce the context-free network (CFN), a siamese-ennead convolutional neural network. The features correspond to the columns of the CFN and they process image tiles independently (i.e., free of context). The later layers of the CFN then use the features to identify their geometric arrangement. Our experimental evaluations show that the learned features capture semantically relevant content. We pre-train the CFN on the training set of the ILSVRC2012 dataset and transfer the features on the combined training and validation set of Pascal VOC 2007 for object detection (via fast RCNN) and classification. These features outperform all current unsupervised features with (51.8 , ) for detection and (68.6 , ) for classification, and reduce the gap with supervised learning ( (56.5 , ) and (78.2 , ) respectively).",
"We investigate and improve self-supervision as a drop-in replacement for ImageNet pretraining, focusing on automatic colorization as the proxy task. Self-supervised training has been shown to be more promising for utilizing unlabeled data than other, traditional unsupervised learning methods. We build on this success and evaluate the ability of our self-supervised network in several contexts. On VOC segmentation and classification tasks, we present results that are state-of-the-art among methods not using ImageNet labels for pretraining representations. Moreover, we present the first in-depth analysis of self-supervision via colorization, concluding that formulation of the loss, training details and network architecture play important roles in its effectiveness. This investigation is further expanded by revisiting the ImageNet pretraining paradigm, asking questions such as: How much training data is needed? How many labels are needed? How much do features change when fine-tuned? We relate these questions back to self-supervision by showing that colorization provides a similarly powerful supervisory signal as various flavors of ImageNet pretraining."
]
} |
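The pretext-task idea behind the related-work passage above (here in the flavour of the Jigsaw-puzzle task of @cite_16) is that a label can be generated from the data itself, e.g., by shuffling image tiles; the permutation index then serves as the self-supervision signal. A minimal sketch, assuming a square single-channel image whose side is divisible by the grid size:

```python
import numpy as np

def jigsaw_example(img, perm, grid=2):
    """Create a jigsaw-puzzle pretext sample.

    Splits img (H, W) into grid x grid tiles in row-major order,
    shuffles them according to perm, and returns (shuffled_tiles, perm).
    The permutation is the self-generated classification label.
    """
    h, w = img.shape
    th, tw = h // grid, w // grid
    tiles = [img[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
             for r in range(grid) for c in range(grid)]
    return [tiles[i] for i in perm], perm
```

A network trained to predict `perm` from the shuffled tiles must learn spatial semantics without any human annotation, which is the same data-only supervision principle the flow method above applies to non-occluded pixels.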
1904.08901 | 2938983867 | It is a common narrative that blockchains are immutable and so it is technically impossible to erase data stored on them. For legal and ethical reasons, however, individuals and organizations might be compelled to erase locally stored data, be it encoded on a blockchain or not. The common assumption for blockchain networks like Bitcoin is that forcing nodes to erase data contained on the blockchain is equal to permanently restricting them from participating in the system in a full-node role. Challenging this belief, in this paper, we propose and demonstrate a pragmatic approach towards functionality-preserving local erasure (FPLE). FPLE enables full nodes to erase infringing or undesirable data while continuing to store and validate most of the blockchain. We describe a general FPLE approach for UTXO-based (i.e., Bitcoin-like) cryptocurrencies and present a lightweight proof-of-concept tool for safely erasing transaction data from the local storage of Bitcoin Core nodes. Erasing nodes continue to operate in tune with the network even when erased transaction outputs become relevant for validating subsequent blocks. Using only our basic proof-of-concept implementation, we are already able to safely comply with a significantly larger range of erasure requests than, to the best of our knowledge, any other full node operator so far. | It is no secret that arbitrary data can be included on blockchains---generic non-financial data was included as early as in Bitcoin's genesis block. Non-financial data storage on blockchains enables innovative new services such as name services https: www.namecoin.info , timestamping https: opentimestamps.org , pseudonymous identities @cite_13 and non-equivocation logging @cite_30 (to name just a few examples). 
Recent results, however, demonstrate that an uncensorable data storage service like Bitcoin can also be abused, and that some of the data stored on it might not be universally welcome @cite_22 . A range of solutions exist for alleviating this conflict. They can be grouped into three categories: avoiding the inclusion of unwanted data, allowing the modification (and erasure) of past blockchain state, and local pruning. FPLE can be seen as an improvement to local pruning. Most importantly, we aim to solve the data erasure challenge node-locally instead of on a global level. | {
"cite_N": [
"@cite_30",
"@cite_13",
"@cite_22"
],
"mid": [
"2671457145",
"2027274379",
"2790401422"
],
"abstract": [
"We present Catena, an efficiently-verifiable Bitcoinwitnessing scheme. Catena enables any number of thin clients, such as mobile phones, to efficiently agree on a log of application-specific statements managed by an adversarial server. Catenaimplements a log as an OP_RETURN transaction chain andprevents forks in the log by leveraging Bitcoin's security againstdouble spends. Specifically, if a log server wants to equivocate ithas to double spend a Bitcoin transaction output. Thus, Catenalogs are as hard to fork as the Bitcoin blockchain: an adversarywithout a large fraction of the network's computational powercannot fork Bitcoin and thus cannot fork a Catena log either. However, different from previous Bitcoin-based work, Catenadecreases the bandwidth requirements of log auditors from 90GB to only tens of megabytes. More precisely, our clients onlyneed to download all Bitcoin block headers (currently less than35 MB) and a small, 600-byte proof for each statement in a block. We implement Catena in Java using the bitcoinj library and use itto extend CONIKS, a recent key transparency scheme, to witnessits public-key directory in the Bitcoin blockchain where it can beefficiently verified by auditors. We show that Catena can securemany systems today, such as public-key directories, Tor directoryservers and software transparency schemes.",
"The issuing of pseudonyms is an established approach for protecting the privacy of users while limiting access and preventing sybil attacks. To prevent pseudonym deanonymization through continuous observation and correlation, frequent and unlinkable pseudonym changes must be enabled. Existing approaches for realizing sybil-resistant pseudonymization and pseudonym change (PPC) are either inherently dependent on trusted third parties (TTPs) or involve significant computation overhead at end-user devices. In this paper, we investigate a novel, TTP-independent approach towards sybil-resistant PPC. Our proposal is based on the use of cryptocurrency block chains as general-purpose, append-only bulletin boards. We present a general approach as well as BitNym, a specific design based on the unmodified Bitcoin network. We discuss and propose TTP-independent mechanisms for realizing sybil-free initial access control, pseudonym validation and pseudonym mixing. Evaluation results demonstrate the practical feasibility of our approach and show that anonymity sets encompassing nearly the complete user population are easily achievable.",
"Blockchains primarily enable credible accounting of digital events, e.g., money transfers in cryptocurrencies. However, beyond this original purpose, blockchains also irrevocably record arbitrary data, ranging from short messages to pictures. This does not come without risk for users as each participant has to locally replicate the complete blockchain, particularly including potentially harmful content. We provide the first systematic analysis of the benefits and threats of arbitrary blockchain content. Our analysis shows that certain content, e.g., illegal pornography, can render the mere possession of a blockchain illegal. Based on these insights, we conduct a thorough quantitative and qualitative analysis of unintended content on Bitcoin’s blockchain. Although most data originates from benign extensions to Bitcoin’s protocol, our analysis reveals more than 1600 files on the blockchain, over 99 of which are texts or images. Among these files there is clearly objectionable content such as links to child pornography, which is distributed to all Bitcoin participants. With our analysis, we thus highlight the importance for future blockchain designs to address the possibility of unintended data insertion and protect blockchain users accordingly."
]
} |
1904.08901 | 2938983867 | It is a common narrative that blockchains are immutable and so it is technically impossible to erase data stored on them. For legal and ethical reasons, however, individuals and organizations might be compelled to erase locally stored data, be it encoded on a blockchain or not. The common assumption for blockchain networks like Bitcoin is that forcing nodes to erase data contained on the blockchain is equal to permanently restricting them from participating in the system in a full-node role. Challenging this belief, in this paper, we propose and demonstrate a pragmatic approach towards functionality-preserving local erasure (FPLE). FPLE enables full nodes to erase infringing or undesirable data while continuing to store and validate most of the blockchain. We describe a general FPLE approach for UTXO-based (i.e., Bitcoin-like) cryptocurrencies and present a lightweight proof-of-concept tool for safely erasing transaction data from the local storage of Bitcoin Core nodes. Erasing nodes continue to operate in tune with the network even when erased transaction outputs become relevant for validating subsequent blocks. Using only our basic proof-of-concept implementation, we are already able to safely comply with a significantly larger range of erasure requests than, to the best of our knowledge, any other full node operator so far. | @cite_9 , discuss various approaches for preventing the insertion of arbitrary, potentially unwanted data onto cryptocurrency blockchains. Their proposals include content detectors, which filter transactions based on heuristics and knowledge about commonly used data insertion methods, as well as protocol modifications that would greatly increase the costs of including arbitrary data. Approaches along these lines have also surfaced in the non-academic cryptocurrency community See, e. ,g.: http: comments.gmane.org gmane.comp.bitcoin.devel 1996 . 
Approaches for avoiding the insertion of unwanted data depend on global adoption for effective filtering and, in some cases, also on protocol changes when applied to existing networks. In contrast, FPLE requires only a node-local decision and is in this way both more practical and able to incorporate a wider range of individual preferences and constraints. Lastly, as can be seen in related application domains such as malware detection or digital rights protection via upload filtering, content-based filtering is never completely circumvention-proof. Once something "slips through", an erasure possibility again becomes necessary. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2803736556"
],
"abstract": [
"Since the introduction of Bitcoin in 2008, blockchain systems have seen an enormous increase in adoption. By providing a persistent, distributed, and append-only ledger, blockchains enable numerous applications such as distributed consensus, robustness against equivocation, and smart contracts. However, recent studies show that blockchain systems such as Bitcoin can be (mis) used to store arbitrary content. This has already been used to store arguably objectionable content on Bitcoin's blockchain. Already single instances of clearly objectionable or even illegal content can put the whole system at risk by making its node operators culpable. To overcome this imminent risk, we survey and discuss the design space of countermeasures against the insertion of such objectionable content. Our analysis shows a wide spectrum of potential countermeasures, which are often combinable for increased efficiency. First, we investigate special-purpose content detectors as an ad hoc mitigation. As they turn out to be easily evadable, we also investigate content-agnostic countermeasures. We find that mandatory minimum fees as well as mitigation of transaction manipulability via identifier commitments significantly raise the bar for inserting harmful content into a blockchain."
]
} |
1904.08901 | 2938983867 | It is a common narrative that blockchains are immutable and so it is technically impossible to erase data stored on them. For legal and ethical reasons, however, individuals and organizations might be compelled to erase locally stored data, be it encoded on a blockchain or not. The common assumption for blockchain networks like Bitcoin is that forcing nodes to erase data contained on the blockchain is equal to permanently restricting them from participating in the system in a full-node role. Challenging this belief, in this paper, we propose and demonstrate a pragmatic approach towards functionality-preserving local erasure (FPLE). FPLE enables full nodes to erase infringing or undesirable data while continuing to store and validate most of the blockchain. We describe a general FPLE approach for UTXO-based (i.e., Bitcoin-like) cryptocurrencies and present a lightweight proof-of-concept tool for safely erasing transaction data from the local storage of Bitcoin Core nodes. Erasing nodes continue to operate in tune with the network even when erased transaction outputs become relevant for validating subsequent blocks. Using only our basic proof-of-concept implementation, we are already able to safely comply with a significantly larger range of erasure requests than, to the best of our knowledge, any other full node operator so far. | When considering data protection as a reason for erasure (cf. sub:data_protection ), it is also noteworthy that a large body of work deals with the challenge of providing anonymity to blockchain users (see, e.g., @cite_24 for a recent survey). However, most transactions in popular systems like Bitcoin do not use any additional means of increasing anonymity @cite_23 and are reidentifiable using well-known techniques @cite_24 . Even when strong privacy guarantees can be achieved through technical means, this provides no solution for cases where identifiable data is posted to the blockchain on purpose, e.g., as part of doxing.
| {
"cite_N": [
"@cite_24",
"@cite_23"
],
"mid": [
"2624307925",
"2731497859"
],
"abstract": [
"Bitcoin is a popular cryptocurrency that records all transactions in a distributed append-only public ledger called blockchain . The security of Bitcoin heavily relies on the incentive-compatible proof-of-work (PoW) based distributed consensus protocol, which is run by the network nodes called miners . In exchange for the incentive, the miners are expected to maintain the blockchain honestly. Since its launch in 2009, Bitcoin economy has grown at an enormous rate, and it is now worth about 150 billions of dollars. This exponential growth in the market value of bitcoins motivate adversaries to exploit weaknesses for profit, and researchers to discover new vulnerabilities in the system, propose countermeasures, and predict upcoming trends. In this paper, we present a systematic survey that covers the security and privacy aspects of Bitcoin. We start by giving an overview of the Bitcoin system and its major components along with their functionality and interactions within the system. We review the existing vulnerabilities in Bitcoin and its major underlying technologies such as blockchain and PoW-based consensus protocol. These vulnerabilities lead to the execution of various security threats to the standard functionality of Bitcoin. We then investigate the feasibility and robustness of the state-of-the-art security solutions. Additionally, we discuss the current anonymity considerations in Bitcoin and the privacy-related threats to Bitcoin users along with the analysis of the existing privacy-preserving solutions. Finally, we summarize the critical open challenges, and we suggest directions for future research towards provisioning stringent security and privacy solutions for Bitcoin.",
"This paper contributes a systematic account of transaction anonymization techniques that do not require trust in a single entity and support the existing cryptographic currency Bitcoin. It surveys and compares four known techniques, proposes tailored metrics to identify the use of each technique (but not necessarily its users), and presents longitudinal measurements indicating adoption trends and teething troubles. There is a tradeoff between the choice of users' preferred protection mechanisms and the risk that pertaining transactions can be singled out, which hurts privacy due to smaller anonymity sets unless a critical mass adopts the mechanism."
]
} |
1904.08901 | 2938983867 | It is a common narrative that blockchains are immutable and so it is technically impossible to erase data stored on them. For legal and ethical reasons, however, individuals and organizations might be compelled to erase locally stored data, be it encoded on a blockchain or not. The common assumption for blockchain networks like Bitcoin is that forcing nodes to erase data contained on the blockchain is equal to permanently restricting them from participating in the system in a full-node role. Challenging this belief, in this paper, we propose and demonstrate a pragmatic approach towards functionality-preserving local erasure (FPLE). FPLE enables full nodes to erase infringing or undesirable data while continuing to store and validate most of the blockchain. We describe a general FPLE approach for UTXO-based (i.e., Bitcoin-like) cryptocurrencies and present a lightweight proof-of-concept tool for safely erasing transaction data from the local storage of Bitcoin Core nodes. Erasing nodes continue to operate in tune with the network even when erased transaction outputs become relevant for validating subsequent blocks. Using only our basic proof-of-concept implementation, we are already able to safely comply with a significantly larger range of erasure requests than, to the best of our knowledge, any other full node operator so far. | Pruning is a widely used technique for locally erasing older parts of a blockchain, mainly with the goal of reducing storage requirements. While related, our local erasure approach differs in its goal---we erase individual data chunks instead of the whole history before a certain point---and provides solutions for outstanding challenges such as the pruning of data potentially relevant for validating future blocks. The latter challenge is highly relevant in practice as problematic data is often encoded in unspent but potentially spendable transaction outputs @cite_22 . | {
"cite_N": [
"@cite_22"
],
"mid": [
"2790401422"
],
"abstract": [
"Blockchains primarily enable credible accounting of digital events, e.g., money transfers in cryptocurrencies. However, beyond this original purpose, blockchains also irrevocably record arbitrary data, ranging from short messages to pictures. This does not come without risk for users as each participant has to locally replicate the complete blockchain, particularly including potentially harmful content. We provide the first systematic analysis of the benefits and threats of arbitrary blockchain content. Our analysis shows that certain content, e.g., illegal pornography, can render the mere possession of a blockchain illegal. Based on these insights, we conduct a thorough quantitative and qualitative analysis of unintended content on Bitcoin’s blockchain. Although most data originates from benign extensions to Bitcoin’s protocol, our analysis reveals more than 1600 files on the blockchain, over 99 of which are texts or images. Among these files there is clearly objectionable content such as links to child pornography, which is distributed to all Bitcoin participants. With our analysis, we thus highlight the importance for future blockchain designs to address the possibility of unintended data insertion and protect blockchain users accordingly."
]
} |
1904.08889 | 2938428612 | We present Kernel Point Convolution (KPConv), a new design of point convolution, i.e. that operates on point clouds without any intermediate representation. The convolution weights of KPConv are located in Euclidean space by kernel points, and applied to the input points close to them. Its capacity to use any number of kernel points gives KPConv more flexibility than fixed grid convolutions. Furthermore, these locations are continuous in space and can be learned by the network. Therefore, KPConv can be extended to deformable convolutions that learn to adapt kernel points to local geometry. Thanks to a regular subsampling strategy, KPConv is also efficient and robust to varying densities. Whether they use deformable KPConv for complex tasks, or rigid KPconv for simpler tasks, our networks outperform state-of-the-art classification and segmentation approaches on several datasets. We also offer ablation studies and visualizations to provide understanding of what has been learned by KPConv and to validate the descriptive power of deformable KPConv. | Pointwise CNN @cite_21 ties its kernel weights to voxel bins, and thus lacks flexibility in the same way grid networks do. Furthermore, its normalization strategy burdens the network with unnecessary computations, while KPConv's subsampling strategy alleviates both varying densities and computational cost. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2795374598"
],
"abstract": [
"Deep learning with 3D data such as reconstructed point clouds and CAD models has received great research interests recently. However, the capability of using point clouds with convolutional neural network has been so far not fully explored. In this paper, we present a convolutional neural network for semantic segmentation and object recognition with 3D point clouds. At the core of our network is point-wise convolution, a new convolution operator that can be applied at each point of a point cloud. Our fully convolutional network design, while being surprisingly simple to implement, can yield competitive accuracy in both semantic segmentation and object recognition task."
]
} |
1904.08889 | 2938428612 | We present Kernel Point Convolution (KPConv), a new design of point convolution, i.e. that operates on point clouds without any intermediate representation. The convolution weights of KPConv are located in Euclidean space by kernel points, and applied to the input points close to them. Its capacity to use any number of kernel points gives KPConv more flexibility than fixed grid convolutions. Furthermore, these locations are continuous in space and can be learned by the network. Therefore, KPConv can be extended to deformable convolutions that learn to adapt kernel points to local geometry. Thanks to a regular subsampling strategy, KPConv is also efficient and robust to varying densities. Whether they use deformable KPConv for complex tasks, or rigid KPconv for simpler tasks, our networks outperform state-of-the-art classification and segmentation approaches on several datasets. We also offer ablation studies and visualizations to provide understanding of what has been learned by KPConv and to validate the descriptive power of deformable KPConv. | SpiderCNN @cite_17 defines its kernel as a family of polynomial functions applied with a different weight for each neighbor. The weight applied to a neighbor depends on the neighbor's distance-wise order, making the filters spatially inconsistent. By contrast, KPConv weights are located in space and its result is invariant to point order. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2963158438"
],
"abstract": [
"Deep neural networks have enjoyed remarkable success for various vision tasks, however it remains challenging to apply CNNs to domains lacking a regular underlying structures such as 3D point clouds. Towards this we propose a novel convolutional architecture, termed SpiderCNN, to efficiently extract geometric features from point clouds. SpiderCNN is comprised of units called SpiderConv, which extend convolutional operations from regular grids to irregular point sets that can be embedded in ( R ^n ), by parametrizing a family of convolutional filters. We design the filter as a product of a simple step function that captures local geodesic information and a Taylor polynomial that ensures the expressiveness. SpiderCNN inherits the multi-scale hierarchical architecture from classical CNNs, which allows it to extract semantic deep features. Experiments on ModelNet40 demonstrate that SpiderCNN achieves state-of-the-art accuracy (92.4 ) on standard benchmarks, and shows competitive performance on segmentation task."
]
} |
1904.08889 | 2938428612 | We present Kernel Point Convolution (KPConv), a new design of point convolution, i.e. that operates on point clouds without any intermediate representation. The convolution weights of KPConv are located in Euclidean space by kernel points, and applied to the input points close to them. Its capacity to use any number of kernel points gives KPConv more flexibility than fixed grid convolutions. Furthermore, these locations are continuous in space and can be learned by the network. Therefore, KPConv can be extended to deformable convolutions that learn to adapt kernel points to local geometry. Thanks to a regular subsampling strategy, KPConv is also efficient and robust to varying densities. Whether they use deformable KPConv for complex tasks, or rigid KPconv for simpler tasks, our networks outperform state-of-the-art classification and segmentation approaches on several datasets. We also offer ablation studies and visualizations to provide understanding of what has been learned by KPConv and to validate the descriptive power of deformable KPConv. | Flex-convolution @cite_64 uses linear functions to model its kernel, which could limit its representative power. It also uses KNN, which is not robust to varying densities as discussed above. | {
"cite_N": [
"@cite_64"
],
"mid": [
"2558748708"
],
"abstract": [
"Many scientific fields study data with an underlying structure that is non-Euclidean. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural-language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure and in cases where the invariances of these structures are built into networks used to model them."
]
} |
1904.08889 | 2938428612 | We present Kernel Point Convolution (KPConv), a new design of point convolution, i.e. that operates on point clouds without any intermediate representation. The convolution weights of KPConv are located in Euclidean space by kernel points, and applied to the input points close to them. Its capacity to use any number of kernel points gives KPConv more flexibility than fixed grid convolutions. Furthermore, these locations are continuous in space and can be learned by the network. Therefore, KPConv can be extended to deformable convolutions that learn to adapt kernel points to local geometry. Thanks to a regular subsampling strategy, KPConv is also efficient and robust to varying densities. Whether they use deformable KPConv for complex tasks, or rigid KPconv for simpler tasks, our networks outperform state-of-the-art classification and segmentation approaches on several datasets. We also offer ablation studies and visualizations to provide understanding of what has been learned by KPConv and to validate the descriptive power of deformable KPConv. | The design of PCNN @cite_20 is the closest to KPConv. Its definition also uses points to carry kernel weights, along with a correlation function. However, this design is not scalable because it does not use any form of neighborhood, making the convolution computations quadratic in the number of points. In addition, it uses a Gaussian correlation, whereas KPConv uses a simpler linear correlation, which helps gradient backpropagation when learning deformations @cite_33 . | {
"cite_N": [
"@cite_33",
"@cite_20"
],
"mid": [
"2601564443",
"2794561444"
],
"abstract": [
"Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in their building modules. In this work, we introduce two new modules to enhance the transformation modeling capability of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from the target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the performance of our approach. For the first time, we show that learning dense spatial transformation in deep CNNs is effective for sophisticated vision tasks such as object detection and semantic segmentation. The code is released at https: github.com msracver Deformable-ConvNets.",
"This paper presents Point Convolutional Neural Networks (PCNN): a novel framework for applying convolutional neural networks to point clouds. The framework consists of two operators: extension and restriction, mapping point cloud functions to volumetric functions and vise-versa. A point cloud convolution is defined by pull-back of the Euclidean volumetric convolution via an extension-restriction mechanism. The point cloud convolution is computationally efficient, invariant to the order of points in the point cloud, robust to different samplings and varying densities, and translation invariant, that is the same convolution kernel is used at all points. PCNN generalizes image CNNs and allows readily adapting their architectures to the point cloud setting. Evaluation of PCNN on three central point cloud learning benchmarks convincingly outperform competing point cloud learning methods, and the vast majority of methods working with more informative shape representations such as surfaces and or normals."
]
} |
1904.08926 | 2937115320 | Abstract With the increasing use of the Internet and mobile devices, social networks are becoming the most used media to communicate citizens' ideas and thoughts. This information is very useful to identify communities with common ideas based on what they publish in the network. This paper presents a method to automatically detect city communities based on machine learning techniques applied to a set of tweets from Bogota’s citizens. An analysis was performed in a collection of 2,634,176 tweets gathered from Twitter in a period of six months. Results show that the proposed method is an interesting tool to characterize a city population based on a machine learning methods and text analytics. | Other, less common methods have also been used to study texts from Twitter, such as @cite_11 , in which Formal Concept Analysis (FCA) @cite_13 @cite_31 , a mathematical application of lattices and ordered sets to the process of concept formation, was used as an alternative approach to topic detection. The authors use FCA because it addresses several problems that traditional methods suffer from, such as the unknown number of topics and the difficulty of adapting to new topics. | {
"cite_N": [
"@cite_31",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"1964090097",
"2520925521"
],
"abstract": [
"",
"Abstract “Concept Lattice” is the central notion of “Formal Concept Analysis”, a new area of research which is based on a set-theoretical model for concepts and conceptual hierarchies. This model yields not only a new approach to data analysis but also methods for formal representation of conceptual knowledge. These methods are outlined on three levels. First, basics on concept lattices are explained starting from simple data contexts which consist of a binary relation between objects and attributes indicating which object has which attribute. On the second level, conceptual relationships are discussed for data matrices which assign attribute values to each of the given objects. Finally, a mathematical model for conceptual knowledge systems is described. This model allows us to study mathematically the representation, inference, acquisition, and communication of conceptual knowledge.",
"We propose a novel approach based on Formal Concept Analysis for Topic Detection.Our proposal overcomes traditional problems of the clustering and classification techniques.We analyse the parameters involved in the process in a Twitter-based framework.We propose a topic selection methodology based on the stability concept.We overcome the state-of-the-art results for the task. The Topic Detection Task in Twitter represents an indispensable step in the analysis of text corpora and their later application in Online Reputation Management. Classification, clustering and probabilistic techniques have been traditionally applied, but they have some well-known drawbacks such as the need to fix the number of topics to be detected or the problem of how to integrate the prior knowledge of topics with the detection of new ones. This motivates the current work, where we present a novel approach based on Formal Concept Analysis (FCA), a fully unsupervised methodology to group similar content together in thematically-based topics (i.e., the FCA formal concepts) and to organize them in the form of a concept lattice. Formal concepts are conceptual representations based on the relationships between tweet terms and the tweets that have given rise to them. It allows, in contrast to other approaches in the literature, their clear interpretability. In addition, the concept lattice represents a formalism that describes the data, explores correlations, similarities, anomalies and inconsistencies better than other representations such as clustering models or graph-based representations. Our rationale is that these theoretical advantages may improve the Topic Detection process, making them able to tackle the problems related to the task. To prove this point, our FCA-based proposal is evaluated in the context of a real-life Topic Detection task provided by the Replab 2013 CLEF Campaign. 
To demonstrate the efficiency of the proposal, we have carried out several experiments focused on testing: (a) the impact of terminology selection as an input to our algorithm, (b) the impact of concept selection as the outcome of our algorithm, and; (c) the efficiency of the proposal to detect new and previously unseen topics (i.e., topic adaptation). An extensive analysis of the results has been carried out, proving the suitability of our proposal to integrate previous knowledge of prior topics without losing the ability to detect novel and unseen topics as well as improving the best Replab 2013 results."
]
} |
1904.08926 | 2937115320 | Abstract With the increasing use of the Internet and mobile devices, social networks are becoming the most used media to communicate citizens' ideas and thoughts. This information is very useful to identify communities with common ideas based on what they publish in the network. This paper presents a method to automatically detect city communities based on machine learning techniques applied to a set of tweets from Bogota’s citizens. An analysis was performed in a collection of 2,634,176 tweets gathered from Twitter in a period of six months. Results show that the proposed method is an interesting tool to characterize a city population based on a machine learning methods and text analytics. | It is worth noting that the majority of work has been done using English text corpora, with valuable results @cite_15 . However, Spanish is a significantly more inflected language than English, and this difference could pose problems. For example, supervised machine learning methods for the topic classification of annotated Spanish tweets modeled with @math -grams have been shown to be insufficient @cite_28 . More successful attempts to address the various research branches of topic detection and opinion mining in Spanish have since been made. For instance, in the field of opinion mining, studies like @cite_33 proposed a lexicon-based model that adapts to specific domains in Spanish for polarity classification of film reviews. Also, polarity classification in Spanish tweets has been addressed in @cite_24 , where hybrid systems that bring together knowledge from lexical, syntactic and semantic structures in the Spanish language, as well as machine learning techniques used with the bag-of-words representation, have shown improvements over the bag-of-words approach alone.
In addition, the clever creation of a corpus called MeSiento (Spanish for "I feel") @cite_26 enabled a robust unsupervised method for polarity classification of Spanish tweets that reached accuracy levels close to those obtained with supervised algorithms. | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_28",
"@cite_24",
"@cite_15"
],
"mid": [
"99223961",
"2126908664",
"2100676387",
"2048747536",
"2492922441"
],
"abstract": [
"This work presents a novel method for the generation of a knowledge base oriented to Sentiment Analysis from the continuous stream of published micro-blogs in social media services like Twitter. The method is simple in its approach and has shown to be effective compared to other knowledge based methods for Polarity Classification. Due to independence from language, the method has been tested on different Spanish corpora, with a minimal effort in the lexical resources involved. Although for two of the three studied corpora the obtained results did not improve those officially obtained on the same corpora, it should be noted that this is an unsupervised approach and the accuracy levels achieved were close to those levels obtained with well-known supervised algorithms.",
"A lexicon-based domain adaptation method is proposed.Several domain polar lexicons were compiled following a corpus-based approach.The new resources are assessed over a Spanish corpus.The promising results encourage us to follow improving this domain adaptation method. One of the problems of opinion mining is the domain adaptation of the sentiment classifiers. There are several approaches to tackling this problem. One of these is the integration of a list of opinion bearing words for the specific domain. This paper presents the generation of several resources for domain adaptation to polarity detection. On the other hand, the lack of resources in languages different from English has orientated our work towards developing sentiment lexicons for polarity classifiers in Spanish. The results show the validity of the new sentiment lexicons, which can be used as part of a polarity classifier.",
"A signicant amount of eort is been invested in constructing eective solutions for sentiment analysis and topic detection, but mostly for English texts. Using a corpus of Spanish tweets, we present a comparative analysis of dierent approaches and classication",
"We describe a system that classifies the polarity of Spanish tweets. We adopt a hybrid approach, which combines machine learning and linguistic knowledge acquired by means of NLP. We use part-of-speech tags, syntactic dependencies and semantic knowledge as features for a supervised classifier. Lexical particularities of the language used in Twitter are taken into account in a pre-processing step. Experimental results improve over those of pure machine learning approaches and confirm the practical utility of the proposal.",
"With the advent of Internet, people actively express their opinions about products, services, events, political parties, etc., in social media, blogs, and website comments. The amount of research work on sentiment analysis is growing explosively. However, the majority of research efforts are devoted to English-language data, while a great share of information is available in other languages. We present a state-of-the-art review on multilingual sentiment analysis. More importantly, we compare our own implementation of existing approaches on common data. Precision observed in our experiments is typically lower than the one reported by the original authors, which we attribute to the lack of detail in the original presentation of those approaches. Thus, we compare the existing works by what they really offer to the reader, including whether they allow for accurate implementation and for reliable reproduction of the reported results."
]
} |
1904.08926 | 2937115320 | Abstract With the increasing use of the Internet and mobile devices, social networks are becoming the most used media to communicate citizens' ideas and thoughts. This information is very useful to identify communities with common ideas based on what they publish in the network. This paper presents a method to automatically detect city communities based on machine learning techniques applied to a set of tweets from Bogota’s citizens. An analysis was performed in a collection of 2,634,176 tweets gathered from Twitter in a period of six months. Results show that the proposed method is an interesting tool to characterize a city population based on a machine learning methods and text analytics. | Nonetheless, these methods applied to sentiment analysis tasks depend heavily on annotated dictionaries, which do not contain Bogotá's jargon. In fact, only a few attempts have been made to build small topic-specific dictionaries, such as @cite_43 , where the political sentiment towards Bogotá mayoral candidates for 2015 was analyzed using Twitter and a political sentiment dictionary defined in the Colombian political context. A second study briefly examined the sentiment of tweets from Bogotá with words related to health symptoms @cite_19 . A third study examined the results of the 2015 Colombian regional elections and compared them with the political ideology and Twitter activity of the candidates @cite_16 . | {
"cite_N": [
"@cite_19",
"@cite_43",
"@cite_16"
],
"mid": [
"2395021364",
"",
"2483704788"
],
"abstract": [
"With the amount of data available on social networks, new methodologies for the analysis of information are needed. Some methods allow the users to combine different types of data in order to extract relevant information. In this context, the present paper shows the application of a model via a platform in order to group together information generated by Twitter users, thus facilitating the detection of trends and data related to particular symptoms. In order to implement the model, an analyzing tool that uses the Levenshtein distance was developed, to determine exactly what is required to convert a text into the following texts: ’gripa’-”flu”, ”dolor de cabeza”-”headache”, ’dolor de estomago’”stomachache”, ’fiebre’-”fever” and ’tos’”cough” in the area of Bogota. Among the information collected, identifiable patterns emerged for each one of the texts.",
"",
"Abstract Propagation of political ideologies in social networks has shown a substantial impact on voting behavior. Both the contents of the messages (the ideology) and the politicians' influence on their online audiences (their followers) have been associated with such an impact. In this study we evaluate which of these factors exerted a major role in deciding electoral results of the 2015 Colombian regional elections by evaluating the linguistic similarity of political ideologies and their influence on the Twitter sphere. The electoral results proved to be strongly associated with tweets and retweets and not with the linguistic content of their ideologies or politicians' followers in Twitter. Finally, suggestions for new ways to analyze electoral processes are discussed."
]
} |
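The symptom-matching study cited in the row above (@cite_19) relies on the Levenshtein distance to match noisy tweet tokens against target terms such as 'gripa' (flu). A minimal sketch of that distance in plain Python follows; the symptom list, the example misspelling, and the matching threshold are illustrative assumptions, not values taken from the paper:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    or substitutions needed to turn string a into string b."""
    # prev[j] holds the distance between the processed prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Fuzzy-match a misspelled tweet token against a (hypothetical) symptom list
symptoms = ["gripa", "fiebre", "tos"]
token = "griipa"  # common misspelling
matches = [s for s in symptoms if levenshtein(token, s) <= 2]
```

A distance threshold of 1–2 is a typical choice for catching single-typo variants without pulling in unrelated words.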
1904.08926 | 2937115320 | Abstract With the increasing use of the Internet and mobile devices, social networks are becoming the most used media to communicate citizens' ideas and thoughts. This information is very useful to identify communities with common ideas based on what they publish in the network. This paper presents a method to automatically detect city communities based on machine learning techniques applied to a set of tweets from Bogota’s citizens. An analysis was performed in a collection of 2,634,176 tweets gathered from Twitter in a period of six months. Results show that the proposed method is an interesting tool to characterize a city population based on a machine learning methods and text analytics. | In the end, it is clear that an unsupervised model for text representation is needed to give robust topic-independent text representations. One of the most successful and widely used text representation models is Word2Vec, which has proven to give good results regardless of the language in opinion mining and topic detection duties. For instance, @cite_41 combined Word2Vec and a bag-of-words document classifier, and showed that Word2Vec provided word embeddings that produced more stable results when doing cross-domain classification experiments. Also, since Word2Vec was first introduced, there has been some research trying to improve and fine-tune word embeddings. Such is the case of @cite_6 that proposes a hybrid model between skip-gram model and continuous bag of words (CBOW) called mixed word embedding (MWE). All in all, we choose word embedding models such as Word2Vec for being able to embed semantic similarities between words in a similarity metric defined over a Euclidean vector space. | {
"cite_N": [
"@cite_41",
"@cite_6"
],
"mid": [
"2509219942",
"2404244834"
],
"abstract": [
"Vector-based word representations can help to improve a document classifier.The information of word2vec vectors and bags of words are very complementary.The combination of word2vec and BOW word representations obtains the best results.Word2vec is much more stable than bag of words models in cross-domain experiments. In this paper we show how a vector-based word representation obtained via word2vec can help to improve the results of a document classifier based on bags of words. Both models allow obtaining numeric representations from texts, but they do it very differently. The bag of words model can represent documents by means of widely dispersed vectors in which the indices are words or groups of words. word2vec generates word level representations building vectors that are much more compact, where indices implicitly contain information about the context of word occurrences. Bags of words are very effective for document classification and in our experiments no representation using only word2vec vectors is able to improve their results. However, this does not mean that the information provided by word2vec is not useful for the classification task. When this information is used in combination with the bags of words, the results are improved, showing its complementarity and its contribution to the task. We have also performed cross-domain experiments in which word2vec has shown much more stable behavior than bag of words models.",
"Learning distributed word representations has been a popular method for various natural language processing applications such as word analogy and similarity, document classification and sentiment analysis. However, most existing word embedding models only exploit a shallow slide window as the context to predict the target word. Because the semantic of each word is also influenced by its global context, as the distributional models usually induced the word representations from the global co-occurrence matrix, the window-based models are insufficient to capture semantic knowledge. In this paper, we propose a novel hybrid model called mixed word embedding (MWE) based on the well-known word2vec toolbox. Specifically, the proposed MWE model combines the two variants of word2vec, i.e., SKIP-GRAM and CBOW, in a seamless way via sharing a common encoding structure, which is able to capture the syntax information of words more accurately. Furthermore, it incorporates a global text vector into the CBOW variant so as to capture more semantic information. Our MWE preserves the same time complexity as the SKIP-GRAM. To evaluate our MWE model efficiently and adaptively, we study our model on linguistic and application perspectives with both English and Chinese dataset. For linguistics, we conduct empirical studies on word analogies and similarities. The learned latent representations on both document classification and sentiment analysis are considered for application point of view of this work. The experimental results show that our MWE model is very competitive in all tasks as compared with the state-of-the-art word embedding models such as CBOW, SKIP-GRAM, and GloVe."
]
} |
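The value of Word2Vec-style embeddings discussed in the row above reduces to comparing words by cosine similarity in the learned vector space. A self-contained sketch with toy 3-dimensional vectors follows; the vectors are made up for illustration, whereas real embeddings would come from a trained model (e.g., gensim's Word2Vec):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings: semantically close words get nearby vectors
embeddings = {
    "bogota":   [0.9, 0.1, 0.2],
    "medellin": [0.8, 0.2, 0.3],
    "flu":      [0.1, 0.9, 0.7],
}

sim_cities = cosine_similarity(embeddings["bogota"], embeddings["medellin"])
sim_mixed = cosine_similarity(embeddings["bogota"], embeddings["flu"])
```

With well-trained embeddings, `sim_cities` should exceed `sim_mixed`, which is exactly the property that makes clustering users by the embeddings of their tweets meaningful.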
1904.08926 | 2937115320 | Abstract With the increasing use of the Internet and mobile devices, social networks are becoming the most used media to communicate citizens' ideas and thoughts. This information is very useful to identify communities with common ideas based on what they publish in the network. This paper presents a method to automatically detect city communities based on machine learning techniques applied to a set of tweets from Bogota’s citizens. An analysis was performed in a collection of 2,634,176 tweets gathered from Twitter in a period of six months. Results show that the proposed method is an interesting tool to characterize a city population based on a machine learning methods and text analytics. | Advances in sentiment analysis have been reported in recent works such as @cite_15 , where state of the art methods were surveyed and compared. Deep learning methods are being used in works such as @cite_0 to analyze sentiments in Persian texts. Deep convolutional neural networks have been also investigated to analyze sentiments in Twitter @cite_40 . Deep learning based methods have been used to detect malicious accounts in location-based social networks @cite_14 . One recent work used a Bayesian network and fuzzy recurrent neural networks for detecting subjectivity @cite_36 . | {
"cite_N": [
"@cite_14",
"@cite_36",
"@cite_0",
"@cite_40",
"@cite_15"
],
"mid": [
"2901141332",
"2734205292",
"2886884189",
"2781487490",
"2492922441"
],
"abstract": [
"Our daily lives have been immersed in widespread location-based social networks (LBSNs). As an open platform, LBSNs typically allow all kinds of users to register accounts. Malicious attackers can easily join and post misleading information, often with the intention of influencing users' decisions in urban computing environments. To provide reliable information and improve the experience for legitimate users, we design and implement DeepScan, a malicious account detection system for LBSNs. Different from existing approaches, DeepScan leverages emerging deep learning technologies to learn users' dynamic behavior. In particular, we introduce the long short-term memory (LSTM) neural network to conduct time series analysis of user activities. DeepScan combines newly introduced time series features and a set of conventional features extracted from user activities, and exploits a supervised machine-learning-based model for detection. Using real traces collected from Dianping, a representative LBSN, we demonstrate that DeepScan can achieve excellent prediction performance with an F1-score of 0.964. We also find that the time series features play a critical role in the detection system.",
"Abstract Subjectivity detection is a task of natural language processing that aims to remove ‘factual’ or ‘neutral’ content, i.e., objective text that does not contain any opinion, from online product reviews. Such a pre-processing step is crucial to increase the accuracy of sentiment analysis systems, as these are usually optimized for the binary classification task of distinguishing between positive and negative content. In this paper, we extend the extreme learning machine (ELM) paradigm to a novel framework that exploits the features of both Bayesian networks and fuzzy recurrent neural networks to perform subjectivity detection. In particular, Bayesian networks are used to build a network of connections among the hidden neurons of the conventional ELM configuration in order to capture dependencies in high-dimensional data. Next, a fuzzy recurrent neural network inherits the overall structure generated by the Bayesian networks to model temporal features in the predictor. Experimental results confirmed the ability of the proposed framework to deal with standard subjectivity detection problems and also proved its capacity to address portability across languages in translation tasks.",
"The rise of social media is enabling people to freely express their opinions about products and services. The aim of sentiment analysis is to automatically determine subject’s sentiment (e.g., positive, negative, or neutral) towards a particular aspect such as topic, product, movie, news etc. Deep learning has recently emerged as a powerful machine learning technique to tackle a growing demand of accurate sentiment analysis. However, limited work has been conducted to apply deep learning algorithms to languages other than English, such as Persian. In this work, two deep learning models (deep autoencoders and deep convolutional neural networks (CNNs)) are developed and applied to a novel Persian movie reviews dataset. The proposed deep learning models are analyzed and compared with the state-of-the-art shallow multilayer perceptron (MLP) based machine learning model. Simulation results demonstrate the enhanced performance of deep learning over state-of-the-art MLP.",
"Twitter sentiment analysis technology provides the methods to survey public emotion about the events or products related to them. Most of the current researches are focusing on obtaining sentiment features by analyzing lexical and syntactic features. These features are expressed explicitly through sentiment words, emoticons, exclamation marks, and so on. In this paper, we introduce a word embeddings method obtained by unsupervised learning based on large twitter corpora, this method using latent contextual semantic relationships and co-occurrence statistical characteristics between words in tweets. These word embeddings are combined with n-grams features and word sentiment polarity score features to form a sentiment feature set of tweets. The feature set is integrated into a deep convolution neural network for training and predicting sentiment classification labels. We experimentally compare the performance of our model with the baseline model that is a word n-grams model on five Twitter data sets, the results indicate that our model performs better on the accuracy and F1-measure for twitter sentiment classification.",
"With the advent of Internet, people actively express their opinions about products, services, events, political parties, etc., in social media, blogs, and website comments. The amount of research work on sentiment analysis is growing explosively. However, the majority of research efforts are devoted to English-language data, while a great share of information is available in other languages. We present a state-of-the-art review on multilingual sentiment analysis. More importantly, we compare our own implementation of existing approaches on common data. Precision observed in our experiments is typically lower than the one reported by the original authors, which we attribute to the lack of detail in the original presentation of those approaches. Thus, we compare the existing works by what they really offer to the reader, including whether they allow for accurate implementation and for reliable reproduction of the reported results."
]
} |
1904.08926 | 2937115320 | Abstract With the increasing use of the Internet and mobile devices, social networks are becoming the most used media to communicate citizens' ideas and thoughts. This information is very useful to identify communities with common ideas based on what they publish in the network. This paper presents a method to automatically detect city communities based on machine learning techniques applied to a set of tweets from Bogota’s citizens. An analysis was performed in a collection of 2,634,176 tweets gathered from Twitter in a period of six months. Results show that the proposed method is an interesting tool to characterize a city population based on a machine learning methods and text analytics. | With regard to one of the particular objectives of our work: detecting communities, several methods have been developed in the last couple of decades to solve the so-called planted @math -partition model, where the structure of graphs are studied to find densely connected groups of nodes (see refs @cite_34 @cite_5 for excellent reviews). More modern methods based on embedding communities in low-dimensional vector spaces try to solve problems such as node clustering, node classification, low-dimensional visualizations, edges prediction, among others with great success @cite_22 @cite_1 . However, we shall point out that this is a very active area of research with many facets, and as argued in @cite_21 , community detection should not be considered as a well-defined problem, but instead, should be motivated by particular reasons. In this sense, our motivation for detecting communities is to find groups of people with a clear topic of interest, regardless of whether such groups of people follow each other on Twitter. This means that we do not know from the beginning any connection between the nodes (users), and we aim to detect communities solely based on the data that characterizes each node, i.e. the text representation of each user's tweets. | {
"cite_N": [
"@cite_22",
"@cite_21",
"@cite_1",
"@cite_5",
"@cite_34"
],
"mid": [
"2767849480",
"2776498699",
"",
"2497752945",
"1995996823"
],
"abstract": [
"In this paper, we study an important yet largely under-explored setting of graph embedding, i.e., embedding communities instead of each individual nodes. We find that community embedding is not only useful for community-level applications such as graph visualization, but also beneficial to both community detection and node classification. To learn such embedding, our insight hinges upon a closed loop among community embedding, community detection and node embedding. On the one hand, node embedding can help improve community detection, which outputs good communities for fitting better community embedding. On the other hand, community embedding can be used to optimize the node embedding by introducing a community-aware high-order proximity. Guided by this insight, we propose a novel community embedding framework that jointly solves the three tasks together. We evaluate such a framework on multiple real-world datasets, and show that it improves graph visualization and outperforms state-of-the-art baselines in various application tasks, e.g., community detection and node classification.",
"A precise definition of what constitutes a community in networks has remained elusive. Consequently, network scientists have compared community detection algorithms on benchmark networks with a particular form of community structure and classified them based on the mathematical techniques they employ. However, this comparison can be misleading because apparent similarities in their mathematical machinery can disguise different reasons for why we would want to employ community detection in the first place. Here we provide a focused review of these different motivations that underpin community detection. This problem-driven classification is useful in applied network science, where it is important to select an appropriate algorithm for the given purpose. Moreover, highlighting the different approaches to community detection also delineates the many lines of research and points out open directions and avenues for future research.",
"",
"Many community detection algorithms have been developed to uncover the mesoscopic properties of complex networks. However how good an algorithm is, in terms of accuracy and computing time, remains still open. Testing algorithms on real-world network has certain restrictions which made their insights potentially biased: the networks are usually small, and the underlying communities are not defined objectively. In this study, we employ the Lancichinetti-Fortunato-Radicchi benchmark graph to test eight state-of-the-art algorithms. We quantify the accuracy using complementary measures and algorithms' computing time. Based on simple network properties and the aforementioned results, we provide guidelines that help to choose the most adequate community detection algorithm for a given network. Moreover, these rules allow uncovering limitations in the use of specific algorithms given macroscopic network properties. Our contribution is threefold: firstly, we provide actual techniques to determine which is the most suited algorithm in most circumstances based on observable properties of the network under consideration. Secondly, we use the mixing parameter as an easily measurable indicator of finding the ranges of reliability of the different algorithms. Finally, we study the dependency with network size focusing on both the algorithm's predicting power and the effective computing time.",
"Uncovering the community structure exhibited by real networks is a crucial step toward an understanding of complex systems that goes beyond the local organization of their constituents. Many algorithms have been proposed so far, but none of them has been subjected to strict tests to evaluate their performance. Most of the sporadic tests performed so far involved small networks with known community structure and or artificial graphs with a simplified structure, which is very uncommon in real systems. Here we test several methods against a recently introduced class of benchmark graphs, with heterogeneous distributions of degree and community size. The methods are also tested against the benchmark by Girvan and Newman [Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002)] and on random graphs. As a result of our analysis, three recent algorithms introduced by Rosvall and Bergstrom [Proc. Natl. Acad. Sci. U.S.A. 104, 7327 (2007); Proc. Natl. Acad. Sci. U.S.A. 105, 1118 (2008)], [J. Stat. Mech.: Theory Exp. (2008), P10008], and Ronhovde and Nussinov [Phys. Rev. E 80, 016109 (2009)] have an excellent performance, with the additional advantage of low computational complexity, which enables one to analyze large systems."
]
} |
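The community-detection approach motivated in the row above — grouping users by the vector representation of their tweets rather than by follower links — can be sketched as a plain k-means loop over per-user embeddings. Everything here (the toy 2-D vectors, k=2, the farthest-point initialization, the iteration count) is an illustrative assumption, not the paper's actual pipeline:

```python
def kmeans(points, k=2, iters=10):
    """Naive k-means with deterministic farthest-point initialization.
    Returns a cluster label for each point."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    # Seed the first center with the first point, then repeatedly add
    # the point farthest from all centers chosen so far.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))

    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: dist2(p, centers[c]))
        # Update step: each center moves to the mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Two obvious "communities" of users in a toy 2-D embedding space
users = [[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],   # community A
         [0.90, 0.80], [0.80, 0.90], [0.85, 0.85]]   # community B
labels = kmeans(users, k=2)
```

Because no edges between users are assumed, this clustering depends only on the text-derived vectors, matching the stated motivation of detecting topic-based communities without a known follower graph.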
1904.08916 | 2939927288 | Injuries are a major cost in sports. Teams spend millions of dollars every year on players who are hurt and unable to play, resulting in lost games, decreased fan interest and additional wages for replacement players. Modern convolutional neural networks have been successfully applied to many video recognition tasks. In this paper, we introduce the problem of injury detection prediction in MLB pitchers and experimentally evaluate the ability of such convolutional models to detect and predict injuries in pitches only from video data. We conduct experiments on a large dataset of TV broadcast MLB videos of 20 different pitchers who were injured during the 2017 season. We experimentally evaluate the model's performance on each individual pitcher, how well it generalizes to new pitchers, how it performs for various injuries, and how early it can predict or detect an injury. | Video activity recognition is a popular research topic in computer vision @cite_22 @cite_28 @cite_32 @cite_23 @cite_20 . Early works focused on hand-crafted features, such as dense trajectories @cite_23 and showed promising results. Recently, convolutional neural networks (CNNs) have out-performed the hand-crafted approaches @cite_26 . A standard multi-stream CNN approaches takes input of RGB frames and optical flows @cite_32 @cite_33 or RGB frames at different frame-rates @cite_5 which are used for classification, capturing different features. 3D (spatio-temproal) convolutional models have been trained for activity recognition tasks @cite_8 @cite_26 @cite_6 . To train these CNN models, large scale datasets such as Kinetics @cite_19 and Moments-in-Time @cite_0 have been created. | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_33",
"@cite_8",
"@cite_28",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_20"
],
"mid": [
"2963524571",
"1983705368",
"2963015194",
"2122476475",
"2016053056",
"2156303437",
"2902904290",
"2962711930",
"2619947201",
"2126574503",
"2904419413",
"2167626157"
],
"abstract": [
"The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.2 on HMDB-51 and 97.9 on UCF-101.",
"Human activity recognition is an important area of computer vision research. Its applications include surveillance systems, patient monitoring systems, and a variety of systems that involve interactions between persons and electronic devices such as human-computer interfaces. Most of these applications require an automated recognition of high-level activities, composed of multiple simple (or atomic) actions of persons. This article provides a detailed overview of various state-of-the-art research papers on human activity recognition. We discuss both the methodologies developed for simple human actions and those for high-level activities. An approach-based taxonomy is chosen that compares the advantages and limitations of each approach. Recognition methodologies for an analysis of the simple actions of a single person are first presented in the article. Space-time volume approaches and sequential approaches that represent and recognize activities directly from input images are discussed. Next, hierarchical recognition methodologies for high-level activities are presented and compared. Statistical approaches, syntactic approaches, and description-based approaches for hierarchical recognition are discussed in the article. In addition, we further discuss the papers on the recognition of human-object interactions and group activities. Public datasets designed for the evaluation of the recognition methodologies are illustrated in our article as well, comparing the methodologies' performances. This review will provide the impetus for future research in more productive areas.",
"",
"",
"Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3 to 63.9 ), but only a surprisingly modest improvement compared to single-frame models (59.3 to 60.9 ). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3 up from 43.9 ).",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"In this paper, we present a new method for evolving video CNN models to find architectures that more optimally captures rich spatio-temporal information in videos. Previous work, taking advantage of 3D convolutional layers, obtained promising results by manually designing CNN architectures for videos. We here develop an evolutionary algorithm that automatically explores models with different types and combinations of space-time convolutional layers to jointly capture various spatial and temporal aspects of video representations. We further propose a new key component in video model evolution, the iTGM layer, which more efficiently utilizes its parameters to allow learning of space-time interactions over longer time horizons. The experiments confirm the advantages of our video CNN architecture evolution, with results outperforming previous state-of-the-art models. Our algorithm discovers new and interesting video architecture structures.",
"We present the Moments in Time Dataset, a large-scale human-annotated collection of one million short videos corresponding to dynamic events unfolding within three seconds. Modeling the spatial-audio-temporal dynamics even for actions occurring in 3 second videos poses many challenges: meaningful events do not include only people, but also objects, animals, and natural phenomena; visual and auditory events can be symmetrical or not in time (\"opening\" means \"closing\" in reverse order), and transient or sustained. We describe the annotation process of our dataset (each video is tagged with one action or activity label among 339 different classes), analyze its scale and diversity in comparison to other large-scale video datasets for action recognition, and report results of several baseline models addressing separately, and jointly, three modalities: spatial, temporal and auditory. The Moments in Time dataset, designed to have a large coverage and diversity of events in both visual and auditory modalities, can serve as a new challenge to develop models that scale to the level of complexity and abstract reasoning that a human processes on a daily basis.",
"We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.",
"Feature trajectories have shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We, also, investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports.",
"We present SlowFast networks for video recognition. Our model involves (i) a Slow pathway, operating at low frame rate, to capture spatial semantics, and (ii) a Fast pathway, operating at high frame rate, to capture motion at fine temporal resolution. The Fast pathway can be made very lightweight by reducing its channel capacity, yet can learn useful temporal information for video recognition. Our models achieve strong performance for both action classification and detection in video, and large improvements are pin-pointed as contributions by our SlowFast concept. We report state-of-the-art accuracy on major video recognition benchmarks, Kinetics, Charades and AVA. Code will be made publicly available in PyTorch.",
"This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as hostile interactions like 'punching the observer' or 'throwing objects to the observer', whose videos involve a large amount of camera ego-motion caused by physical interactions. The paper investigates multi-channel kernels to integrate global and local motion information, and presents a new activity learning recognition methodology that explicitly considers temporal structures displayed in first-person activity videos. In our experiments, we not only show classification results with segmented videos, but also confirm that our new approach is able to detect activities from continuous videos reliably."
]
} |
1904.08916 | 2939927288 | Injuries are a major cost in sports. Teams spend millions of dollars every year on players who are hurt and unable to play, resulting in lost games, decreased fan interest and additional wages for replacement players. Modern convolutional neural networks have been successfully applied to many video recognition tasks. In this paper, we introduce the problem of injury detection/prediction in MLB pitchers and experimentally evaluate the ability of such convolutional models to detect and predict injuries in pitchers only from video data. We conduct experiments on a large dataset of TV broadcast MLB videos of 20 different pitchers who were injured during the 2017 season. We experimentally evaluate the model's performance on each individual pitcher, how well it generalizes to new pitchers, how it performs for various injuries, and how early it can predict or detect an injury. | Many works have studied prediction and prevention of injuries in athletes by developing models based on simple data (e.g., physical stats or social environment) @cite_10 @cite_11 or cognitive and psychological factors (stress, life support, identity, etc.) @cite_2 @cite_31 . Others made predictions based on measured strength before a season @cite_17 . Placing sensors on players to monitor their movements has been used to detect pitching events, but not for injury detection or prediction @cite_12 @cite_9 . Further, sonography (ultrasound) of elbows has been used to detect injuries by human experts @cite_7 . | {
"cite_N": [
"@cite_11",
"@cite_7",
"@cite_9",
"@cite_2",
"@cite_31",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2465886640",
"2131434230",
"2524923960",
"2097738768",
"564657457",
"2100256858",
"2105247495",
"1970848617"
],
"abstract": [
"Background Several studies have suggested that psychosocial variables can increase the risk of becoming injured during sport participation.",
"OBJECTIVE. The aim of this study was to determine the usefulness of sonography for detecting elbow injuries among young baseball players.SUBJECTS AND METHODS. One hundred fifty-three volunteers ranging in age from 9 to 12 years and belonging to youth baseball teams participated. Sonography of the elbow was performed in the field when baseball exercises were being conducted. We analyzed the relationship between elbow pain and sonographic abnormalities and the relationship between pitchers and sonographic abnormalities.RESULTS. Sonography showed that 33 subjects had medial epicondylar fragmentation and two had early-stage osteochondritis dissecans of the capitellum. In 25 subjects who agreed to further examination and treatment, radiography confirmed the sonographic findings. All of the 23 subjects with medial epicondylar fragmentation, who stopped throwing, obtained union of the bone and returned to baseball. The two subjects with osteochondritis dissecans of the capitellum underwent surgery before the ost...",
"Purpose: Throwing loads are known to be closely related to injury risk, however for logistic reasons, typically only pitchers have their throws counted, and then only during innings. Accordingly, all other throws made are not counted, and therefore estimates of throws made by players may be inaccurately recorded and under-reported. A potential solution to this is the use of wearable microtechnology to automatically detect, quantify, and report pitch counts in baseball. This study investigated the accuracy of baseball pitching and throwing detection in both practice and competition using a commercially available wearable microtechnology unit. Methods: Seventeen elite youth baseball players (mean ± SD age 16.5 ± 0.8 years; height 184.1 ± 5.5 cm; mass 78.3 ± 7.7 kg) participated in this study. Participants performed pitching, fielding, and throwing events during practice and competition while wearing a microtechnology unit (MinimaxX S4, Catapult Innovations, Melbourne, Australia). Sensitivity and specificity o...",
"Two interrelated studies examined the role psychological factors play in the prediction and prevention of sport related injury. Study 1 involved 470 rugby players who completed measures corresponding to variables in the revised Williams and Andersen (1998) stress and injury model at the beginning of the 2001 playing season. Prospective and objective data were obtained for both the number of injuries and the time missed. Results showed that social support, the type of coping, and previous injury interacted in a conjunctive fashion to maximize the relationship between life stress and injury. Study 2 examined the effectiveness of a cognitive behavioral stress management (CBSM) intervention in reducing injury among athletes from Study 1 who were identified as having an at-risk psychological profile for injury. Forty-eight players were randomly assigned to either a CBSM intervention or a no-contact control condition. Participants completed psychological measures of coping and competitive anxiety at the beginni...",
"Previous research has examined factors that predispose collegiate football players to injury (e.g., Petrie, 1993a, 1993b) as well as factors that influence athletes' psychological adjustment to being injured (e.g., Brewer, 1993; Leddy, Lambert, & Ogles, 1994). Despite the reports of the NCAA Injury Surveillance System that the greatest number of football injuries occur during the spring preseason (NCAA, 1997), studies have only examined injury during the regular season. Thus, the purpose of this study was to investigate the antecedents and consequences of injury in collegiate football players during the spring preseason and across the regular competitive season. Specifically, life stress, social support, competitive trait anxiety, athletic identity, coping style, and preinjury mood state was measured to determine their relationship with the occurrence of injury and with postinjury emotional responses in athletes who sustain an injury at some point during either the spring preseason or regular competitive football season. The overall incidence of athletic injuries was low and the athletes suffered more severe injuries than has been typically found in collegiate football samples. Negative life stress was found to be directly related to the occurrence of injury and to postinjury negative emotional response and was moderated by other psychosocial variables in its influence on the occurrence of injury. Positive life stress was unrelated to injury risk or postinjury emotional response. Social support, sport anxiety, coping, and athletic identity were all found to moderate the negative life stress-injury relationship, as did playing status, suggesting that the complex combinations of these variables increase athletes' susceptibility to the impact of negative life stress. The athletes in this study experienced significant negative emotions following injury. After sustaining injuries they experienced levels of anger, depression, and fatigue that were similar to male psychiatric patients. Injury severity and preinjury mood were found to be the best predictors of postinjury emotional response. Of the psychosocial variables, only social support and sport anxiety were found to be predictive of negative emotional responses following injury. Previously identified relationships between postinjury emotional responses and situational and dispositional variables were replicated and extended.",
"A theoretical model of stress and athletic injury is presented. The purpose of this paper is to propose a framework for the prediction and prevention of stress-related injuries that includes cognitive, physiological, attentional, behavioral, intrapersonal, social, and stress history variables. Development of the model grew from a synthesis of the stress-illness, stress-accident, and stress-injury literatures. The model and its resulting hypotheses offer a framework for many avenues of research into the nature of injury and reduction of injury risk. Other advantages of the model are that it addresses possible mechanisms behind the stress-injury relationship and suggests several specific interventions that may help diminish the likelihood of injury. The model also has the potential of being applied to the investigation of injury and accident occurrence in general.",
"This paper introduces a compact, wireless, wearable system that measures signals indicative of forces, torques and other descriptive and evaluative features that the human body undergoes during bursts of extreme physical activity (such as during athletic performance). Standard approaches leverage high-speed camera systems, which need significant infrastructure and provide limited update rates and dynamic accuracy. This project uses 6 degree-of freedom inertial measurement units worn on various segments of an athlete’s body to directly make these dynamic measurements. A combination of low and high range sensors enables sensitivity for both slow and fast motion, and the addition of a compass helps in tracking joint angles. Data from the battery-powered nodes is acquired using a custom wireless protocol over an RF link and analyzed offline. Several professional pitchers and batters were instrumented with the system and data was gathered over many pitches and swings. We show some biomechanically descriptive parameters extracted from this data, and highlight ongoing work and system improvements",
"Background: Collegiate football is a high-demand sport in which shoulder injuries are common. Research has described the incidence of these injuries, with little focus on causative factors or injury prevention. Hypothesis: Football athletes who score lower on preseason strength and functional testing are more likely to sustain an in-season shoulder injury. Study Design: Prospective, cohort study. Level of Evidence: Level 2. Methods: Twenty-six collegiate football players underwent preseason testing with a rotational profile for shoulder range of motion, isometric strength of the rotator cuff at 90° elevation and external rotation in the 90/90 position, fatigue testing (prone-Y, scaption, and standing cable press), and the Closed Kinetic Chain Upper Extremity Stability Test (CKCUEST). Data collected postseason included the type of shoulder injury and the side injured. Logistic regression was used to determine if the testing measures predicted injury, and a receiver operating characteristic curve was constructed to ..."
]
} |
1904.08910 | 2939991587 | Watching cartoons can be useful for children's intellectual, social and emotional development. However, the most popular video sharing platform today provides many videos with Elsagate content. Elsagate is a phenomenon that depicts childhood characters in disturbing circumstances (e.g., gore, toilet humor, drinking urine, stealing). Even with this threat easily available for children, there is no work in the literature addressing the problem. As the first to explore disturbing content in cartoons, we proceed from the most recent pornography detection literature applying deep convolutional neural networks combined with static and motion information of the video. Our solution is compatible with mobile platforms and achieved 92.6% of accuracy. Our goal is not only to introduce the first solution but also to bring up the discussion around Elsagate. | Alghowinem @cite_9 proposed a multimodal approach to detect inappropriate content in videos from YouTube Kids. For that, one-second slices are extracted for analysis and classification. Image frames, audio signal, transcribed text and their respective features (e.g., temporal robust features (TRoF), mel-frequency cepstral coefficient (MFCC), bag-of-words) are extracted from each slice. These features are then fed into individual classifiers, which are combined using a threshold-based decision strategy. According to Alghowinem, the paper acted as a proof of concept. However, the pilot experiment was performed on three videos, which are not even cartoons. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2900026217"
],
"abstract": [
"Acknowledging the advantages as well as the dangers of the internet content on kids' education and entertainment, YouTube Kids was created. Based on regulations for child-friendly programs, several violations are identified and restricted from viewable content. When a child surfs the Internet, the same regulations could be automatically detected and filtered. However, current YouTube Kids content filtering relies on meta-data attributes, where inappropriate content could pass the filtering mechanism. This research proposes an advanced real-time content filtering approach using automated video and audio analysis as an extra layer for kids' safety. The proposed method utilizes the thin-slicing theory, where several one-second slices are selected randomly from the clip and extracted. The use of a one-second slice will assure a temporal analysis of the clip content, and ensures a real-time content analysis. For each slice, audio is automatically transcribed using automatic speech recognition techniques to be further analysed for its linguistic content. Furthermore, the audio signal is analysed to detect events and scenes (e.g. explosion). The image frames extracted from the slices are also inspected for their content to avoid inappropriate scenes, such as violence. Upon the success of this approach on the YouTube Kids application, investigation of its generalizability to other video applications, and other languages could be performed."
]
} |
1904.08910 | 2939991587 | Watching cartoons can be useful for children's intellectual, social and emotional development. However, the most popular video sharing platform today provides many videos with Elsagate content. Elsagate is a phenomenon that depicts childhood characters in disturbing circumstances (e.g., gore, toilet humor, drinking urine, stealing). Even with this threat easily available for children, there is no work in the literature addressing the problem. As the first to explore disturbing content in cartoons, we proceed from the most recent pornography detection literature applying deep convolutional neural networks combined with static and motion information of the video. Our solution is compatible with mobile platforms and achieved 92.6% of accuracy. Our goal is not only to introduce the first solution but also to bring up the discussion around Elsagate. | @cite_26 also explored violence detection in cartoons. They proposed a three-layered video classification framework: keyframe extraction, feature extraction using scale-invariant feature transform (SIFT), feature encoding using Fisher vector image representation and classification using spectral regression kernel discriminant analysis (SRKDA). They evaluated their approach on 100 videos, collected from various sources. The dataset (not publicly available) comprises nearly 2 hours (7,100 seconds) of 52 violent and 48 non-violent videos. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2908327111"
],
"abstract": [
"Children are the most vulnerable to ideas presented in cartoon videos and TV. Cartoons have become one of the most important sources of entertainment, but they also introduce a lot of ideas that are not suitable for them. Violence is one of the unwanted features that is prevalent in cartoons to put an element of fantasy and enchantment. In order to stop children from viewing violent/intense cartoons, the best strategy is to make them inaccessible. Therefore, some sort of filters should be placed at certain hubs to perform this task. The challenge is how a filter will know that a particular cartoon video has violent content in it. The meta-data telling the world about the video does not inform that the video consists of violent material. Certain frames/snapshots/images of video, if analyzed using image processing techniques, can help in concluding that a particular video has intense material in it. The aim of this work is to classify social media videos especially related to animated cartoons with violent/nonviolent behaviors. It addresses the problem of content based image matching algorithms based on key point descriptors. The basic goal is to extract general information from an image without any specific query. First SIFT-descriptors are extracted from a large set of images. This set of descriptors is then defined as a means of providing fast and accurate comparisons between images and distinguishing between violent and nonviolent images in combination with Machine Learning algorithms. The results are then compared for each classifier with varying parameters."
]
} |
1904.08910 | 2939991587 | Watching cartoons can be useful for children's intellectual, social and emotional development. However, the most popular video sharing platform today provides many videos with Elsagate content. Elsagate is a phenomenon that depicts childhood characters in disturbing circumstances (e.g., gore, toilet humor, drinking urine, stealing). Even with this threat easily available for children, there is no work in the literature addressing the problem. As the first to explore disturbing content in cartoons, we proceed from the most recent pornography detection literature applying deep convolutional neural networks combined with static and motion information of the video. Our solution is compatible with mobile platforms and achieved 92.6% of accuracy. Our goal is not only to introduce the first solution but also to bring up the discussion around Elsagate. | @cite_13 studied the Elsagate phenomenon (this paper was not available before our submission) using the video's titles, tags, thumbnails and general statistics (e.g., views, likes, dislikes). They proposed to process each type of feature using a different technique and to apply a fully-connected layer to combine their outputs. Despite the 82.8% In the face of the related works, it is clear that there is a lack of research specifically for the Elsagate problem. As a matter of fact, we have in the literature plenty of solutions for sensitive content analysis but these works are focused on real-life videos with humans, regardless of the type of sensitive content (e.g., nudity @cite_18 @cite_19 , pornography @cite_27 @cite_15 @cite_21 , child pornography @cite_25 , violence @cite_1 @cite_22 @cite_6 ). | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_15",
"@cite_13",
"@cite_25"
],
"mid": [
"",
"2612693684",
"2811288179",
"2295144887",
"2886554762",
"2141423371",
"2560838595",
"2364255476",
"2914279369",
"2780678160"
],
"abstract": [
"",
"Automatically detecting violence in videos is paramount for enforcing the law and providing the society with better policies for safer public places. In addition, it may be essential for protecting minors from accessing inappropriate contents on-line, and for helping parents choose suitable movie titles for their children. However, this is an open problem as the very definition of violence is subjective and may vary from one society to another. Detecting such nuances from video footages with no human supervision is very challenging. Clearly, when designing a computer-aided solution to this problem, we need to think of efficient (quickly harness large troves of data) and effective detection methods (robustly filter what needs special attention and further analysis). In this vein, we explore a content description method for violence detection founded upon temporal robust features that quickly grasp video sequences, automatically classifying violent videos. The used method also holds promise for fast and effective classification of other recognition tasks (e.g., pornography and other inappropriate material). When compared to more complex counterparts for violence detection, the method shows similar classification quality while being several times more efficient in terms of runtime and memory footprint.",
"Abstract The very idea of hiring humans to avoid the indiscriminate spread of inappropriate sensitive content online (e.g., child pornography and violence) is daunting. The inherent data deluge and the tediousness of the task call for more adequate approaches, and set the stage for computer-aided methods. If running in the background, such methods could readily cut the stream flow at the very moment of inadequate content exhibition, being invaluable for protecting unwary spectators. Except for the particular case of violence detection, related work to sensitive video analysis has mostly focused on deciding whether or not a given stream is sensitive, leaving the localization task largely untapped. Identifying when a stream starts and ceases to display inappropriate content is key for live streams and video on demand. In this work, we propose a novel multimodal fusion approach to sensitive scene localization. The solution can be applied to diverse types of sensitive content, without the need for step modifications (general purpose). We leverage the multimodality data nature of videos (e.g., still frames, video space-time, audio stream, etc.) to effectively single out frames of interest. To validate the solution, we perform localization experiments on pornographic and violent video streams, two of the commonest types of sensitive content, and report quantitative and qualitative results. The results show, for instance, that the proposed method only misses about five minutes in every hour of streamed pornographic content. Finally, for the particular task of pornography localization, we also introduce the first frame-level annotated pornographic video dataset to date, which comprises 140 h of video, freely available for downloading.",
"This paper presents the RECOD approaches used in the MediaEval 2014 Violent Scenes Detection task. Our system is based on the combination of visual, audio, and text features. We also evaluate the performance of a convolutional network as a feature extractor. We combined those features using a fusion scheme. We participated in the main and the generalization tasks.",
"Detecting violence in videos through automatic means is significant for law enforcement and analysis of surveillance cameras with the intent of maintaining public safety. Moreover, it may be a great tool for protecting children from accessing inappropriate content and help parents make a better informed decision about what their kids should watch. However, this is a challenging problem since the very definition of violence is broad and highly subjective. Hence, detecting such nuances from videos with no human supervision is not only technical, but also a conceptual problem. With this in mind, we explore how to better describe the idea of violence for a convolutional neural network by breaking it into more objective and concrete parts. Initially, our method uses independent networks to learn features for more specific concepts related to violence, such as fights, explosions, blood, etc. Then we use these features to classify each concept and later fuse them in a meta-classification to describe violence. We also explore how to represent time-based events in still-images as network inputs; since many violent acts are described in terms of movement. We show that using more specific concepts is an intuitive and effective solution, besides being complementary to form a more robust definition of violence. When compared to other methods for violence detection, this approach holds better classification quality while using only automatic features.",
"The ability to filter improper content from multimedia sources based on visual content has important applications, since text-based filters are clearly insufficient against erroneous and/or malicious associations between text and actual content. In this paper, we investigate a method for detection of nudity in videos based on a bag-of-visual-features representation for frames and an associated voting scheme. Bag-of-Visual-Features (BoVF) approaches have been successfully applied to object recognition and scene classification, showing robustness to occlusion and also to the several kinds of variations that normally curse object detection methods. To the best of our knowledge, only two proposals in the literature use BoVF for nude detection in still images, and no other attempt has been made at applying BoVF for videos. Nevertheless, the results of our experiments show that this approach is indeed able to provide good recognition rates for nudity even at the frame level and with a relatively low sampling ratio. Also, the proposed voting scheme significantly enhances the recognition rates for video segments, achieving, in the best case, a value of 93.2% of correct classification, using a sampling ratio of 1/15 frames. Finally, a visual analysis of some particular cases indicates possible sources of misclassifications.",
"Recent literature has explored automated pornographic detection - a bold move to replace humans in the tedious task of moderating online content. Unfortunately, on scenes with high skin exposure, such as people sunbathing and wrestling, the state of the art can have many false alarms. This paper is based on the premise that incorporating motion information in the models can alleviate the problem of mapping skin exposure to pornographic content, and advances the bar on automated pornography detection with the use of motion information and deep learning architectures. Deep Learning, especially in the form of Convolutional Neural Networks, have striking results on computer vision, but their potential for pornography detection is yet to be fully explored through the use of motion information. We propose novel ways for combining static (picture) and dynamic (motion) information using optical flow and MPEG motion vectors. We show that both methods provide equivalent accuracies, but that MPEG motion vectors allow a more efficient implementation. The best proposed method yields a classification accuracy of 97.9% - an error reduction of 64.4% when compared to the state of the art - on a dataset of 800 challenging test cases. Finally, we present and discuss results on a larger, and more challenging, dataset.",
"Abstract With the growing amount of inappropriate content on the Internet, such as pornography, arises the need to detect and filter such material. The reason for this is given by the fact that such content is often prohibited in certain environments (e.g., schools and workplaces) or for certain publics (e.g., children). In recent years, many works have been mainly focused on detecting pornographic images and videos based on visual content, particularly on the detection of skin color. Although these approaches provide good results, they generally have the disadvantage of a high false positive rate since not all images with large areas of skin exposure are necessarily pornographic images, such as people wearing swimsuits or images related to sports. Local feature based approaches with Bag-of-Words models (BoW) have been successfully applied to visual recognition tasks in the context of pornography detection. Even though existing methods provide promising results, they use local feature descriptors that require a high computational processing time yielding high-dimensional vectors. In this work, we propose an approach for pornography detection based on local binary feature extraction and BossaNova image representation, a BoW model extension that preserves more richly the visual information. Moreover, we propose two approaches for video description based on the combination of mid-level representations namely BossaNova Video Descriptor (BNVD) and BoW Video Descriptor (BoW-VD). The proposed techniques are promising, achieving an accuracy of 92.40%, thus reducing the classification error by 16% over the current state-of-the-art local features approach on the Pornography dataset.",
"A considerable number of the most-subscribed YouTube channels feature content popular among children of very young age. Hundreds of toddler-oriented channels on YouTube offer inoffensive, well produced, and educational videos. Unfortunately, inappropriate (disturbing) content that targets this demographic is also common. YouTube's algorithmic recommendation system regrettably suggests inappropriate content because some of it mimics or is derived from otherwise appropriate content. Considering the risk for early childhood development, and an increasing trend in toddler's consumption of YouTube media, this is a worrying problem. While there are many anecdotal reports of the scale of the problem, there is no systematic quantitative measurement. Hence, in this work, we develop a classifier able to detect toddler-oriented inappropriate content on YouTube with 82.8% accuracy, and we leverage it to perform a first-of-its-kind, large-scale, quantitative characterization that reveals some of the risks of YouTube media consumption by young children. Our analysis indicates that YouTube's currently deployed counter-measures are ineffective in terms of detecting disturbing videos in a timely manner. Finally, using our classifier, we assess how prominent the problem is on YouTube, finding that young children are likely to encounter disturbing videos when they randomly browse the platform starting from benign videos.",
"Abstract Over the past two decades, the nature of child pornography in terms of generation, distribution and possession of images drastically changed, evolving from basically covert and offline exchanges of content to a massive network of contacts and data sharing. Nowadays, the internet has become not only a transmission channel but, probably, a child pornography enabling factor by itself. As a consequence, most countries worldwide consider a crime to take, or permit to be taken, to store or to distribute images or videos depicting any child pornography grammar. But before action can even be taken, we must detect the very existence or presence of sexually exploitative imagery of children when gleaning over vast troves of data. With this backdrop, veering away from virtually all off-the-shelf solutions and existing methods in the literature, in this work, we leverage cutting-edge data-driven concepts and deep convolutional neural networks (CNNs) to harness enough characterization aspects from a wide range of images and point out the presence of child pornography content in an image. We explore different transfer-learning strategies for CNN modeling. CNNs are first trained with problems for which we can gather more training examples and upon which there are no serious concerns regarding collection and storage and then fine-tuned with data from the target problem of interest. The learned networks outperform different existing solutions and seem to represent an important step forward when dealing with child pornography content detection. The proposed solutions are encapsulated in a sandbox virtual machine ready for deployment by experts and practitioners. Experimental results with tens of thousands of real cases show the effectiveness of the proposed methods."
]
} |
1904.08643 | 2936646008 | Style transfer is a problem of rendering a content image in the style of another style image. A natural and common practical task in applications of style transfer is to adjust the strength of stylization. The algorithm of Gatys et al. (2016) provides this ability by changing the weighting factors of content and style losses but is computationally inefficient. Real-time style transfer introduced by Johnson et al. (2016) enables fast stylization of any image by passing it through a pre-trained transformer network. Although fast, this architecture is not able to continuously adjust style strength. We propose an extension to real-time style transfer that allows direct control of style strength at inference, still requiring only a single transformer network. We conduct qualitative and quantitative experiments that demonstrate that the proposed method is capable of smooth stylization strength control and removes certain stylization artifacts appearing in the original real-time style transfer method. Comparisons with alternative real-time style transfer algorithms, capable of adjusting stylization strength, show that our method reproduces style with more details. | The task of rendering an image in a given style, also known as style transfer and non-photorealistic rendering, is a long-studied problem in computer vision. Earlier approaches @cite_11 @cite_18 @cite_12 mainly targeted reproduction of specific styles (such as pencil drawings or oil paintings) and used hand-crafted features for that. Later work of Gatys et al. @cite_13 proposed a style transfer algorithm based on the deep convolutional neural network VGG @cite_3 . This algorithm was not tied to a specific style. Instead the style was specified by a separate style image. The key discovery of Gatys et al. was to find representations for content and style based on activations inside the convolutional neural network. 
Thus content image could produce target content representation and style image could produce target style representation and for any image we could measure its deviation in style and content, captured by content and style losses. Proposed approach was to find an image giving minimal weighted sum of the content and style loss. Stylization strength was possible by adjusting the weighting factor besides the style loss. | {
"cite_N": [
"@cite_18",
"@cite_3",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"1594772103",
"1686810756",
"2475287302",
"2288799730",
""
],
"abstract": [
"In the past decade, the field of non-photorealistic computer graphics (NPR) has developed as the product of research marked by diverse and sometimes divergent assumptions, approaches, and aims. This book is the first to offer a systematic assessment of this work, identifying and exploring the underlying principles that have given the field its cohesion. In the course of this assessment, the authors provide detailed accounts of today's major non-photorealistic algorithms, along with the background information and implementation advice you need to put them to productive use. As NPR finds new applications in a broadening array of fields, Non-Photorealistic Computer Graphics is destined to be the standard reference for researchers and practitioners alike.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.",
"Non-photorealistic rendering (NPR) is a combination of computer graphics and computer vision that produces renderings in various artistic, expressive or stylized ways such as painting and drawing. This book focuses on image and video based NPR, where the input is a 2D photograph or a video rather than a 3D model. 2D NPR techniques have application in areas as diverse as consumer and professional digital photography and visual effects for TV and film production. The book covers the full range of the state of the art of NPR with every chapter authored by internationally renowned experts in the field, covering both classical and contemporary techniques. It will enable both graduate students in computer graphics, computer vision or image processing and professional developers alike to quickly become familiar with contemporary techniques, enabling them to apply 2D NPR algorithms in their own projects.",
""
]
} |
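The weighted content-plus-style objective described in the related_work field above can be illustrated with a small sketch. This is a toy illustration, not the actual method of the cited papers: the feature maps are random arrays standing in for VGG activations, and `style_weight` is the weighting factor whose value controls stylization strength.

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, pixels) activation map; the Gram matrix
    # captures channel-wise feature correlations used as a style statistic.
    c, n = features.shape
    return features @ features.T / (c * n)

def style_transfer_loss(gen_feat, content_feat, style_feat, style_weight):
    # Content loss: feature-wise MSE against the content image's features.
    content_loss = np.mean((gen_feat - content_feat) ** 2)
    # Style loss: MSE between Gram matrices of generated and style features.
    style_loss = np.mean((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2)
    # Weighted sum; a larger style_weight pushes optimization toward the style.
    return content_loss + style_weight * style_loss

# Toy (channels, pixels) feature maps standing in for network activations.
rng = np.random.default_rng(0)
gen = rng.standard_normal((4, 16))
content = rng.standard_normal((4, 16))
style = rng.standard_normal((4, 16))

# Increasing the style weighting factor increases the stylization pressure.
weak = style_transfer_loss(gen, content, style, style_weight=1.0)
strong = style_transfer_loss(gen, content, style, style_weight=100.0)
```

In the optimization-based setting, an image minimizing this objective is found by gradient descent per input, which is what makes the approach slow; the feed-forward variants discussed in the next row bake a fixed `style_weight` into training instead.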
1904.08643 | 2936646008 | Style transfer is a problem of rendering a content image in the style of another style image. A natural and common practical task in applications of style transfer is to adjust the strength of stylization. Algorithm of (2016) provides this ability by changing the weighting factors of content and style losses but is computationally inefficient. Real-time style transfer introduced by (2016) enables fast stylization of any image by passing it through a pre-trained transformer network. Although fast, this architecture is not able to continuously adjust style strength. We propose an extension to real-time style transfer that allows direct control of style strength at inference, still requiring only a single transformer network. We conduct qualitative and quantitative experiments that demonstrate that the proposed method is capable of smooth stylization strength control and removes certain stylization artifacts appearing in the original real-time style transfer method. Comparisons with alternative real-time style transfer algorithms, capable of adjusting stylization strength, show that our method reproduces style with more details. | However, this algorithm required computationally expensive optimization, taking several minutes even on modern GPUs. To overcome this issue, @cite_2 and @cite_7 proposed to train a transformer network for fast stylization. The content image was simply passed through the transformer network for stylization. The network was trained using a weighted sum of the content and style losses. Thus it was tied to a specific style, with stylization strength fixed in the loss function, and modification of stylization strength at inference time was not possible. | {
"cite_N": [
"@cite_7",
"@cite_2"
],
"mid": [
"2331128040",
"2295130376"
],
"abstract": [
"We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.",
"recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to , but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions."
]
} |