1703.10106
2604906478
We address human action recognition from multi-modal video data involving articulated pose and RGB frames and propose a two-stream approach. The pose stream is processed with a convolutional model taking as input a 3D tensor holding data from a sub-sequence. A specific joint ordering, which respects the topology of the human body, ensures that different convolutional layers correspond to meaningful levels of abstraction. The raw RGB stream is handled by a spatio-temporal soft-attention mechanism conditioned on features from the pose network. An LSTM network receives input from a set of image locations at each instant. A trainable glimpse sensor extracts features on a set of predefined locations specified by the pose stream, namely the 4 hands of the two people involved in the activity. Appearance features give important cues on hand motion and on objects held in each hand. We show that it is of high interest to shift the attention to different hands at different time steps depending on the activity itself. Finally a temporal attention mechanism learns how to fuse LSTM features over time. We evaluate the method on 3 datasets. State-of-the-art results are achieved on the largest dataset for human activity recognition, namely NTU-RGB+D, as well as on the SBU Kinect Interaction dataset. Performance close to state-of-the-art is achieved on the smaller MSR Daily Activity 3D dataset.
Soft attention, on the other hand, takes the entire input into account, weighting each part of the observations dynamically. The objective function is usually differentiable, making gradient-based optimization possible. Soft attention has been used for applications such as neural machine translation @cite_29 @cite_3 and image captioning @cite_31 . More recently, it has been proposed for image @cite_45 and video understanding @cite_38 @cite_11 @cite_24 , with spatial, temporal and spatio-temporal variants. Sharma et al. @cite_38 proposed a recurrent mechanism for action recognition from RGB data, which integrates convolutional features from different parts of a space-time volume. A temporal recurrent attention model has been reported for dense labelling of videos @cite_24 : at each time step, multiple input frames are integrated and soft predictions are generated for multiple frames. Bazzani et al. @cite_10 learn spatial saliency maps represented by mixtures of Gaussians, whose parameters are included in the internal state of an LSTM network; the saliency maps are then used to smoothly select areas with relevant human motion. Song et al. @cite_11 propose separate spatial and temporal attention networks for action recognition from pose: at each frame, the spatial attention model gives more importance to the joints most relevant to the current action, whereas the temporal model selects frames.
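As a rough illustration of the soft-attention mechanism described above, the following numpy sketch weights a set of location features by a differentiable softmax score. The dot-product scoring, the feature dimensions, and the use of a recurrent hidden state as the query are illustrative assumptions, not the implementation of any cited model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_attention(features, query):
    """Return a context vector: a dynamic, differentiable weighting
    of every input location, rather than a hard selection of one.

    features: (L, D) array of D-dim features at L locations.
    query:    (D,) vector, e.g. a recurrent hidden state.
    """
    scores = features @ query      # (L,) unnormalized relevance scores
    weights = softmax(scores)      # non-negative, sums to 1
    return weights @ features      # convex combination of the features

rng = np.random.default_rng(0)
feats = rng.standard_normal((7, 16))  # 7 locations, 16-dim features
h = rng.standard_normal(16)           # hypothetical LSTM state as query
ctx = soft_attention(feats, h)
```

Because every location contributes with a smooth weight, the whole pipeline stays differentiable, which is what makes gradient-based training of such attention models possible.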
{ "cite_N": [ "@cite_38", "@cite_29", "@cite_3", "@cite_24", "@cite_45", "@cite_31", "@cite_10", "@cite_11" ], "mid": [ "2963750390", "", "", "2952835694", "1923211482", "2950178297", "2313180542", "2950568498" ], "abstract": [ "MOTIVATION Attention based models have been shown to achieve promising results on several challenging tasks, including caption generation [9], machine translation [1], game-playing and tracking [4]. Attention based models can potentially infer the action happening in videos by focusing only on the relevant places in each video frame. Soft-attention models are deterministic and can be trained using backpropagation. We propose a soft-attention based model for action recognition in videos. We use multi-layered Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM). Our model tends to recognize important elements in video frames based on the activities it detects.", "", "", "Every moment counts in action recognition. A comprehensive understanding of human activity in video requires labeling every frame according to the actions occurring, placing multiple labels densely over a video sequence. To study this problem we extend the existing THUMOS dataset and introduce MultiTHUMOS, a new dataset of dense labels over unconstrained internet videos. Modeling multiple, dense labels benefits from temporal relations within and across classes. We define a novel variant of long short-term memory (LSTM) deep networks for modeling these temporal relations via multiple input and output connections. We show that this model improves action labeling accuracy and further enables deeper understanding tasks ranging from structured retrieval to action prediction.", "Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding in the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. 
In this paper we focus on the case where the input also has a rich structure and the input and output structures are somehow related. We describe systems that learn to attend to different places in the input, for each element of the output, for a variety of tasks: machine translation, image caption generation, video clip description, and speech recognition . All these systems are based on a shared set of building blocks: gated recurrent neural networks and convolutional neural networks , along with trained attention mechanisms. We report on experimental results with these systems, showing impressively good performance and the advantage of the attention mechanism.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "In many computer vision tasks, the relevant information to solve the problem at hand is mixed to irrelevant, distracting information. This has motivated researchers to design attentional models that can dynamically focus on parts of images or videos that are salient, e.g., by down-weighting irrelevant pixels. In this work, we propose a spatiotemporal attentional model that learns where to look in a video directly from human fixation data. We model visual attention with a mixture of Gaussians at each frame. This distribution is used to express the probability of saliency for each pixel. 
Time consistency in videos is modeled hierarchically by: 1) deep 3D convolutional features to represent spatial and short-term time relations and 2) a long short-term memory network on top that aggregates the clip-level representation of sequential clips and therefore expands the temporal domain from few frames to seconds. The parameters of the proposed model are optimized via maximum likelihood estimation using human fixations as training data, without knowledge of the action in each video. Our experiments on Hollywood2 show state-of-the-art performance on saliency prediction for video. We also show that our attentional model trained on Hollywood2 generalizes well to UCF101 and it can be leveraged to improve action classification accuracy on both datasets.", "Human action recognition is an important task in computer vision. Extracting discriminative spatial and temporal features to model the spatial and temporal evolutions of different actions plays a key role in accomplishing this task. In this work, we propose an end-to-end spatial and temporal attention model for human action recognition from skeleton data. We build our model on top of the Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM), which learns to selectively focus on discriminative joints of skeleton within each frame of the inputs and pays different levels of attention to the outputs of different frames. Furthermore, to ensure effective training of the network, we propose a regularized cross-entropy loss to drive the model learning process and develop a joint training strategy accordingly. Experimental results demonstrate the effectiveness of the proposed model,both on the small human action recognition data set of SBU and the currently largest NTU dataset." ] }
1703.09938
2604352584
Analyzing multivariate time series data is important for many applications such as automated control, fault diagnosis and anomaly detection. One of the key challenges is to learn latent features automatically from dynamically changing multivariate input. In visual recognition tasks, convolutional neural networks (CNNs) have been successful to learn generalized feature extractors with shared parameters over the spatial domain. However, when high-dimensional multivariate time series is given, designing an appropriate CNN model structure becomes challenging because the kernels may need to be extended through the full dimension of the input volume. To address this issue, we present two structure learning algorithms for deep CNN models. Our algorithms exploit the covariance structure over multiple time series to partition input volume into groups. The first algorithm learns the group CNN structures explicitly by clustering individual input sequences. The second algorithm learns the group CNN structures implicitly from the error backpropagation. In experiments with two real-world datasets, we demonstrate that our group CNNs outperform existing CNN based regression methods.
The Recurrent Convolutional Neural Network (RCNN), which can be considered a variant of the CNN, was recently proposed and shows state-of-the-art performance on classifying multiple time series @cite_29 @cite_24 @cite_1 . When a small number of time series is given, multiple signals can be handled individually in a straightforward manner using pooling operators or fully connected linear operators on the signals. However, it is not clear how to explicitly model the covariance structure of a large number of sequences in deep neural network models.
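A minimal sketch of the covariance-based grouping idea from the abstract above: cluster channels whose correlation exceeds a threshold, so each group could be fed to its own small CNN instead of one kernel spanning the full input volume. The greedy thresholding below is an illustrative stand-in, not the authors' clustering procedure, and the threshold value is arbitrary:

```python
import numpy as np

def group_by_correlation(series, threshold=0.5):
    """Greedily partition channels into groups of correlated series.

    series: (N, T) array holding N time series of length T.
    Returns a list of channel-index groups.
    """
    corr = np.corrcoef(series)
    unassigned = set(range(len(series)))
    groups = []
    while unassigned:
        seed = min(unassigned)
        group = [i for i in sorted(unassigned) if abs(corr[seed, i]) >= threshold]
        unassigned -= set(group)
        groups.append(group)
    return groups

rng = np.random.default_rng(0)
latent_a = np.sin(0.1 * np.arange(200))   # shared by channels 0 and 1
latent_b = rng.standard_normal(200)       # shared by channels 2 and 3
series = np.stack([latent_a + 0.1 * rng.standard_normal(200),
                   latent_a + 0.1 * rng.standard_normal(200),
                   latent_b + 0.1 * rng.standard_normal(200),
                   latent_b + 0.1 * rng.standard_normal(200)])
groups = group_by_correlation(series)
# channels sharing a latent source end up in the same group
```

Each resulting group defines a sub-volume of the input on which a separate convolutional stack can be trained, which is the explicit variant of the structure learning the paper describes.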
{ "cite_N": [ "@cite_24", "@cite_29", "@cite_1" ], "mid": [ "1934184906", "2951277909", "2136655611" ], "abstract": [ "In recent years, the convolutional neural network (CNN) has achieved great success in many computer vision tasks. Partially inspired by neuroscience, CNN shares many properties with the visual system of the brain. A prominent difference is that CNN is typically a feed-forward architecture while in the visual system recurrent connections are abundant. Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer. Though the input is static, the activities of RCNN units evolve over time so that the activity of each unit is modulated by the activities of its neighboring units. This property enhances the ability of the model to integrate the context information, which is important for object recognition. Like other recurrent neural networks, unfolding the RCNN through time can result in an arbitrarily deep network with a fixed number of parameters. Furthermore, the unfolded network has multiple paths, which can facilitate the learning process. The model is tested on four benchmark object recognition datasets: CIFAR-10, CIFAR-100, MNIST and SVHN. With fewer trainable parameters, RCNN outperforms the state-of-the-art models on all of these datasets. Increasing the number of parameters leads to even better performance. These results demonstrate the advantage of the recurrent structure over purely feed-forward structure for object recognition.", "Scene parsing is a technique that consist on giving a label to all pixels in an image according to the class they belong to. To ensure a good visual coherence and a high class accuracy, it is essential for a scene parser to capture image long range dependencies. In a feed-forward architecture, this can be simply achieved by considering a sufficiently large input context patch, around each pixel to be labeled. 
We propose an approach consisting of a recurrent convolutional neural network which allows us to consider a large input context, while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation methods, nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.", "We present a novel convolutional auto-encoder (CAE) for unsupervised feature learning. A stack of CAEs forms a convolutional neural network (CNN). Each CAE is trained using conventional on-line gradient descent without additional regularization terms. A max-pooling layer is essential to learn biologically plausible features consistent with those found by previous approaches. Initializing a CNN with filters of a trained CAE stack yields superior performance on a digit (MNIST) and an object recognition (CIFAR10) benchmark." ] }
1703.09964
2951811868
We propose to leverage denoising autoencoder networks as priors to address image restoration problems. We build on the key observation that the output of an optimal denoising autoencoder is a local mean of the true data density, and the autoencoder error (the difference between the output and input of the trained autoencoder) is a mean shift vector. We use the magnitude of this mean shift vector, that is, the distance to the local mean, as the negative log likelihood of our natural image prior. For image restoration, we maximize the likelihood using gradient descent by backpropagating the autoencoder error. A key advantage of our approach is that we do not need to train separate networks for different image restoration tasks, such as non-blind deconvolution with different kernels, or super-resolution at different magnification factors. We demonstrate state of the art results for non-blind deconvolution and super-resolution using the same autoencoding prior.
Solving image restoration problems using neural networks is attractive because they allow for straightforward end-to-end learning. This has led to remarkable success, for example, in single image super-resolution @cite_37 @cite_10 @cite_3 @cite_18 @cite_5 and denoising @cite_29 @cite_16 . A disadvantage of end-to-end learning is that, in principle, it requires training a different network for each restoration task (e.g., each noise level or magnification factor). While a single network can be effective for denoising at different noise levels @cite_16 , and similarly a single network can perform well for different super-resolution factors @cite_5 , it seems unlikely that in non-blind deblurring the same network would work well for arbitrary blur kernels. Additionally, experiments in @cite_9 show that training a network for multiple tasks reduces performance compared to training each task on a separate network. Previous research addressing non-blind deconvolution using deep networks includes @cite_31 and, more recently, @cite_28 , but both require end-to-end training for each blur kernel.
{ "cite_N": [ "@cite_18", "@cite_37", "@cite_28", "@cite_29", "@cite_9", "@cite_3", "@cite_5", "@cite_31", "@cite_16", "@cite_10" ], "mid": [ "2345557152", "54257720", "2124964692", "2037642501", "2508457857", "1885185971", "", "1973567017", "2520164769", "2202656999" ], "abstract": [ "Single image super-resolution (SR) is an ill-posed problem, which tries to recover a high-resolution image from its low-resolution observation. To regularize the solution of the problem, previous methods have focused on designing good priors for natural images, such as sparse representation, or directly learning the priors from a large data set with models, such as deep neural networks. In this paper, we argue that domain expertise from the conventional sparse coding model can be combined with the key ingredients of deep learning to achieve further improved results. We demonstrate that a sparse coding model particularly designed for SR can be incarnated as a neural network with the merit of end-to-end optimization over training data. The network has a cascaded structure, which boosts the SR performance for both fixed and incremental scaling factors. The proposed training and testing schemes can be extended for robust handling of images with additional degradation, such as noise and blurring. A subjective assessment is conducted and analyzed in order to thoroughly evaluate various SR techniques. Our proposed model is tested on a wide range of images, and it significantly outperforms the existing state-of-the-art methods for various scaling factors both quantitatively and perceptually.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. 
We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.", "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.", "Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with a plain multi layer perceptron (MLP) applied to image patches. While this has been done before, we will show that by training on large image databases we are able to compete with the current state-of-the-art image denoising methods. 
Furthermore, our approach is easily adapted to less extensively studied types of noise (by merely exchanging the training data), for which we achieve excellent results as well.", "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. 
But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.", "", "Image deconvolution is the ill-posed problem of recovering a sharp image, given a blurry one generated by a convolution. In this work, we deal with space-invariant non-blind deconvolution. Currently, the most successful methods involve a regularized inversion of the blur in Fourier domain as a first step. This step amplifies and colors the noise, and corrupts the image information. In a second (and arguably more difficult) step, one then needs to remove the colored noise, typically using a cleverly engineered algorithm. However, the methods based on this two-step approach do not properly address the fact that the image information has been corrupted. In this work, we also rely on a two-step procedure, but learn the second step on a large dataset of natural images, using a neural network. We will show that this approach outperforms the current state-of-the-art on a large dataset of artificially blurred images. We demonstrate the practical applicability of our method in a real-world example with photographic out-of-focus blur.", "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. 
The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, The skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods.", "Most of the previous sparse coding (SC) based super resolution (SR) methods partition the image into overlapped patches, and process each patch separately. These methods, however, ignore the consistency of pixels in overlapped patches, which is a strong constraint for image reconstruction. In this paper, we propose a convolutional sparse coding (CSC) based SR (CSC-SR) method to address the consistency issue. Our CSC-SR involves three groups of parameters to be learned: (i) a set of filters to decompose the low resolution (LR) image into LR sparse feature maps, (ii) a mapping function to predict the high resolution (HR) feature maps from the LR ones, and (iii) a set of filters to reconstruct the HR images from the predicted HR feature maps via simple convolution operations. 
By working directly on the whole image, the proposed CSC-SR algorithm does not need to divide the image into overlapped patches, and can exploit the image global correlation to produce more robust reconstruction of image local structures. Experimental results clearly validate the advantages of CSC over patch based SC in SR application. Compared with state-of-the-art SR methods, the proposed CSC-SR method achieves highly competitive PSNR results, while demonstrating better edge and texture preservation performance." ] }
1703.09964
2951811868
A key idea of our work is to train a neural autoencoder that we use as a prior for image restoration. Autoencoders are typically used for unsupervised representation learning @cite_23 . The focus of these techniques lies in the descriptive strength of the learned representation, which can be used, for example, to address classification problems. In addition, generative models such as generative adversarial networks @cite_30 or variational autoencoders @cite_0 also facilitate sampling the representation to generate new data. Their network architectures usually consist of an encoder followed by a decoder, with a bottleneck in the middle that is interpreted as the data representation. The ability of autoencoders and generative models to create images from abstract representations makes them attractive for restoration problems. Notably, the encoder-decoder architecture in the image restoration work of @cite_16 is highly reminiscent of autoencoder architectures, although that network is trained in a supervised manner.
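The encoder-decoder-with-bottleneck shape described above can be sketched in a few lines. The layer sizes and the tanh nonlinearity are arbitrary illustrative choices, and the weights here are random rather than trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 64-dim input squeezed through an 8-dim bottleneck.
W_enc = 0.1 * rng.standard_normal((8, 64))
W_dec = 0.1 * rng.standard_normal((64, 8))

def encode(x):
    """Map the input to the low-dimensional bottleneck code."""
    return np.tanh(W_enc @ x)

def decode(z):
    """Reconstruct the input from the bottleneck code."""
    return W_dec @ z

x = rng.standard_normal(64)
z = encode(x)        # the "data representation" in the middle
x_hat = decode(z)    # reconstruction produced by the decoder
```

Training would minimize a reconstruction loss such as ||x_hat - x||^2; generative variants additionally impose a distribution on z so that new samples can be decoded into new data.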
{ "cite_N": [ "@cite_30", "@cite_16", "@cite_0", "@cite_23" ], "mid": [ "2099471712", "2520164769", "", "2145094598" ], "abstract": [ "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. 
First, The skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods.", "", "We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations." ] }
1703.09964
2951811868
A denoising autoencoder @cite_13 is an autoencoder trained to reconstruct data that was corrupted with noise. Previously, Alain and Bengio @cite_2 and @cite_42 used DAEs to construct generative models. We are inspired by the insight of Alain and Bengio that the output of an optimal DAE is a local mean of the true data density. Hence, the autoencoder error (the difference between its output and input) is a mean shift vector @cite_25 . This motivates using the magnitude of the autoencoder error as our prior.
{ "cite_N": [ "@cite_42", "@cite_13", "@cite_25", "@cite_2" ], "mid": [ "2951140085", "2025768430", "2067191022", "2614634292" ], "abstract": [ "Generating high-resolution, photo-realistic images has been a long-standing goal in machine learning. Recently, (2016) showed one interesting way to synthesize novel images by performing gradient ascent in the latent space of a generator network to maximize the activations of one or multiple neurons in a separate classifier network. In this paper we extend this method by introducing an additional prior on the latent code, improving both sample quality and sample diversity, leading to a state-of-the-art generative model that produces high quality images at higher resolutions (227x227) than previous generative models, and does so for all 1000 ImageNet categories. In addition, we provide a unified probabilistic interpretation of related activation maximization methods and call the general class of models \"Plug and Play Generative Networks\". PPGNs are composed of 1) a generator network G that is capable of drawing a wide range of image types and 2) a replaceable \"condition\" network C that tells the generator what to draw. We demonstrate the generation of images conditioned on a class (when C is an ImageNet or MIT Places classification network) and also conditioned on a caption (when C is an image captioning network). Our method also improves the state of the art of Multifaceted Feature Visualization, which generates the set of synthetic inputs that activate a neuron in order to better understand how deep neural networks operate. Finally, we show that our model performs reasonably well at the task of image inpainting. 
While image models are used in this paper, the approach is modality-agnostic and can be applied to many types of data.", "Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.", "A general non-parametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure: the mean shift. For discrete data, we prove the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density. The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and the robust M-estimators; of location is also established. Algorithms for two low-level vision tasks discontinuity-preserving smoothing and image segmentation - are described as applications. In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. 
Extensive experimental results illustrate their excellent performance.", "What do auto-encoders learn about the underlying data-generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data-generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input). It contradicts previous interpretations of reconstruction error as an energy function. Unlike previous results, the theorems provided here are completely generic and do not depend on the parameterization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood because it does not involve a partition function. Finally, we show how an approximate Metropolis-Hastings MCMC can be setup to recover samples from the estimated distribution, and this is confirmed in sampling experiments." ] }
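The restoration scheme described in this record, gradient descent that backpropagates the autoencoder error as a mean-shift prior, can be sketched as follows. This is a minimal sketch under stated assumptions: `toy_dae` is a simple averaging filter standing in for a trained denoising autoencoder, and the step size `lr`, prior weight `lam`, and iteration count are illustrative, not the paper's settings.

```python
import numpy as np

def toy_dae(x):
    # Stand-in for a trained denoising autoencoder: a local averaging
    # filter. Its output approximates a local mean of the data density,
    # so x - toy_dae(x) acts as a mean-shift vector.
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(x, kernel, mode="same")

def restore(y, steps=200, lr=0.1, lam=0.5):
    # Minimize ||x - y||^2 + lam * ||x - DAE(x)||^2 by gradient descent.
    # The prior gradient is approximated by the autoencoder error itself
    # (the mean-shift vector), mirroring the paper's key observation.
    x = y.copy()
    for _ in range(steps):
        data_grad = x - y             # gradient of the data-fidelity term
        prior_grad = x - toy_dae(x)   # autoencoder error = mean shift
        x -= lr * (data_grad + lam * prior_grad)
    return x

noisy = np.array([0.0, 1.2, 0.1, 1.1, 0.0, 1.0])
clean = restore(noisy)
```

Because the same prior gradient applies regardless of the data term, the data-fidelity gradient can be swapped (e.g. for a blur operator) without retraining the network, which is the advantage the abstract highlights.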
1703.09964
2951811868
We propose to leverage denoising autoencoder networks as priors to address image restoration problems. We build on the key observation that the output of an optimal denoising autoencoder is a local mean of the true data density, and the autoencoder error (the difference between the output and input of the trained autoencoder) is a mean shift vector. We use the magnitude of this mean shift vector, that is, the distance to the local mean, as the negative log likelihood of our natural image prior. For image restoration, we maximize the likelihood using gradient descent by backpropagating the autoencoder error. A key advantage of our approach is that we do not need to train separate networks for different image restoration tasks, such as non-blind deconvolution with different kernels, or super-resolution at different magnification factors. We demonstrate state of the art results for non-blind deconvolution and super-resolution using the same autoencoding prior.
Our work has an interesting connection to the plug-and-play priors introduced by @cite_32 . They solve regularized inverse (image restoration) problems using ADMM (alternating direction method of multipliers), and they make the key observation that the optimization step involving the prior is a denoising problem that can be solved with any standard denoiser. @cite_4 leverage this framework to perform super-resolution, and they use the NCSR denoiser @cite_27 based on sparse representations. While their use of a denoiser is a consequence of ADMM, our DAE prior is motivated by its relation to the underlying data density (the distribution of natural images). Our approach leads to a different, simpler gradient descent optimization that does not rely on ADMM.
{ "cite_N": [ "@cite_27", "@cite_4", "@cite_32" ], "mid": [ "1978749115", "2512704900", "2087416986" ], "abstract": [ "Sparse representation models code an image patch as a linear combination of a few atoms chosen out from an over-complete dictionary, and they have shown promising results in various image restoration applications. However, due to the degradation of the observed image (e.g., noisy, blurred, and or down-sampled), the sparse representations by conventional models may not be accurate enough for a faithful reconstruction of the original image. To improve the performance of sparse representation-based image restoration, in this paper the concept of sparse coding noise is introduced, and the goal of image restoration turns to how to suppress the sparse coding noise. To this end, we exploit the image nonlocal self-similarity to obtain good estimates of the sparse coding coefficients of the original image, and then centralize the sparse coding coefficients of the observed image to those estimates. The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model, while our extensive experiments on various types of image restoration problems, including denoising, deblurring and super-resolution, validate the generality and state-of-the-art performance of the proposed NCSR algorithm.", "Denoising and Super-Resolution are two inverse problems that have been extensively studied. Over the years, these two tasks were treated as two distinct problems that deserve a different algorithmic solution. In this paper we wish to exploit the recently introduced Plug-and-Play Prior (PPP) approach [1] to connect between the two. Using the PPP, we turn leading denoisers into super-resolution solvers. As a case-study we demonstrate this on the NCSR algorithm, which has two variants: one for denoising and one for superresolution. 
We show that by using the NCSR denoiser, one can get equal or even better results when compared with the NCSR super-resolution.", "Model-based reconstruction is a powerful framework for solving a variety of inverse problems in imaging. In recent years, enormous progress has been made in the problem of denoising, a special case of an inverse problem where the forward model is an identity operator. Similarly, great progress has been made in improving model-based inversion when the forward model corresponds to complex physical measurements in applications such as X-ray CT, electron-microscopy, MRI, and ultrasound, to name just a few. However, combining state-of-the-art denoising algorithms (i.e., prior models) with state-of-the-art inversion methods (i.e., forward models) has been a challenge for many reasons. In this paper, we propose a flexible framework that allows state-of-the-art forward models of imaging systems to be matched with state-of-the-art priors or denoising models. This framework, which we term as Plug-and-Play priors, has the advantage that it dramatically simplifies software integration, and moreover, it allows state-of-the-art denoising methods that have no known formulation as an optimization problem to be used. We demonstrate with some simple examples how Plug-and-Play priors can be used to mix and match a wide variety of existing denoising models with a tomographic forward model, thus greatly expanding the range of possible problem solutions." ] }
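The plug-and-play splitting referenced in this record can be sketched for the simplest forward model (identity, i.e. pure denoising). This is a hedged illustration, not the cited implementation: `toy_denoiser`, the penalty `rho`, and the iteration count are placeholder assumptions, and the point is that any off-the-shelf denoiser can be substituted into the z-update.

```python
import numpy as np

def toy_denoiser(v, strength=0.5):
    # Placeholder denoiser: a moving average blended with the input.
    # In plug-and-play ADMM this slot accepts any standard denoiser.
    kernel = np.array([1.0, 1.0, 1.0]) / 3.0
    smoothed = np.convolve(v, kernel, mode="same")
    return (1 - strength) * v + strength * smoothed

def pnp_admm(y, rho=1.0, iters=50):
    # Plug-and-play ADMM for an identity forward model: split the
    # problem min_x ||x - y||^2 + prior(x) into an x-update (quadratic
    # proximal step), a z-update handled by the plugged-in denoiser,
    # and a scaled dual-variable update.
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1 + rho)  # data-term proximal step
        z = toy_denoiser(x + u)              # prior step = denoising
        u = u + x - z                        # dual update
    return x

y = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
x_hat = pnp_admm(y)
```

Replacing the quadratic x-update with one matching a blur or downsampling operator turns the same loop into non-blind deconvolution or super-resolution, which is how @cite_4 reuse a denoiser across tasks.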
1703.10065
2604763492
In daily communications, Arabs use local dialects which are hard to identify automatically using conventional classification methods. The dialect identification task becomes even more challenging when dealing with under-resourced dialects belonging to the same country region. In this paper, we start by statistically analyzing Algerian dialects in order to capture their specificities related to prosodic information, which is extracted at the utterance level after a coarse-grained consonant-vowel segmentation. Based on these findings, we propose a Hierarchical classification approach for spoken Arabic Algerian Dialect IDentification (HADID). It takes advantage of the fact that dialects are naturally structured into a hierarchy. Within HADID, a top-down hierarchical classification is applied, in which we use Deep Neural Networks (DNNs) to build a local classifier for every parent node in the dialect hierarchy. Our framework is implemented and evaluated on an Algerian Arabic dialect corpus, with the hierarchical dialect structure deduced from historical and linguistic knowledge. The results reveal that, within HADID, the best classifier is DNNs, compared to Support Vector Machines. In addition, compared with a baseline flat classification system, HADID gives an improvement of 63.5% in terms of precision. Furthermore, the overall results demonstrate the suitability of our prosody-based HADID for speaker-independent dialect identification while requiring test utterances of less than 6 s.
Furthermore, @cite_48 employed phone labels and segmentation to constrain the acoustic models. They generated dialect models using an SVM classifier with a special kernel function, and they applied this approach to four inter-country Arabic dialects: Iraqi, Gulf, Levantine and Egyptian.
{ "cite_N": [ "@cite_48" ], "mid": [ "1515008200" ], "abstract": [ "We describe a new approach to automatic dialect and accent recognition which exceeds state-of-the-art performance in three recognition tasks. This approach improves the accuracy and substantially lower the time complexity of our earlier phoneticbased kernel approach for dialect recognition. In contrast to state-of-the-art acoustic-based systems, our approach employs phone labels and segmentation to constrain the acoustic models. Given a speaker’s utterance, we first obtain phone hypotheses using a phone recognizer and then extract GMM-supervectors for each phone type, effectively summarizing the speaker’s phonetic characteristics in a single vector of phone-type supervectors. Using these vectors, we design a kernel function that computes the phonetic similarities between pairs of utterances to train SVM classifiers to identify dialects. Comparing this approach to the state-of-the-art, we obtain a 12.9 relative improvement in EER on Arabic dialects, and a 17.9 relative improvement for American vs. Indian English dialects. We also see a 53.5 relative improvement over a GMM-UBM on American Southern vs. Non-Southern English." ] }
1703.10065
2604763492
In daily communications, Arabs use local dialects which are hard to identify automatically using conventional classification methods. The dialect identification task becomes even more challenging when dealing with under-resourced dialects belonging to the same country region. In this paper, we start by statistically analyzing Algerian dialects in order to capture their specificities related to prosodic information, which is extracted at the utterance level after a coarse-grained consonant-vowel segmentation. Based on these findings, we propose a Hierarchical classification approach for spoken Arabic Algerian Dialect IDentification (HADID). It takes advantage of the fact that dialects are naturally structured into a hierarchy. Within HADID, a top-down hierarchical classification is applied, in which we use Deep Neural Networks (DNNs) to build a local classifier for every parent node in the dialect hierarchy. Our framework is implemented and evaluated on an Algerian Arabic dialect corpus, with the hierarchical dialect structure deduced from historical and linguistic knowledge. The results reveal that, within HADID, the best classifier is DNNs, compared to Support Vector Machines. In addition, compared with a baseline flat classification system, HADID gives an improvement of 63.5% in terms of precision. Furthermore, the overall results demonstrate the suitability of our prosody-based HADID for speaker-independent dialect identification while requiring test utterances of less than 6 s.
In contrast to the other acoustic-phonetic approaches, only @cite_52 and @cite_38 have proposed ADID systems for the intra-country context. @cite_38 investigated an acoustic approach based on the i-vector method for regional accent recognition. They performed their experiments on Arabic Palestinian accents from four different regions: Jerusalem, Hebron, Nablus and Ramallah. @cite_52 , in turn, designed a GMM-UBM and an i-vector framework for accent recognition, and implemented their experiments on data spoken in three Algerian areas: the East, Center and West of Algeria.
{ "cite_N": [ "@cite_38", "@cite_52" ], "mid": [ "2183209506", "659550629" ], "abstract": [ "We attempt to automatically recognize the speaker's accent among regional Arabic Palestinian accents from four different regions of Palestine, i.e. Jerusalem (JE), Hebron (HE), Nablus (NA) and Ramallah (RA). To achieve this goal, we applied the state of the art techniques used in speaker and language identification, namely, Gaussian Mixture Model - Universal Background Model (GMM-UBM), Gaussian Mixture Model - Support Vector Machines (GMM-SVM) and I-vector framework. All of these systems were trained and tested on speech of 200 speakers. GMM-SVM and I-vector systems outperformed the baseline GMM-UBM system. The best result (accuracy of 81.5 ) was obtained by an I-vector system with 64 Gaussian components, compared to an accuracy of 73.4 achieved by human listeners on the same testing utterances.", "Volume I: An Introduction: Preface Typographical conventions and phonetic symbols Part I. Aspects of Accent: 1. Linguistic and social variability 2. Accent phonology 3. How accents differ 4. Why accents differ Part II. Sets and Systems: 5. The reference accents 6. Standard lexical sets 7. Systems: a typology Part III. Developments and Processes: 8. Residualisms 9. British prestige innovations 10. Some American innovations 11. Some further British innovations Sources and further reading References Index." ] }
1703.10065
2604763492
In daily communications, Arabs use local dialects which are hard to identify automatically using conventional classification methods. The dialect identification task becomes even more challenging when dealing with under-resourced dialects belonging to the same country region. In this paper, we start by statistically analyzing Algerian dialects in order to capture their specificities related to prosodic information, which is extracted at the utterance level after a coarse-grained consonant-vowel segmentation. Based on these findings, we propose a Hierarchical classification approach for spoken Arabic Algerian Dialect IDentification (HADID). It takes advantage of the fact that dialects are naturally structured into a hierarchy. Within HADID, a top-down hierarchical classification is applied, in which we use Deep Neural Networks (DNNs) to build a local classifier for every parent node in the dialect hierarchy. Our framework is implemented and evaluated on an Algerian Arabic dialect corpus, with the hierarchical dialect structure deduced from historical and linguistic knowledge. The results reveal that, within HADID, the best classifier is DNNs, compared to Support Vector Machines. In addition, compared with a baseline flat classification system, HADID gives an improvement of 63.5% in terms of precision. Furthermore, the overall results demonstrate the suitability of our prosody-based HADID for speaker-independent dialect identification while requiring test utterances of less than 6 s.
@cite_26 designed an approach based on the i-vector method that combines phonetic and lexical features. They performed their experiments on an Arabic broadcast speech database covering four Arabic dialects: Egyptian, Gulf, Levantine, and North African.
{ "cite_N": [ "@cite_26" ], "mid": [ "2248508985" ], "abstract": [ "We investigate different approaches for dialect identification in Arabic broadcast speech, using phonetic, lexical features obtained from a speech recognition system, and acoustic features using the i-vector framework. We studied both generative and discriminate classifiers, and we combined these features using a multi-class Support Vector Machine (SVM). We validated our results on an Arabic English language identification task, with an accuracy of 100 . We used these features in a binary classifier to discriminate between Modern Standard Arabic (MSA) and Dialectal Arabic, with an accuracy of 100 . We further report results using the proposed method to discriminate between the five most widely used dialects of Arabic: namely Egyptian, Gulf, Levantine, North African, and MSA, with an accuracy of 52 . We discuss dialect identification errors in the context of dialect code-switching between Dialectal Arabic and MSA, and compare the error pattern between manually labeled data, and the output from our classifier. We also release the train and test data as standard corpus for dialect identification." ] }
1703.10065
2604763492
In daily communications, Arabs use local dialects which are hard to identify automatically using conventional classification methods. The dialect identification task becomes even more challenging when dealing with under-resourced dialects belonging to the same country region. In this paper, we start by statistically analyzing Algerian dialects in order to capture their specificities related to prosodic information, which is extracted at the utterance level after a coarse-grained consonant-vowel segmentation. Based on these findings, we propose a Hierarchical classification approach for spoken Arabic Algerian Dialect IDentification (HADID). It takes advantage of the fact that dialects are naturally structured into a hierarchy. Within HADID, a top-down hierarchical classification is applied, in which we use Deep Neural Networks (DNNs) to build a local classifier for every parent node in the dialect hierarchy. Our framework is implemented and evaluated on an Algerian Arabic dialect corpus, with the hierarchical dialect structure deduced from historical and linguistic knowledge. The results reveal that, within HADID, the best classifier is DNNs, compared to Support Vector Machines. In addition, compared with a baseline flat classification system, HADID gives an improvement of 63.5% in terms of precision. Furthermore, the overall results demonstrate the suitability of our prosody-based HADID for speaker-independent dialect identification while requiring test utterances of less than 6 s.
Another work on ADID was proposed by @cite_20 , which combined the prosodic and phonotactic approaches. In fact, they augmented their phonotactic system, described above, by adding prosodic features such as durations and fundamental frequency measured at the n-gram level, where the grams are syllables. They tested their system on four Arabic dialects: Gulf, Iraqi, Levantine, and Egyptian.
{ "cite_N": [ "@cite_20" ], "mid": [ "1657486103" ], "abstract": [ "While Modern Standard Arabic is the formal spoken and written language of the Arab world, dialects are the major communication mode for everyday life; identifying a speaker’s dialect is thus critical to speech processing tasks such as automatic speech recognition, as well as speaker identification. We examine the role of prosodic features (intonation and rhythm) across four Arabic dialects: Gulf, Iraqi, Levantine, and Egyptian, for the purpose of automatic dialect identification. We show that prosodic features can significantly improve identification, over a purely phonotactic-based approach, with an identification accuracy of 86.33 for 2m utterances." ] }
1703.09788
2952132648
The potential for agents, whether embodied or software, to learn by observing other agents performing procedures involving objects and actions is rich. Current research on automatic procedure learning heavily relies on action labels or video subtitles, even during the evaluation phase, which makes them infeasible in real-world scenarios. This leads to our question: can the human-consensus structure of a procedure be learned from a large set of long, unconstrained videos (e.g., instructional videos from YouTube) with only visual evidence? To answer this question, we introduce the problem of procedure segmentation--to segment a video procedure into category-independent procedure segments. Given that no large-scale dataset is available for this problem, we collect a large-scale procedure segmentation dataset with procedure segments temporally localized and described; we use cooking videos and name the dataset YouCook2. We propose a segment-level recurrent network for generating procedure segments by modeling the dependencies across segments. The generated segments can be used as pre-processing for other tasks, such as dense video captioning and event parsing. We show in our experiments that the proposed model outperforms competitive baselines in procedure segmentation.
Approaches to action detection, especially recent ones based on action proposals @cite_16 , inspire our idea of segmenting video by proposing segment candidates. Early works on action detection mainly use sliding windows to propose segments @cite_11 @cite_12 . More recently, @cite_34 propose a multi-stage convolutional network called Segment CNN (SCNN) that achieves state-of-the-art performance @cite_25 . The work most similar to ours is Deep Action Proposals (DAPs) @cite_16 @cite_36 , where the model predicts the likelihood that a proposal contains an action, whereas in our case it predicts segment proposals. DAPs determines fixed proposal locations by clustering over the ground-truth segments, while our model learns to localize procedures with anchor offsets, generalizing location patterns from training to testing instead of transferring them directly.
{ "cite_N": [ "@cite_36", "@cite_16", "@cite_34", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2963916161", "2519328139", "2394849137", "", "", "2084341401" ], "abstract": [ "Most natural videos contain numerous events. For example, in a video of a “man playing a piano”, the video might also contain “another man dancing” or “a crowd clapping”. We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization.", "Object proposals have contributed significantly to recent advances in object understanding in images. Inspired by the success of this approach, we introduce Deep Action Proposals (DAPs), an effective and efficient algorithm for generating temporal action proposals from long videos. We show how to take advantage of the vast capacity of deep learning models and memory cells to retrieve from untrimmed videos temporal segments, which are likely to contain actions. 
A comprehensive evaluation indicates that our approach outperforms previous work on a large scale action benchmark, runs at 134 FPS making it practical for large-scale scenarios, and exhibits an appealing ability to generalize, i.e. to retrieve good quality temporal proposals of actions unseen in training.", "We address temporal action localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in temporal action localization via three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments in a long video that may contain actions; (2) a classification network learns one-vs-all action classification model to serve as initialization for the localization network; and (3) a localization network fine-tunes on the learned classification network to localize each action instance. We propose a novel loss function for the localization network to explicitly consider temporal overlap and therefore achieve high temporal localization accuracy. Only the proposal network and the localization network are used during prediction. On two large-scale benchmarks, our approach achieves significantly superior performances compared with other state-of-the-art systems: mAP increases from 1.7 to 7.4 on MEXaction2 and increases from 15.0 to 19.0 on THUMOS 2014, when the overlap threshold for evaluation is set to 0.5.", "", "", "We address the problem of localizing actions, such as opening a door, in hours of challenging video data. We propose a model based on a sequence of atomic action units, termed \"actoms,\" that are semantically meaningful and characteristic for the action. 
Our actom sequence model (ASM) represents an action as a sequence of histograms of actom-anchored visual features, which can be seen as a temporally structured extension of the bag-of-features. Training requires the annotation of actoms for action examples. At test time, actoms are localized automatically based on a nonparametric model of the distribution of actoms, which also acts as a prior on an action's temporal structure. We present experimental results on two recent benchmarks for action localization \"Coffee and Cigarettes\" and the \"DLSBP\" dataset. We also adapt our approach to a classification-by-localization set-up and demonstrate its applicability on the challenging \"Hollywood 2\" dataset. We show that our ASM method outperforms the current state of the art in temporal action localization, as well as baselines that localize actions with a sliding window method." ] }
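The anchor-plus-offset localization contrasted with DAPs' fixed proposal locations in the related-work paragraph above can be illustrated with a minimal decoding step: a predefined temporal anchor is refined by predicted center and length offsets. The parameterization (relative center shift, log-scale length) follows common anchor-based detectors, and the numeric offsets here are made up for illustration.

```python
import math

def decode_proposal(anchor_center, anchor_length, d_center, d_length):
    # Refine an anchor (center, length) with predicted offsets: the
    # center shifts proportionally to the anchor length, and the
    # length is scaled multiplicatively via an exponential, so
    # proposals are not tied to fixed clustered locations.
    center = anchor_center + d_center * anchor_length
    length = anchor_length * math.exp(d_length)
    return (center - length / 2.0, center + length / 2.0)

def temporal_iou(a, b):
    # Temporal intersection-over-union between two (start, end) segments.
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

# One anchor covering seconds 40-60, refined by made-up offsets:
# shift right by 10% of the anchor length, grow the length by 1.5x.
seg = decode_proposal(anchor_center=50.0, anchor_length=20.0,
                      d_center=0.1, d_length=math.log(1.5))
```

In training, a regressor would be supervised to emit `(d_center, d_length)` for anchors matched to ground-truth segments, typically scored by temporal IoU as computed above.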
1703.09788
2952132648
The potential for agents, whether embodied or software, to learn by observing other agents performing procedures involving objects and actions is rich. Current research on automatic procedure learning heavily relies on action labels or video subtitles, even during the evaluation phase, which makes them infeasible in real-world scenarios. This leads to our question: can the human-consensus structure of a procedure be learned from a large set of long, unconstrained videos (e.g., instructional videos from YouTube) with only visual evidence? To answer this question, we introduce the problem of procedure segmentation--to segment a video procedure into category-independent procedure segments. Given that no large-scale dataset is available for this problem, we collect a large-scale procedure segmentation dataset with procedure segments temporally localized and described; we use cooking videos and name the dataset YouCook2. We propose a segment-level recurrent network for generating procedure segments by modeling the dependencies across segments. The generated segments can be used as pre-processing for other tasks, such as dense video captioning and event parsing. We show in our experiments that the proposed model outperforms competitive baselines in procedure segmentation.
Another topic similar to ours is action segmentation or labeling @cite_15 @cite_4 @cite_0 @cite_13 , which addresses the problem of segmenting a long video into contiguous segments corresponding to a sequence of actions. Most recently, @cite_0 propose to enforce action alignment through frame-wise visual similarities. @cite_4 apply Hidden Markov Models (HMMs) to learn the likelihood of image features given hidden action states. Both methods focus on the transitions between adjacent action states, leaving long-range dependencies uncaptured. Moreover, these methods generally assume contiguous action segments with limited or no background activity between segments, yet background activities are detrimental to action localization accuracy @cite_0 . We avoid these problems with a segment proposal module followed by a segment-level dependency learning module.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_4", "@cite_13" ], "mid": [ "2951144893", "2952349270", "2530494944", "2949594863" ], "abstract": [ "We propose a weakly-supervised framework for action labeling in video, where only the order of occurring actions is required during training time. The key challenge is that the per-frame alignments between the input (video) and label (action) sequences are unknown during training. We address this by introducing the Extended Connectionist Temporal Classification (ECTC) framework to efficiently evaluate all possible alignments via dynamic programming and explicitly enforce their consistency with frame-to-frame visual similarities. This protects the model from distractions of visually inconsistent or degenerated alignments without the need of temporal supervision. We further extend our framework to the semi-supervised case when a few frames are sparsely annotated in a video. With less than 1 of labeled frames per video, our method is able to outperform existing semi-supervised approaches and achieve comparable performance to that of fully supervised approaches.", "We describe an end-to-end generative approach for the segmentation and recognition of human activities. In this approach, a visual representation based on reduced Fisher Vectors is combined with a structured temporal model for recognition. We show that the statistical properties of Fisher Vectors make them an especially suitable front-end for generative models such as Gaussian mixtures. The system is evaluated for both the recognition of complex activities as well as their parsing into action units. Using a variety of video datasets ranging from human cooking activities to animal behaviors, our experiments demonstrate that the resulting architecture outperforms state-of-the-art approaches for larger datasets, i.e. 
when sufficient amount of data is available for training structured generative models.", "Abstract We present an approach for weakly supervised learning of human actions from video transcriptions. Our system is based on the idea that, given a sequence of input data and a transcript, i.e. a list of the order the actions occur in the video, it is possible to infer the actions within the video stream and to learn the related action models without the need for any frame-based annotation. Starting from the transcript information at hand, we split the given data sequences uniformly based on the number of expected actions. We then learn action models for each class by maximizing the probability that the training video sequences are generated by the action models given the sequence order as defined by the transcripts. The learned model can be used to temporally segment an unseen video with or without transcript. Additionally, the inferred segments can be used as a starting point to train high-level fully supervised models. We evaluate our approach on four distinct activity datasets, namely Hollywood Extended, MPII Cooking, Breakfast and CRIM13. It shows that the proposed system is able to align the scripted actions with the video data, that the learned models localize and classify actions in the datasets, and that they outperform any current state-of-the-art approach for aligning transcripts with video data.", "We are given a set of video clips, each one annotated with an ordered list of actions, such as \"walk\" then \"sit\" then \"answer phone\" extracted from, for example, the associated text script. We seek to temporally localize the individual actions in each clip as well as to learn a discriminative classifier for each action. We formulate the problem as a weakly supervised temporal assignment with ordering constraints. 
Each video clip is divided into small time intervals and each time interval of each video clip is assigned one action label, while respecting the order in which the action labels appear in the given annotations. We show that the action label assignment can be determined together with learning a classifier for each action in a discriminative manner. We evaluate the proposed model on a new and challenging dataset of 937 video clips with a total of 787720 frames containing sequences of 16 different actions from 69 Hollywood movies." ] }
1703.09682
2615973800
Previous work on Ramsey multiplicity focused primarily on the multiplicity of complete graphs of constant size. We study the question for larger complete graphs, specifically showing that in every 2-coloring of a complete graph on @math vertices, there are at least @math monochromatic complete subgraphs of size between @math and @math . We also study bounds on the ratio between a maximum and a random red clique in a graph, and the ratio between a maximum and a random monochromatic clique in a graph.
The work most closely related to this paper is that of Székely @cite_12 . He showed that for large enough @math , in any @math -coloring of a complete graph on @math vertices there are at least @math monochromatic complete subgraphs, and that there is a @math -coloring with at most @math monochromatic complete subgraphs. Our Theorem improves over his lower bound. In the same work, Székely also provides upper bounds on Ramsey multiplicities for Half-Ramsey graphs. We prove lower bounds that match his upper bounds (up to low-order terms). See Corollary .
{ "cite_N": [ "@cite_12" ], "mid": [ "1999589324" ], "abstract": [ "Let us defineG(n) to be the maximum numberm such that every graph onn vertices contains at leastm homogeneous (i.e. complete or independent) subgraphs. Our main result is exp (0.7214 log2 n) ≧G(n) ≧ exp (0.2275 log2 n), the main tool is a Ramsey—Turan type theorem." ] }
1703.09682
2615973800
Previous work on Ramsey multiplicity focused primarily on the multiplicity of complete graphs of constant size. We study the question for larger complete graphs, specifically showing that in every 2-coloring of a complete graph on @math vertices, there are at least @math monochromatic complete subgraphs of size between @math and @math . We also study bounds on the ratio between a maximum and a random red clique in a graph, and the ratio between a maximum and a random monochromatic clique in a graph.
A survey on Ramsey multiplicity results was published in 1980 by Burr and Rosta @cite_11 , in which they extend Erdős's conjecture to the multiplicity of any subgraph, not just monochromatic complete subgraphs.
{ "cite_N": [ "@cite_11" ], "mid": [ "2038712425" ], "abstract": [ "Ramsey's theorem guarantees that if G is a graph, then any 2-coloring of the edges of a large enough complete graph yields a monochromatic copy of G. Interesting problems arise when one asks how many such G must occur. A survey of this and related problems is given, along with a number of new results." ] }
1703.09682
2615973800
Previous work on Ramsey multiplicity focused primarily on the multiplicity of complete graphs of constant size. We study the question for larger complete graphs, specifically showing that in every 2-coloring of a complete graph on @math vertices, there are at least @math monochromatic complete subgraphs of size between @math and @math . We also study bounds on the ratio between a maximum and a random red clique in a graph, and the ratio between a maximum and a random monochromatic clique in a graph.
The conjecture was later disproved via counterexamples in 1989 by Thomason @cite_14 , who showed that it does not hold for @math . Subsequently, several others worked on upper bounds for @math for small @math . Soon after Thomason's work, Franek and Rödl @cite_13 gave different counterexamples based on Cayley graphs for @math . Then in 1994, Jagger, Šťovíček and Thomason @cite_9 studied for which subgraphs the Burr-Rosta conjecture holds, and found that it fails for any graph containing @math as a subgraph, which is consistent with the @math result found by Thomason.
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_13" ], "mid": [ "", "2100698087", "2056469042" ], "abstract": [ "", "Counterexamples to the conjecture of Erdős and a discussion of the properties of the extremal graphs.", "Abstract Denote by k_t(G) the number of cliques of order t in the graph G. Let k_t(n) = min{ k_t(G) + k_t(Ḡ) : |G| = n }, where Ḡ denotes the complement of G, and |G| denotes the order of G. Let c_t(n) = k_t(n) / C(n,t), and let c_t = lim_{n→∞} c_t(n). An old conjecture of Erdős (1962), related to Ramsey's theorem, states that c_t = 2^{1−C(t,2)}. It was shown false by Thomason (1989) for all t ⩾ 4. We present a class of simply describable Cayley graphs which also show the falsity of Erdős's conjecture for t = 4. These graphs were found by a computer search and, although of large orders (2^10–2^14), they are rather simple and highly regular. The smallest upper bound for c_4 obtained by us is 0.976501 × 1/32, and is given by the graph on the power set of a 10-element set (and, hence, of order 2^10) determined by the configuration {1,3,4,7,8,10}, and by the graph on the power set of 11 elements (and, hence, of order 2^11) determined by the configuration {1,3,4,7,8,10,11}. It is also shown that the ratio of edges to nonedges in a sequence contradicting the conjecture for t = 4 may approach 1, unlike in the sequences of graphs Thomason used in 1989." ] }
1703.09682
2615973800
Previous work on Ramsey multiplicity focused primarily on the multiplicity of complete graphs of constant size. We study the question for larger complete graphs, specifically showing that in every 2-coloring of a complete graph on @math vertices, there are at least @math monochromatic complete subgraphs of size between @math and @math . We also study bounds on the ratio between a maximum and a random red clique in a graph, and the ratio between a maximum and a random monochromatic clique in a graph.
On the flip side, with regard to the lower bound in Inequality , in 1979 Giraud @cite_1 proved that @math . More recently, in 2012, Conlon @cite_5 proved that at least @math monochromatic complete subgraphs of size @math must exist in any @math -colouring of the edges of @math , where @math and @math is a constant independent of @math . This result is incomparable with our Theorem .
{ "cite_N": [ "@cite_5", "@cite_1" ], "mid": [ "2019438351", "2066360741" ], "abstract": [ "We show that, for n large, there must exist at least @math monochromatic K_t's in any two-colouring of the edges of K_n, where C≈2.18 is an explicitly defined constant. The old lower bound, due to Erdős [2], and based upon the standard bounds for Ramsey's theorem, is @math", "Abstract: In this article, which develops an earlier Note (Giraud, G., C. R. Acad. Sci. Paris Ser. A, 276 (1973) 1173–1175), we prove that, for n large enough, every 2-coloring of the edges of K_n contains a proportion of K_4's whose six edges all have the same color of at least 1/46, and that if the number of monochromatic triangles is minimal, this proportion rises to at least (5 − 10^{1/2})/64, up to O(n^{7/2}). We also present a method for upper-bounding the two-color Ramsey numbers which yields ϱ(6, 6) ≤ 169." ] }
1703.09682
2615973800
Previous work on Ramsey multiplicity focused primarily on the multiplicity of complete graphs of constant size. We study the question for larger complete graphs, specifically showing that in every 2-coloring of a complete graph on @math vertices, there are at least @math monochromatic complete subgraphs of size between @math and @math . We also study bounds on the ratio between a maximum and a random red clique in a graph, and the ratio between a maximum and a random monochromatic clique in a graph.
In 1995, Shearer @cite_4 used the probabilistic method to prove that @math , where @math is the size of the maximum independent set and @math is the average degree in the graph. Following his technique, Alon @cite_0 proved that for a graph in which the neighborhood of every vertex is @math -colorable, @math for some constant @math . Note that an @math -colorable graph is @math -free, since a clique can contain at most one vertex of each color.
{ "cite_N": [ "@cite_0", "@cite_4" ], "mid": [ "2078604563", "2162218992" ], "abstract": [ "Let G = (V,E) be a graph on n vertices with average degree t ≥ 1 in which for every vertex v ∈ V the induced subgraph on the set of all neighbors of v is r-colorable. We show that the independence number of G is at least c log (r+1) n t log t, for some absolute positive constant c. This strengthens a well known result of Ajtai, Komlos and Szemeredi. Combining their result with some probabilistic arguments, we prove the following Ramsey type theorem, conjectured by Erdos in 1979. There exists an absolute constant c′ > 0 so that in every graph on n vertices in which any set of b √ nc vertices contains at least one edge, there is some set of b √ nc vertices that contains at least c′ √ n log n edges.", "Let G be a regular graph of degree d on n points which contains no Kr (r ≥ 4). Let α be the independence number of G. Then we show for large d that α ≥ c(r)n ***image***. © 1995 John Wiley & Sons, Inc." ] }
1703.09682
2615973800
Previous work on Ramsey multiplicity focused primarily on the multiplicity of complete graphs of constant size. We study the question for larger complete graphs, specifically showing that in every 2-coloring of a complete graph on @math vertices, there are at least @math monochromatic complete subgraphs of size between @math and @math . We also study bounds on the ratio between a maximum and a random red clique in a graph, and the ratio between a maximum and a random monochromatic clique in a graph.
The latest improvement for @math -free graphs is due to Bansal, Gupta and Guruganesh @cite_2 , who prove that @math . A gap remains for this question, the best known upper bound being @math for @math -free graphs (also given in @cite_2 ). All three of these papers actually prove that a random independent set in @math has the given size in expectation, and then conclude that the maximum independent set must be at least that large as well.
{ "cite_N": [ "@cite_2" ], "mid": [ "2950908404" ], "abstract": [ "We consider the maximum independent set problem on graphs with maximum degree @math . We show that the integrality gap of the Lov 'asz @math -function based SDP is @math . This improves on the previous best result of @math , and almost matches the integrality gap of @math recently shown for stronger SDPs, namely those obtained using poly- @math levels of the @math semidefinite hierarchy. The improvement comes from an improved Ramsey-theoretic bound on the independence number of @math -free graphs for large values of @math . We also show how to obtain an algorithmic version of the above-mentioned @math -based integrality gap result, via a coloring algorithm of Johansson. The resulting approximation guarantee of @math matches the best unique-games-based hardness result up to lower-order poly- @math factors." ] }
1703.09529
2950725920
We present a semantic part detection approach that effectively leverages object information.We use the object appearance and its class as indicators of what parts to expect. We also model the expected relative location of parts inside the objects based on their appearance. We achieve this with a new network module, called OffsetNet, that efficiently predicts a variable number of part locations within a given object. Our model incorporates all these cues to detect parts in the context of their objects. This leads to considerably higher performance for the challenging task of part detection compared to using part appearance alone (+5 mAP on the PASCAL-Part dataset). We also compare to other part detection methods on both PASCAL-Part and CUB200-2011 datasets.
The Deformable Part Model (DPM) @cite_10 detects objects as collections of parts, which are localized based on local part appearance using HOG @cite_16 templates. Most models based on DPM @cite_10 @cite_29 @cite_50 @cite_8 @cite_44 @cite_0 @cite_12 @cite_37 @cite_41 treat a part as any image patch that is discriminative for the object class. In our work, instead, we are interested in semantic parts , i.e. object regions that are interpretable and nameable by humans (e.g. 'saddle').
{ "cite_N": [ "@cite_37", "@cite_8", "@cite_41", "@cite_29", "@cite_44", "@cite_0", "@cite_50", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "228908394", "", "", "2099528205", "2295689258", "124653583", "2111650570", "", "2168356304", "2950245566" ], "abstract": [ "We propose a clustering method that considers non-rigid alignment of samples. The motivation for such a clustering is training of object detectors that consist of multiple mixture components. In particular, we consider the deformable part model (DPM) of , where each mixture component includes a learned deformation model. We show that alignment based clustering distributes the data better to the mixture components of the DPM than previous methods. Moreover, the alignment helps the non-convex optimization of the DPM find a consistent placement of its parts and, thus, learn more accurate part filters.", "", "", "Weakly supervised discovery of common visual structure in highly variable, cluttered images is a key problem in recognition. We address this problem using deformable part-based models (DPM's) with latent SVM training [6]. These models have been introduced for fully supervised training of object detectors, but we demonstrate that they are also capable of more open-ended learning of latent structure for such tasks as scene recognition and weakly supervised object localization. For scene recognition, DPM's can capture recurring visual elements and salient objects; in combination with standard global image features, they obtain state-of-the-art results on the MIT 67-category indoor scene dataset. For weakly supervised object localization, optimization over latent DPM parameters can discover the spatial extent of objects in cluttered training images without ground-truth bounding boxes. The resulting method outperforms a recent state-of-the-art weakly supervised object localization approach on the PASCAL-07 dataset.", "Face detection using part based model becomes a new trend in Computer Vision. 
Following this trend, we propose an extension of Deformable Part Models to detect faces which increases not only precision but also speed compared with current versions of DPM. First, to reduce computation cost, we create a lookup table instead of repeatedly calculating scores in each processing step by approximating inner product between HOG features and weight vectors. Furthermore, early cascading method is also introduced to boost up speed. Second, we propose new integrated model for face representation and its score of detection. Besides, the intuitive non-maximum suppression is also proposed to get more accuracy in detecting result. We evaluate the merit of our method on the public dataset Face Detection Data Set and Benchmark (FDDB). Experimental results shows that our proposed method can significantly boost 5.5 times in speed of DPM method for face detection while achieve up to 94.64 the accuracy of the state-of-the-art technique. This leads to a promising way to combine DPM with other techniques to solve difficulties of face detection in the wild.", "We describe an implementation of the Deformable Parts Model [1] that operates in a user-defined time-frame. Our implementation uses a variety of mechanism to trade-off speed against accuracy. Our implementation can detect all 20 PASCAL 2007 objects simultaneously at 30Hz with an mAP of 0.26. At 15Hz, its mAP is 0.30; and at 100Hz, its mAP is 0.16. By comparison the reference implementation of [1] runs at 0.07Hz and mAP of 0.33 and a fast GPU implementation runs at 1Hz. Our technique is over an order of magnitude faster than the previous fastest DPM implementation. Our implementation exploits a series of important speedup mechanisms. We use the cascade framework of [3] and the vector quantization technique of [2]. To speed up feature computation, we compute HOG features at few scales, and apply many interpolated templates. 
A hierarchical vector quantization method is used to compress HOG features for fast template evaluation. An object proposal step uses hash-table methods to identify locations where evaluating templates would be most useful; these locations are inserted into a priority queue, and processed in a detection phase. Both proposal and detection phases have an any-time property. Our method applies to legacy templates, and no retraining is required.", "We propose a method to learn a diverse collection of discriminative parts from object bounding box annotations. Part detectors can be trained and applied individually, which simplifies learning and extension to new features or categories. We apply the parts to object category detection, pooling part detections within bottom-up proposed regions and using a boosted classifier with proposed sigmoid weak learners for scoring. On PASCAL VOC 2010, we evaluate the part detectors' ability to discriminate and localize annotated key points. Our detection system is competitive with the best-existing systems, outperforming other HOG-based detectors on the more deformable categories.", "", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. 
This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "The main stated contribution of the Deformable Parts Model (DPM) detector of (over the Histogram-of-Oriented-Gradients approach of Dalal and Triggs) is the use of deformable parts. A secondary contribution is the latent discriminative learning. Tertiary is the use of multiple components. A common belief in the vision community (including ours, before this study) is that their ordering of contributions reflects the performance of detector in practice. However, what we have experimentally found is that the ordering of importance might actually be the reverse. First, we show that by increasing the number of components, and switching the initialization step from their aspect-ratio, left-right flipping heuristics to appearance-based clustering, considerable improvement in performance is obtained. But more intriguingly, we show that with these new components, the part deformations can now be completely switched off, yet obtaining results that are almost on par with the original DPM detector. Finally, we also show initial results for using multiple components on a different problem -- scene classification, suggesting that this idea might have wider applications in addition to object detection." ] }
1703.09529
2950725920
We present a semantic part detection approach that effectively leverages object information.We use the object appearance and its class as indicators of what parts to expect. We also model the expected relative location of parts inside the objects based on their appearance. We achieve this with a new network module, called OffsetNet, that efficiently predicts a variable number of part locations within a given object. Our model incorporates all these cues to detect parts in the context of their objects. This leads to considerably higher performance for the challenging task of part detection compared to using part appearance alone (+5 mAP on the PASCAL-Part dataset). We also compare to other part detection methods on both PASCAL-Part and CUB200-2011 datasets.
In recent years, CNN-based representations have been quickly replacing hand-crafted features @cite_16 @cite_54 in many domains, including semantic part-based models @cite_55 @cite_25 @cite_9 @cite_5 @cite_38 @cite_26 @cite_61 @cite_34 @cite_23 @cite_53 @cite_40 @cite_63 @cite_22 @cite_6 . Our work is related to those that explicitly train CNN models to localize semantic parts using bounding-boxes @cite_9 @cite_53 @cite_27 @cite_39 , as opposed to keypoints @cite_38 @cite_63 or segmentation masks @cite_5 @cite_26 @cite_61 @cite_34 @cite_23 @cite_40 . Many of these works @cite_9 @cite_22 @cite_63 @cite_38 @cite_40 detect the parts used in their models based only on local part appearance, independently of their objects. Moreover, they use parts only as a means for object recognition or for action and attribute recognition; they are not interested in part detection itself.
{ "cite_N": [ "@cite_61", "@cite_38", "@cite_26", "@cite_22", "@cite_55", "@cite_54", "@cite_9", "@cite_53", "@cite_34", "@cite_6", "@cite_39", "@cite_40", "@cite_27", "@cite_23", "@cite_63", "@cite_5", "@cite_16", "@cite_25" ], "mid": [ "1903370114", "", "2951729963", "1928906481", "", "2151103935", "2950918464", "", "792160549", "2519599897", "1941385496", "2950557924", "", "2346977708", "2949820118", "", "", "2155394491" ], "abstract": [ "In this paper, we study the problem of semantic part segmentation for animals. This is more challenging than standard object detection, object segmentation and pose estimation tasks because semantic parts of animals often have similar appearance and highly varying shapes. To tackle these challenges, we build a mixture of compositional models to represent the object boundary and the boundaries of semantic parts. And we incorporate edge, appearance, and semantic part cues into the compositional model. Given part-level segmentation annotation, we develop a novel algorithm to learn a mixture of compositional models under various poses and viewpoints for certain animal classes. Furthermore, a linear complexity algorithm is offered for efficient inference of the compositional model using dynamic programming. We evaluate our method for horse and cow using a newly annotated dataset on Pascal VOC 2010 which has pixelwise part labels. Experimental results demonstrate the effectiveness of our method.", "", "By taking the semantic object parsing task as an exemplar application scenario, we propose the Graph Long Short-Term Memory (Graph LSTM) network, which is the generalization of LSTM from sequential data or multi-dimensional data to general graph-structured data. 
Particularly, instead of evenly and fixedly dividing an image to pixels or patches in existing multi-dimensional LSTM structures (e.g., Row, Grid and Diagonal LSTMs), we take each arbitrary-shaped superpixel as a semantically consistent node, and adaptively construct an undirected graph for each image, where the spatial relations of the superpixels are naturally used as edges. Constructed on such an adaptive graph topology, the Graph LSTM is more naturally aligned with the visual patterns in the image (e.g., object boundaries or appearance similarities) and provides a more economical information propagation route. Furthermore, for each optimization step over Graph LSTM, we propose to use a confidence-driven scheme to update the hidden and memory states of nodes progressively till all nodes are updated. In addition, for each node, the forgets gates are adaptively learned to capture different degrees of semantic correlation with neighboring nodes. Comprehensive evaluations on four diverse semantic object parsing datasets well demonstrate the significant superiority of our Graph LSTM over other state-of-the-art solutions.", "Fine-grained classification is challenging because categories can only be discriminated by subtle and local differences. Variances in the pose, scale or rotation usually make the problem more difficult. Most fine-grained classification systems follow the pipeline of finding foreground object or object parts (where) to extract discriminative features (what).", "", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. 
The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "We investigate the importance of parts for the tasks of action and attribute classification. We develop a part-based approach by leveraging convolutional network features inspired by recent advances in computer vision. Our part detectors are a deep version of poselets and capture parts of the human body under a distinct set of poses. For the tasks of action and attribute classification, we train holistic convolutional neural networks and show that adding parts leads to top-performing results for both tasks. In addition, we demonstrate the effectiveness of our approach when we replace an oracle person detector, as is the default in the current evaluation protocol for both tasks, with a state-of-the-art person detection system.", "", "Segmenting semantic objects from images and parsing them into their respective semantic parts are fundamental steps towards detailed object understanding in computer vision. In this paper, we propose a joint solution that tackles semantic object and part segmentation simultaneously, in which higher object-level context is provided to guide part segmentation, and more detailed part-level localization is utilized to refine object segmentation. 
Specifically, we first introduce the concept of semantic compositional parts (SCP) in which similar semantic parts are grouped and shared among different objects. A two-channel fully convolutional network (FCN) is then trained to provide the SCP and object potentials at each pixel. At the same time, a compact set of segments can also be obtained from the SCP predictions of the network. Given the potentials and the generated segments, in order to explore long-range context, we finally construct an efficient fully connected conditional random field (FCRF) to jointly predict the final object and part labels. Extensive evaluation on three different datasets shows that our approach can mutually enhance the performance of object and part segmentation, and outperforms the current state-of-the-art on both tasks.", "We propose a technique to train semantic part-based models of object classes from Google Images. Our models encompass the appearance of parts and their spatial arrangement on the object, specific to each viewpoint. We learn these rich models by collecting training instances for both parts and objects, and automatically connecting the two levels. Our framework works incrementally, by learning from easy examples first, and then gradually adapting to harder ones. A key benefit of this approach is that it requires no manual part location annotations. We evaluate our models on the challenging PASCAL-Part dataset [1] and show how their performance increases at every step of the learning, with the final models more than doubling the performance of directly training from images retrieved by querying for part names (from 12.9 to 27.2 AP). Moreover, we show that our part models can help object detection performance by enriching the R-CNN detector with parts.", "Learning models for object detection is a challenging problem due to the large intra-class variability of objects in appearance, viewpoints, and rigidity. 
We address this variability by a novel feature pooling method that is adaptive to segmented regions. The proposed detection algorithm automatically discovers a diverse set of exemplars and their distinctive parts which are used to encode the region structure by the proposed feature pooling method. Based on each exemplar and its parts, a regression model is learned with samples selected by a coarse region matching scheme. The proposed algorithm performs favorably on the PASCAL VOC 2007 dataset against existing algorithms. We demonstrate the benefits of our feature pooling method when compared to conventional spatial pyramid pooling features. We also show that object information can be transferred through exemplars for detected objects.", "In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99 on the challenging FDDB benchmark, outperforming the state-of-the-art method by a large margin of 2.91 . Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variation, which are the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of DCN, our network can achieve practical runtime speed.", "", "Parsing articulated objects, e.g. humans and animals, into semantic parts (e.g. body, head and arms, etc.) from natural images is a challenging and fundamental problem for computer vision. A big difficulty is the large variability of scale and location for objects and their corresponding parts. 
Even limited mistakes in estimating scale and location will degrade the parsing output and cause errors in boundary details. To tackle these difficulties, we propose a \"Hierarchical Auto-Zoom Net\" (HAZN) for object part parsing which adapts to the local scales of objects and parts. HAZN is a sequence of two \"Auto-Zoom Net\" (AZNs), each employing fully convolutional networks that perform two tasks: (1) predict the locations and scales of object instances (the first AZN) or their parts (the second AZN); (2) estimate the part scores for predicted object instance or part regions. Our model can adaptively \"zoom\" (resize) predicted image regions into their proper scales to refine the parsing. We conduct extensive experiments over the PASCAL part datasets on humans, horses, and cows. For humans, our approach significantly outperforms the state-of-the-arts by 5% mIOU and is especially better at segmenting small instances and small parts. We obtain similar improvements for parsing cows and horses over alternative methods. In summary, our strategy of first zooming into objects and then zooming into parts is very effective. It also enables us to process different regions of the image at different scales adaptively so that, for example, we do not need to waste computational resources scaling the entire image.", "Part models of object categories are essential for challenging recognition tasks, where differences in categories are subtle and only reflected in appearances of small parts of the object. We present an approach that is able to learn part models in a completely unsupervised manner, without part annotations and even without given bounding boxes during learning. The key idea is to find constellations of neural activation patterns computed using convolutional neural networks.
In our experiments, we outperform existing approaches for fine-grained recognition on the CUB200-2011, NA birds, Oxford PETS, and Oxford Flowers dataset in case no part or bounding box annotations are available and achieve state-of-the-art performance for the Stanford Dog dataset. We also show the benefits of neural constellation models as a data augmentation technique for fine-tuning. Furthermore, our paper unites the areas of generic and fine-grained classification, since our approach is suitable for both scenarios. The source code of our method is available online at this http URL", "", "", "We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact the local image measurements can be used both to detect parts (or joints) and also to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms the state of the art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training." ] }
1703.09529
2950725920
We present a semantic part detection approach that effectively leverages object information. We use the object appearance and its class as indicators of what parts to expect. We also model the expected relative location of parts inside the objects based on their appearance. We achieve this with a new network module, called OffsetNet, that efficiently predicts a variable number of part locations within a given object. Our model incorporates all these cues to detect parts in the context of their objects. This leads to considerably higher performance for the challenging task of part detection compared to using part appearance alone (+5 mAP on the PASCAL-Part dataset). We also compare to other part detection methods on both PASCAL-Part and CUB200-2011 datasets.
Several fine-grained recognition works @cite_11 @cite_35 @cite_39 use nearest-neighbors to transfer part location annotations from training objects to test objects. They do not perform object detection, as ground-truth object bounding-boxes are used at both training and test time. Here, instead, at test time we jointly detect objects and their semantic parts.
{ "cite_N": [ "@cite_35", "@cite_39", "@cite_11" ], "mid": [ "66901128", "1941385496", "2157035885" ], "abstract": [ "In this paper, we propose a novel part-pair representation for part localization. In this representation, an object is treated as a collection of part pairs to model its shape and appearance. By changing the set of pairs to be used, we are able to impose either stronger or weaker geometric constraints on the part configuration. As for the appearance, we build pair detectors for each part pair, which model the appearance of an object at different levels of granularities. Our method of part localization exploits the part-pair representation, featuring the combination of non-parametric exemplars and parametric regression models. Non-parametric exemplars help generate reliable part hypotheses from very noisy pair detections. Then, the regression models are used to group the part hypotheses in a flexible way to predict the part locations. We evaluate our method extensively on the dataset CUB-200-2011 [32], where we achieve significant improvement over the state-of-the-art method on bird part localization. We also experiment with human pose estimation, where our method produces comparable results to existing works.", "Learning models for object detection is a challenging problem due to the large intra-class variability of objects in appearance, viewpoints, and rigidity. We address this variability by a novel feature pooling method that is adaptive to segmented regions. The proposed detection algorithm automatically discovers a diverse set of exemplars and their distinctive parts which are used to encode the region structure by the proposed feature pooling method. Based on each exemplar and its parts, a regression model is learned with samples selected by a coarse region matching scheme. The proposed algorithm performs favorably on the PASCAL VOC 2007 dataset against existing algorithms. 
We demonstrate the benefits of our feature pooling method when compared to conventional spatial pyramid pooling features. We also show that object information can be transferred through exemplars for detected objects.", "Current object recognition systems can only recognize a limited number of object categories; scaling up to many categories is the next challenge. We seek to build a system to recognize and localize many different object categories in complex scenes. We achieve this through a simple approach: by matching the input image, in an appropriate representation, to images in a large training set of labeled images. Due to regularities in object identities across similar scenes, the retrieved matches provide hypotheses for object identities and locations. We build a probabilistic model to transfer the labels from the retrieval set to the input image. We demonstrate the effectiveness of this approach and study algorithm component contributions using held-out test sets from the LabelMe database." ] }
1703.09603
2949344592
Committing to a version control system means submitting a software change to the system. Each commit can have a message to describe the submission. Several approaches have been proposed to automatically generate the content of such messages. However, the quality of the automatically generated messages falls far short of what humans write. In studying the differences between auto-generated and human-written messages, we found that 82% of the human-written messages have only one sentence, while the automatically generated messages often have multiple lines. Furthermore, we found that the commit messages often begin with a verb followed by a direct object. This finding inspired us to use a "verb+object" format in this paper to generate short commit summaries. We split the approach into two parts: verb generation and object generation. As our first try, we trained a classifier to classify a diff to a verb. We are seeking feedback from the community before we continue to work on generating direct objects for the commits.
Several empirical studies about commit messages have been conducted for commit classification and commit message generation @cite_8 @cite_9 @cite_0 @cite_3 @cite_2 . For example, @cite_8 manually inspected the existing release notes before they designed an approach to generate release notes automatically. Buse and Weimer @cite_9 conducted a similar manual inspection for automatic commit message generation. Like these previous studies, our exploratory data analysis aims to gain insights for our approach of generating commit messages. Different from the previous studies, we used natural language processing (NLP) techniques, which help us to mine information from the existing commit messages automatically and confirm hypotheses on a large data set. Besides manual inspection, the previous studies also computed the sizes of commit messages and analyzed the messages as bags of words @cite_3 @cite_2 . In contrast, we are able to conduct grammar analysis on the commit messages. The grammar analysis led to a key finding that shaped our approach.
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_3", "@cite_0", "@cite_2" ], "mid": [ "1997322537", "2057049321", "2018638699", "2150198410", "2146648240" ], "abstract": [ "This paper introduces ARENA (Automatic RElease Notes generAtor), an approach for the automatic generation of release notes. ARENA extracts changes from the source code, summarizes them, and integrates them with information from versioning systems and issue trackers. It was designed based on the manual analysis of 1,000 existing release notes. To evaluate the quality of the ARENA release notes, we performed three empirical studies involving a total of 53 participants (45 professional developers and 8 students). The results indicate that the ARENA release notes are very good approximations of those produced by developers and often include important information that is missing in the manually produced release notes.", "Source code modifications are often documented with log messages. Such messages are a key component of software maintenance: they can help developers validate changes, locate and triage defects, and understand modifications. However, this documentation can be burdensome to create and can be incomplete or inaccurate. We present an automatic technique for synthesizing succinct human-readable documentation for arbitrary program differences. Our algorithm is based on a combination of symbolic execution and a novel approach to code summarization. The documentation it produces describes the effect of a change on the runtime behavior of a program, including the conditions under which program behavior changes and what the new behavior is. We compare our documentation to 250 human-written log messages from 5 popular open source projects. 
Employing a human study, we find that our generated documentation is suitable for supplementing or replacing 89% of existing log messages that directly describe a code change.", "Information contained in versioning system commits has been frequently used to support software evolution research. Concomitantly, some researchers have tried to relate commits to certain activities, e.g., large commits are more likely to be originated from code management activities, while small ones are related to development activities. However, these characterizations are vague, because there is no consistent definition of what is a small or a large commit. In this paper, we study the nature of commits in two dimensions. First, we define the size of commits in terms of number of files, and then we classify commits based on the content of their comments. To perform this study, we use the history log of nine large open source projects.", "The research examines the version histories of nine open source software systems to uncover trends and characteristics of how developers commit source code to version control systems (e.g., subversion). The goal is to characterize what a typical or normal commit looks like with respect to the number of files, number of lines, and number of hunks committed together. The results of these three characteristics are presented and the commits are categorized from extra small to extra large. The findings show that approximately 75% of commits are quite small for the systems examined along all three characteristics. Additionally, the commit messages are examined along with the characteristics. The most common words are extracted from the commit messages and correlated with the size categories of the commits. It is observed that sized categories can be indicative of the types of maintenance activities being performed.", "Large software systems undergo significant evolution during their lifespan, yet often individual changes are not well documented.
In this work, we seek to automatically classify large changes into various categories of maintenance tasks — corrective, adaptive, perfective, feature addition, and non-functional improvement — using machine learning techniques. In a previous paper, we found that many commits could be classified easily and reliably based solely on the manual analysis of the commit metadata and commit messages (i.e., without reference to the source code). Our extension is the automation of classification by training Machine Learners on features extracted from the commit metadata, such as the word distribution of a commit message, commit author, and modules modified. We validated the results of the learners via 10-fold cross validation, which achieved accuracies consistently above 50%, indicating good to fair results. We found that the identity of the author of a commit provided much information about the maintenance class of a commit, almost as much as the words of the commit message. This implies that for most large commits, the Source Control System (SCS) commit messages plus the commit author identity is enough information to accurately and automatically categorize the nature of the maintenance task." ] }
1703.09603
2949344592
Committing to a version control system means submitting a software change to the system. Each commit can have a message to describe the submission. Several approaches have been proposed to automatically generate the content of such messages. However, the quality of the automatically generated messages falls far short of what humans write. In studying the differences between auto-generated and human-written messages, we found that 82% of the human-written messages have only one sentence, while the automatically generated messages often have multiple lines. Furthermore, we found that the commit messages often begin with a verb followed by a direct object. This finding inspired us to use a "verb+object" format in this paper to generate short commit summaries. We split the approach into two parts: verb generation and object generation. As our first try, we trained a classifier to classify a diff to a verb. We are seeking feedback from the community before we continue to work on generating direct objects for the commits.
There are many empirical studies about the changes in commits @cite_6 @cite_2 @cite_0 @cite_3 . For example, one study examined change types based on a syntax differencing technique @cite_6 . Currently, we have not conducted an empirical study on the commit changes, but we plan to study the content of the diff files in the future. Instead of looking for change types, we will study whether there are overlapping words in the commit messages and their diff files and where we can locate the overlapping words in the diff files.
{ "cite_N": [ "@cite_0", "@cite_3", "@cite_6", "@cite_2" ], "mid": [ "2150198410", "2018638699", "", "2146648240" ], "abstract": [ "The research examines the version histories of nine open source software systems to uncover trends and characteristics of how developers commit source code to version control systems (e.g., subversion). The goal is to characterize what a typical or normal commit looks like with respect to the number of files, number of lines, and number of hunks committed together. The results of these three characteristics are presented and the commits are categorized from extra small to extra large. The findings show that approximately 75% of commits are quite small for the systems examined along all three characteristics. Additionally, the commit messages are examined along with the characteristics. The most common words are extracted from the commit messages and correlated with the size categories of the commits. It is observed that sized categories can be indicative of the types of maintenance activities being performed.", "Information contained in versioning system commits has been frequently used to support software evolution research. Concomitantly, some researchers have tried to relate commits to certain activities, e.g., large commits are more likely to be originated from code management activities, while small ones are related to development activities. However, these characterizations are vague, because there is no consistent definition of what is a small or a large commit. In this paper, we study the nature of commits in two dimensions. First, we define the size of commits in terms of number of files, and then we classify commits based on the content of their comments. To perform this study, we use the history log of nine large open source projects.", "", "Large software systems undergo significant evolution during their lifespan, yet often individual changes are not well documented.
In this work, we seek to automatically classify large changes into various categories of maintenance tasks — corrective, adaptive, perfective, feature addition, and non-functional improvement — using machine learning techniques. In a previous paper, we found that many commits could be classified easily and reliably based solely on the manual analysis of the commit metadata and commit messages (i.e., without reference to the source code). Our extension is the automation of classification by training Machine Learners on features extracted from the commit metadata, such as the word distribution of a commit message, commit author, and modules modified. We validated the results of the learners via 10-fold cross validation, which achieved accuracies consistently above 50%, indicating good to fair results. We found that the identity of the author of a commit provided much information about the maintenance class of a commit, almost as much as the words of the commit message. This implies that for most large commits, the Source Control System (SCS) commit messages plus the commit author identity is enough information to accurately and automatically categorize the nature of the maintenance task." ] }
1703.09784
2601295960
This paper investigates a novel task of generating texture images from perceptual descriptions. Previous work on texture generation focused on either synthesis from examples or generation from procedural models. Generating textures from perceptual attributes has not been well studied yet. Meanwhile, perceptual attributes, such as directionality, regularity and roughness are important factors for human observers to describe a texture. In this paper, we propose a joint deep network model that combines adversarial training and perceptual feature regression for texture generation, while only random noise and user-defined perceptual attributes are required as input. In this model, a preliminary trained convolutional neural network is essentially integrated with the adversarial framework, which can drive the generated textures to possess given perceptual attributes. An important aspect of the proposed model is that, if we change one of the input perceptual features, the corresponding appearance of the generated textures will also be changed. We design several experiments to validate the effectiveness of the proposed method. The results show that the proposed method can produce high quality texture images with desired perceptual properties.
Textures have attracted widespread attention in the research field of visual perception and computer vision. Prior work identified the perceptual features people use to classify textures and also established the correlation between semantic attributes and textures @cite_20 , which showed the importance of perceptual features for understanding texture images. Meanwhile, texture synthesis and texture generation have been active research areas for many years. A pixel-based method for texture synthesis with non-parametric sampling was proposed in @cite_12 , and Wei proposed an efficient algorithm using tree-structured vector quantization for realistic texture synthesis, which required only a sample texture as input @cite_9 . These studies mainly concern example-based texture synthesis, whereas our work focuses on generating textures according to user-defined perceptual attributes.
{ "cite_N": [ "@cite_9", "@cite_12", "@cite_20" ], "mid": [ "2232702494", "", "2098031085" ], "abstract": [ "Texture synthesis is important for many applications in computer graphics, vision, and image processing. However, it remains difficult to design an algorithm that is both efficient and capable of generating high quality results. In this paper, we present an efficient algorithm for realistic texture synthesis. The algorithm is easy to use and requires only a sample texture as input. It generates textures with perceived quality equal to or better than those produced by previous techniques, but runs two orders of magnitude faster. This permits us to apply texture synthesis to problems where it has traditionally been considered impractical. In particular, we have applied it to constrained synthesis for image editing and temporal texture generation. Our algorithm is derived from Markov Random Field texture models and generates textures through a deterministic searching process. We accelerate this synthesis process using tree-structured vector quantization.", "", "Abstract In this paper we present the results of two experiments. The first is on the categorization of texture words in the English language. The goal was to determine whether there is a common basis for subjects' groupings of words related to visual texture, and if so, to identify the underlying dimensions used to categorize those words. Eleven major clusters were identified through hierarchical cluster analysis, ranging from ‘random’ to ‘repetitive’. These clusters remained intact in a multidimensional scaling solution. The stress for a three-dimensional solution obtained through multidimensional scaling was 0.18, meaning that 82 of the variance in the data is explained through the use of three dimensions. It appears that the major dimensions of texture descriptors are repetitive versus nonrepetitive; linearly oriented versus circularly oriented; and simple versus complex. 
In the second experiment we measured the strength of association between texture words and texture images. The goal was to determine whether there is any systematic correspondence between the domains of texture words and texture images. Pearson's coefficient of contingency, a measure of the strength of association, was found to be 0.63 for words corresponding to given images and 0.56 for images corresponding to given words. Thus the texture categories in the verbal space and those in the visual space are strongly tied. In sum, our two experiments show (a) that despite the tremendous variety in the words we have to describe textures, there is an underlying structure to the lexical space which can be derived from the experimental data; and (b) that the association between a category of words and a category of images was strongest when both categories represent the same underlying property. This suggests that subjects' organizations of texture terms are systematically tied to their organization of texture images." ] }
1703.09575
2954977374
In this paper, we propose a framework for cross-layer optimization to ensure ultra-high reliability and ultra-low latency in radio access networks, where both transmission delay and queueing delay are considered. With short transmission time, the blocklength of channel codes is finite, and the Shannon Capacity cannot be used to characterize the maximal achievable rate with given transmission error probability. With randomly arrived packets, some packets may violate the queueing delay. Moreover, since the queueing delay is shorter than the channel coherence time in typical scenarios, the required transmit power to guarantee the queueing delay and transmission error probability will become unbounded even with spatial diversity. To ensure the required quality-of-service (QoS) with finite transmit power, a proactive packet dropping mechanism is introduced. Then, the overall packet loss probability includes transmission error probability, queueing delay violation probability, and packet dropping probability. We optimize the packet dropping policy, power allocation policy, and bandwidth allocation policy to minimize the transmit power under the QoS constraint. The optimal solution is obtained, which depends on both channel and queue state information. Simulation and numerical results validate our analysis, and show that setting packet loss probabilities equal is a near optimal solution.
While reducing latency in wireless networks is challenging, further ensuring high reliability makes the problem more intricate. To reduce the delay caused by transmission and signalling @cite_15 , a short frame structure was introduced in @cite_16 , and the TTI was set identical to the frame duration. To ensure high reliability of transmission with short frame, proper channel coding with finite blocklength is important. Fortunately, the results in @cite_26 indicate that it is possible to guarantee very low transmission error probability with short blocklength channel codes, at the expense of achievable rate reduction. By using practical coding schemes like Polar codes @cite_11 , the delays caused by transmission, signal processing and coding can be reduced.
{ "cite_N": [ "@cite_15", "@cite_16", "@cite_26", "@cite_11" ], "mid": [ "2290213472", "1509397529", "2106864314", "2061174739" ], "abstract": [ "Future generation of wireless networks, i.e. 5G, is envisioned to support several new use-cases demanding transmission reliability and latency that cannot be achieved by the current cellular networks such as long-term evolution (LTE). This paper looks at different design aspects of the control channel(s) to support ultra-reliable low-latency communication considering factory automation as an example scenario. In particular, we show that a fairly balanced design for both the uplink and the downlink control channels can be made given an appropriate selection of modulation, coding, diversity scheme, and time frequency resources. By means of link-level simulations, we also show that the proposed control channel design supports a block-error rate of 10^-9 under Rayleigh fading conditions at a signal-to-interference-plus-noise ratio comparable to that supported by current 4G systems (e.g. LTE). Furthermore, a radio frame structure is proposed to support the user plane end-to-end latency of 1 ms.", "This paper proposes a novel frame structure for the radio access interface of the next generation of mobile networks. The proposed frame structure has been designed to support multiuser spatial multiplexing, short latencies on the radio access interface, as well as mobility and small packet transmissions. The focus is on ultra dense small cell networks deployed in outdoor environments. This paper also highlights the various prospects and constraints of the proposed dense outdoor system in comparison with alternative system designs. Numerical results are included and a comparison to the Long Term Evolution (LTE) system is provided.
Results show that the proposed radio frame structure leads to an improvement of the area spectral efficiency by a factor of ~2.4 as well as a reduction of the average air interface latency by a factor of 5, thus remaining shorter than 1 millisecond.", "This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths n as short as 100. It is also shown analytically that the maximal rate achievable with error probability ε is closely approximated by C - sqrt(V/n) Q^{-1}(ε), where C is the capacity, V is a characteristic of the channel referred to as channel dispersion, and Q is the complementary Gaussian cumulative distribution function.", "Polar codes represent an emerging class of error-correcting codes with power to approach the capacity of a discrete memoryless channel. This overview article aims to illustrate its principle, generation and decoding techniques. Unlike the traditional capacity-approaching coding strategy that tries to make codes as random as possible, the polar codes follow a different philosophy, also originated by Shannon, by creating a jointly typical set. Channel polarization, a concept central to polar codes, is intuitively elaborated by a Matthew effect in the digital world, followed by a detailed overview of construction methods for polar encoding. The butterfly structure of polar codes introduces correlation among source bits, justifying the use of the SC algorithm for efficient decoding. The SC decoding technique is investigated from the conceptual and practical viewpoints. State-of-the-art decoding algorithms, such as the BP and some generalized SC decoding, are also explained in a broad framework.
Simulation results show that the performance of polar codes concatenated with CRC codes can outperform that of turbo or LDPC codes. Some promising research directions in practical scenarios are also discussed in the end." ] }
1703.09575
2954977374
In this paper, we propose a framework for cross-layer optimization to ensure ultra-high reliability and ultra-low latency in radio access networks, where both transmission delay and queueing delay are considered. With short transmission time, the blocklength of channel codes is finite, and the Shannon Capacity cannot be used to characterize the maximal achievable rate with given transmission error probability. With randomly arrived packets, some packets may violate the queueing delay. Moreover, since the queueing delay is shorter than the channel coherence time in typical scenarios, the required transmit power to guarantee the queueing delay and transmission error probability will become unbounded even with spatial diversity. To ensure the required quality-of-service (QoS) with finite transmit power, a proactive packet dropping mechanism is introduced. Then, the overall packet loss probability includes transmission error probability, queueing delay violation probability, and packet dropping probability. We optimize the packet dropping policy, power allocation policy, and bandwidth allocation policy to minimize the transmit power under the QoS constraint. The optimal solution is obtained, which depends on both channel and queue state information. Simulation and numerical results validate our analysis, and show that setting packet loss probabilities equal is a near optimal solution.
Exploiting diversity among multiple links has long been used as an effective way to improve the successful transmission probability in wireless communications. To support high reliability over fading channels, various diversity techniques have been investigated, e.g., frequency diversity and macroscopic diversity in single antenna systems @cite_44 @cite_36 and spatial diversity in multi-antenna systems @cite_31 . Simulation results using practical modulation and coding schemes in @cite_25 @cite_18 show that the required transmit power to ensure given transmission delay and reliability can be rapidly reduced when the number of antennas at a BS increases.
{ "cite_N": [ "@cite_18", "@cite_36", "@cite_44", "@cite_31", "@cite_25" ], "mid": [ "1603438237", "2188576571", "2087910933", "2289062498", "1505958606" ], "abstract": [ "Fifth generation wireless networks are currently being developed to handle a wide range of new use cases. One important emerging area is ultra-reliable communication with guaranteed low latencies well beyond what current wireless technologies can provide. In this paper, we explore the viability of using wireless communication for low-latency, high-reliability communication in an example scenario of factory automation, and outline important design choices for such a system. We show that it is possible to achieve very low error rates and latencies over a radio channel, also when considering fast fading signal and interference, channel estimation errors, and antenna correlation. The most important tool to ensure high reliability is diversity, and low latency is achieved by using short transmission intervals without retransmissions, which, however, introduces a natural restriction on coverage area.", "Revolutionary use cases for 5G, e.g., autonomous traffic or industrial automation, confront wireless network engineering with unprecedented challenges in terms of throughput, latency, and resilience. Especially, high resilience requires solutions that offer outage probabilities around 10⁻⁶ or less, which is close to carrier-grade qualities but far below what is currently possible in 3G and 4G networks. In this context, multi-connectivity is understood as a promising architecture for achieving such high resilience in 5G. In this paper, we analyze an elementary multi-connectivity solution, which utilizes macro- as well as microdiversity, and evaluate trade-offs between power consumption, link usage, and outage probability. 
To elaborate, we consider exponential path loss, log-normal shadowing, shadowing cross-correlation, and Nakagami-m small scale fading, and derive analytical models for the outage probability. An evaluation of the multi-connectivity system in a hexagonal cellular deployment reveals that optimal operating points with respect to the number of links and resources exist. Moreover, typical 5G aspects, e.g., frequent line of sight in dense networks and multiple antenna branches, are shown to have a beneficial impact (fewer links needed, more power saved) on ideal operating points and overall utility of multi-connectivity.", "Future cellular networks have to meet enormous, unprecedented, and multifaceted requirements, such as high availability and low latency, in order to provide service to new applications in, e.g., vehicular communication, smart grids, and industrial automation. Such applications often demand a temporal availability of six nines or higher. In this work, we investigate how high availability can be achieved in wireless networks. To elaborate, we focus on the joint availability of power-controlled Rayleigh-fading links while using selection combining. By applying a basic availability model for uncorrelated links, we determine whether it is more efficient in terms of power to utilize multiple links in parallel rather than boosting the power of a stand-alone link. The results reveal that, for high availability, it can actually be beneficial to use multiple links in parallel. For instance, an availability of 1 − 10⁻¹² is achieved with 100 dB less power when power is shared among multiple links. Depending upon the availability desired, an optimal number of parallel links in terms of power consumption exists. Additionally, we extend the availability model to correlated links and investigate the performance degradation due to correlation.", "Ultra-reliable communications over wireless will open the possibility for a wide range of novel use cases and applications. 
In cellular networks, achieving reliable communication is challenging due to many factors, particularly the fading of the desired signal and the interference. In this regard, we investigate the potential of several techniques to combat these main threats. The analysis shows that traditional microscopic multiple-input multiple-output schemes with 2x2 or 4x4 antenna configurations are not enough to fulfil stringent reliability requirements. It is revealed how such antenna schemes must be complemented with macroscopic diversity as well as interference management techniques in order to ensure the necessary SINR outage performance. Based on the obtained performance results, it is discussed which of the feasible options fulfilling the ultra-reliable criteria are most promising in a practical setting, as well as pointers to supplementary techniques that should be included in future studies.", "The fifth generation (5G) of cellular networks is starting to be defined to meet the wireless connectivity demands for 2020 and beyond. One area that is considered increasingly important is the capability to provide ultra-reliable and low-latency communication, to enable e.g., new mission-critical machine-type communication use cases. One such example with extremely demanding requirements is the industrial automation with a need for ultra-low latency with a high degree of determinism. In this paper, we discuss the feasibility, requirements and design challenges of an OFDM based 5G radio interface that is suitable for mission-critical MTC. The discussion is further accompanied with system-level performance evaluations that are carried out for a factory hall-wide automation scenario with two different floor layouts." ] }
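The diversity gains discussed in the cited works can be made concrete with a standard textbook model: under selection combining over independent Rayleigh-fading links, each link's instantaneous SNR is exponentially distributed, so the joint outage probability decays geometrically in the number of links. A minimal sketch (the threshold and average-SNR numbers are illustrative assumptions, not values from the cited papers):

```python
import math

def outage_selection_combining(gamma_th, gamma_avg, n_links):
    """Outage probability with selection combining over n_links i.i.d.
    Rayleigh-fading links: the combiner is in outage only when every
    link's instantaneous SNR falls below the threshold gamma_th."""
    # For Rayleigh fading the per-link SNR is exponential with mean gamma_avg.
    p_single = 1.0 - math.exp(-gamma_th / gamma_avg)
    return p_single ** n_links

# Illustrative numbers: threshold SNR of 0 dB, average SNR of 10 dB (linear scale).
p1 = outage_selection_combining(gamma_th=1.0, gamma_avg=10.0, n_links=1)
p4 = outage_selection_combining(gamma_th=1.0, gamma_avg=10.0, n_links=4)
```

This is the same mechanism behind the "optimal number of parallel links" trade-off studied in @cite_44 : each extra link multiplies the outage probability by a factor below one, at the cost of extra power and resources.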
1703.09575
2954977374
In this paper, we propose a framework for cross-layer optimization to ensure ultra-high reliability and ultra-low latency in radio access networks, where both transmission delay and queueing delay are considered. With short transmission time, the blocklength of channel codes is finite, and the Shannon Capacity cannot be used to characterize the maximal achievable rate with given transmission error probability. With randomly arrived packets, some packets may violate the queueing delay. Moreover, since the queueing delay is shorter than the channel coherence time in typical scenarios, the required transmit power to guarantee the queueing delay and transmission error probability will become unbounded even with spatial diversity. To ensure the required quality-of-service (QoS) with finite transmit power, a proactive packet dropping mechanism is introduced. Then, the overall packet loss probability includes transmission error probability, queueing delay violation probability, and packet dropping probability. We optimize the packet dropping policy, power allocation policy, and bandwidth allocation policy to minimize the transmit power under the QoS constraint. The optimal solution is obtained, which depends on both channel and queue state information. Simulation and numerical results validate our analysis, and show that setting packet loss probabilities equal is a near optimal solution.
Based on the achievable rate of a single antenna system with finite blocklength channel codes derived in @cite_26 , the queueing delay was analyzed in @cite_20 @cite_29 . For applications with medium delay and reliability requirements, the throughput subject to statistical queueing constraints was studied in @cite_20 , where the effective capacity was derived by using the achievable rate with finite blocklength channel codes, and an automatic repeat-request (ARQ) mechanism was employed to improve reliability. An energy-efficient packet scheduling policy was optimized in @cite_29 to ensure a strict deadline by assuming that packet arrival times and instantaneous channel gains are known, while the deadline violation probability under the transmit power constraint was not studied.
{ "cite_N": [ "@cite_29", "@cite_26", "@cite_20" ], "mid": [ "", "2106864314", "2102261801" ], "abstract": [ "", "This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths n as short as 100. It is also shown analytically that the maximal rate achievable with error probability ε is closely approximated by C − √(V/n) Q⁻¹(ε), where C is the capacity, V is a characteristic of the channel referred to as channel dispersion, and Q is the complementary Gaussian cumulative distribution function.", "In this paper, a single point-to-point wireless link operating under queueing constraints in the form of limitations on the buffer violation probabilities is considered. The achievable throughput under such constraints is captured by the effective capacity formulation. It is assumed that finite blocklength codes are employed for transmission. Under this assumption, a recent result on the channel coding rate in the finite blocklength regime is incorporated into the analysis, and the throughput achieved with such codes in the presence of queueing constraints and decoding errors is identified. The performance of different transmission strategies (e.g., variable-rate, variable-power, and fixed-rate transmissions) is studied. Interactions and tradeoffs between the throughput, queueing constraints, coding blocklength, decoding error probabilities, and signal-to-noise ratio are investigated, and several conclusions with important practical implications are drawn." ] }
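The normal approximation quoted in @cite_26 , C − √(V/n) Q⁻¹(ε), is straightforward to evaluate numerically. A small sketch for the real AWGN channel, whose dispersion V = SNR(SNR+2)/(2(SNR+1)²) · log₂²e is standard; the SNR, blocklength, and error-probability values below are illustrative assumptions:

```python
import math
from statistics import NormalDist

def q_inv(eps):
    """Inverse of the Gaussian Q-function, Q(x) = P(N(0,1) > x)."""
    return -NormalDist().inv_cdf(eps)

def achievable_rate(snr, n, eps):
    """Normal approximation C - sqrt(V/n) * Q^{-1}(eps) of the maximal
    rate (bits per channel use) at blocklength n and error probability
    eps, evaluated for a real AWGN channel."""
    cap = math.log2(1.0 + snr)  # Shannon capacity
    # Channel dispersion of the AWGN channel, in squared bits per channel use.
    disp = (snr * (snr + 2.0)) / (2.0 * (snr + 1.0) ** 2) * math.log2(math.e) ** 2
    return cap - math.sqrt(disp / n) * q_inv(eps)

# The finite-blocklength penalty shrinks as the blocklength n grows.
r_short = achievable_rate(snr=10.0, n=100, eps=1e-5)
r_long = achievable_rate(snr=10.0, n=10**8, eps=1e-5)
```

This makes the gap between Shannon capacity and the short-blocklength rate explicit, which is exactly why capacity cannot characterize the maximal achievable rate in the short-transmission-time regime discussed above.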
1703.09779
2516918313
Deep Neural Networks are becoming the de-facto standard models for image understanding, and more generally for computer vision tasks. As they involve highly parallelizable computations, Convolutional Neural Networks (CNNs) are well suited to current fine grain programmable logic devices. Thus, multiple CNN accelerators have been successfully implemented on Field-Programmable Gate Arrays (FPGAs). Unfortunately, FPGA resources such as logic elements or Digital Signal Processing (DSP) units remain limited. This work presents a holistic method relying on approximate computing and design space exploration to optimize the DSP block utilization of a CNN implementation on FPGA. This method was tested when implementing a reconfigurable Optical Character Recognition (OCR) convolutional neural network on an Altera Stratix V device and varying both data representation and CNN topology in order to find the best combination in terms of DSP block utilization and classification accuracy. This exploration generated dataflow architectures of 76 CNN topologies with 5 different fixed point representations. The most efficient implementation performs 883 classifications/sec at 256 × 256 resolution using 8 of the available DSP blocks.
The first attempt was in 1996 with vip @cite_3 : an fpga-based simd processor for image processing and neural networks. However, since fpga devices at that time were very constrained in terms of resources and logic elements, vip performance was quite limited.
{ "cite_N": [ "@cite_3" ], "mid": [ "2164701340" ], "abstract": [ "We present in this paper the architecture and implementation of the Virtual Image Processor (VIP), which is an SIMD multiprocessor built with large FPGAs. The SIMD architecture, together with a 2D torus connection topology, is well suited for image processing, pattern recognition and neural network algorithms. The VIP board can be programmed on-line at the logic level, allowing optimal hardware dedication to any given algorithm." ] }
1703.09779
2516918313
Deep Neural Networks are becoming the de-facto standard models for image understanding, and more generally for computer vision tasks. As they involve highly parallelizable computations, Convolutional Neural Networks (CNNs) are well suited to current fine grain programmable logic devices. Thus, multiple CNN accelerators have been successfully implemented on Field-Programmable Gate Arrays (FPGAs). Unfortunately, FPGA resources such as logic elements or Digital Signal Processing (DSP) units remain limited. This work presents a holistic method relying on approximate computing and design space exploration to optimize the DSP block utilization of a CNN implementation on FPGA. This method was tested when implementing a reconfigurable Optical Character Recognition (OCR) convolutional neural network on an Altera Stratix V device and varying both data representation and CNN topology in order to find the best combination in terms of DSP block utilization and classification accuracy. This exploration generated dataflow architectures of 76 CNN topologies with 5 different fixed point representations. The most efficient implementation performs 883 classifications/sec at 256 × 256 resolution using 8 of the available DSP blocks.
Nowadays, fpga devices embed many more logic elements and hundreds of hardwired MAC operators ( dsp blocks). State-of-the-art designs take advantage of this improvement in order to implement an efficient feed-forward propagation of a cnn . Based on @cite_20 , and to our knowledge, the best state-of-the-art performance for feed-forward cnn acceleration on an fpga was achieved by Ovtcharov in @cite_7 , with a reported classification throughput of 134 images/second on ImageNet 1K @cite_8 . Such a system was implemented on a Stratix V D5 device and outperformed most state-of-the-art implementations such as @cite_18 @cite_5 @cite_4 . Most of these designs are fpga-based accelerators with a relatively similar architecture of parallel processing elements associated with soft-cores or embedded hardware processors running a software layer.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_8", "@cite_5", "@cite_20" ], "mid": [ "2009832130", "1990315422", "2272300165", "2618530766", "2117696986", "" ], "abstract": [ "Convolutional neural networks (CNN) applications range from recognition and reasoning (such as handwriting recognition, facial expression recognition and video surveillance) to intelligent text applications such as semantic text analysis and natural language processing applications. Two key observations drive the design of a new architecture for CNN. First, CNN workloads exhibit a widely varying mix of three types of parallelism: parallelism within a convolution operation, intra-output parallelism where multiple input sources (features) are combined to create a single output, and inter-output parallelism where multiple, independent outputs (features) are computed simultaneously. Workloads differ significantly across different CNN applications, and across different layers of a CNN. Second, the number of processing elements in an architecture continues to scale (as per Moore's law) much faster than off-chip memory bandwidth (or pin-count) of chips. Based on these two observations, we show that for a given number of processing elements and off-chip memory bandwidth, a new CNN hardware architecture that dynamically configures the hardware on-the-fly to match the specific mix of parallelism in a given workload gives the best throughput performance. Our CNN compiler automatically translates high abstraction network specification into a parallel microprogram (a sequence of low-level VLIW instructions) that is mapped, scheduled and executed by the coprocessor. Compared to a 2.3 GHz quad-core, dual socket Intel Xeon, 1.35 GHz C870 GPU, and a 200 MHz FPGA implementation, our 120 MHz dynamically configurable architecture is 4x to 8x faster. 
This is the first CNN architecture to achieve real-time video stream processing (25 to 30 frames per second) on a wide range of object detection and recognition tasks.", "Convolutional Networks (ConvNets) are biologically-inspired hierarchical architectures that can be trained to perform a variety of detection, recognition and segmentation tasks. ConvNets have a feed-forward architecture consisting of multiple linear convolution filters interspersed with pointwise non-linear squashing functions. This paper presents an efficient implementation of ConvNets on a low-end DSP-oriented Field Programmable Gate Array (FPGA). The implementation exploits the inherent parallelism of ConvNets and takes full advantage of multiple hardware multiply-accumulate units on the FPGA. The entire system uses a single FPGA with an external memory module, and no extra parts. A network compiler software was implemented, which takes a description of a trained ConvNet and compiles it into a sequence of instructions for the ConvNet Processor (CNP). A ConvNet face detection system was implemented and tested. Face detection on a 512 × 384 frame takes 100ms (10 frames per second), which corresponds to an average performance of 3.4×10⁹ connections per second for this 340 million connection network. The design can be used for low-power, lightweight embedded vision systems for micro-UAVs and other small robots.", "Recent breakthroughs in the development of multi-layer convolutional neural networks have led to state-of-the-art improvements in the accuracy of non-trivial recognition tasks such as large-category image classification and automatic speech recognition [1]. These many-layered neural networks are large, complex, and require substantial computing resources to train and evaluate [2]. Unfortunately, these demands come at an inopportune moment due to the recent slowing of gains in commodity processor performance. 
Hardware specialization in the form of GPGPUs, FPGAs, and ASICs offers a promising path towards major leaps in processing capability while achieving high energy efficiency. To harness specialization, an effort is underway at Microsoft to accelerate Deep Convolutional Neural Networks (CNN) using servers augmented with FPGAs—similar to the hardware that is being integrated into some of Microsoft’s datacenters [3]. Initial efforts to implement a single-node CNN accelerator on a mid-range FPGA show significant promise, resulting in respectable performance relative to prior FPGA designs and high-end GPGPUs, at a fraction of the power. In the future, combining multiple FPGAs over a low-latency communication fabric offers further opportunity to train and evaluate models of unprecedented size and quality. Background State-of-the-art deep convolutional neural networks are typically organized into alternating convolutional and max-pooling neural network layers followed by a number of dense, fully-connected layers—as illustrated in the well-known topology by in Figure 1 [1]. Each 3D volume represents an input to a layer, and is transformed into a new 3D volume feeding the subsequent layer. In the example below, there are five convolutional layers, three max-pooling layers, and three fully-connected layers. Figure 1. Example of Deep Convolutional Neural Network for Image Classification. Image source: [1]. 1 General Purpose Computing on Graphics Processing Units, Field Programmable Gate Arrays, Application-Specific Integrated Circuits.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. 
The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.", "In the near future, cameras will be used everywhere as flexible sensors for numerous applications. For mobility and privacy reasons, the required image processing should be local on embedded computer platforms with performance requirements and energy constraints. Dedicated acceleration of Convolutional Neural Networks (CNN) can achieve these targets with enough flexibility to perform multiple vision tasks. A challenging problem for the design of efficient accelerators is the limited amount of external memory bandwidth. We show that the effects of the memory bottleneck can be reduced by a flexible memory hierarchy that supports the complex data access patterns in CNN workload. The efficiency of the on-chip memories is maximized by our scheduler that uses tiling to optimize for data locality. Our design flow ensures that on-chip memory size is minimized, which reduces area and energy usage. The design flow is evaluated by a High Level Synthesis implementation on a Virtex 6 FPGA board. Compared to accelerators with standard scratchpad memories the FPGA resources can be reduced up to 13× while maintaining the same performance. Alternatively, when the same amount of FPGA resources is used our accelerators are up to 11× faster.", "" ] }
1703.09779
2516918313
Deep Neural Networks are becoming the de-facto standard models for image understanding, and more generally for computer vision tasks. As they involve highly parallelizable computations, Convolutional Neural Networks (CNNs) are well suited to current fine grain programmable logic devices. Thus, multiple CNN accelerators have been successfully implemented on Field-Programmable Gate Arrays (FPGAs). Unfortunately, FPGA resources such as logic elements or Digital Signal Processing (DSP) units remain limited. This work presents a holistic method relying on approximate computing and design space exploration to optimize the DSP block utilization of a CNN implementation on FPGA. This method was tested when implementing a reconfigurable Optical Character Recognition (OCR) convolutional neural network on an Altera Stratix V device and varying both data representation and CNN topology in order to find the best combination in terms of DSP block utilization and classification accuracy. This exploration generated dataflow architectures of 76 CNN topologies with 5 different fixed point representations. The most efficient implementation performs 883 classifications/sec at 256 × 256 resolution using 8 of the available DSP blocks.
In @cite_19 , an analytical design scheme using the roofline model and loop tiling is used to propose an implementation where the attainable computation roof of the FPGA is reached. This loop tiling optimization is performed on C code, which is then implemented in floating point on a Virtex 7 485T using the Vivado HLS tool. Our approach is different as it generates a purely dataflow architecture where topologies and fixed-point representations are explored.
{ "cite_N": [ "@cite_19" ], "mid": [ "2094756095" ], "abstract": [ "Convolutional neural network (CNN) has been widely employed for image recognition because it can achieve high accuracy by emulating behavior of optic nerves in living creatures. Recently, rapid growth of modern applications based on deep learning algorithms has further improved research and implementations. Especially, various accelerators for deep CNN have been proposed based on FPGA platform because it has advantages of high performance, reconfigurability, and fast development round, etc. Although current FPGA accelerators have demonstrated better performance over generic processors, the accelerator design space has not been well exploited. One critical problem is that the computation throughput may not well match the memory bandwidth provided an FPGA platform. Consequently, existing approaches cannot achieve best performance due to under-utilization of either logic resource or memory bandwidth. At the same time, the increasing complexity and scalability of deep learning applications aggravate this problem. In order to overcome this problem, we propose an analytical design scheme using the roofline model. For any solution of a CNN design, we quantitatively analyze its computing throughput and required memory bandwidth using various optimization techniques, such as loop tiling and transformation. Then, with the help of the roofline model, we can identify the solution with best performance and lowest FPGA resource requirement. As a case study, we implement a CNN accelerator on a VC707 FPGA board and compare it to previous approaches. Our implementation achieves a peak performance of 61.62 GFLOPS under 100MHz working frequency, which outperforms previous approaches significantly." ] }
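The roofline analysis used in @cite_19 bounds attainable throughput by the minimum of the compute roof and the product of memory bandwidth and operational intensity. A minimal sketch of that bound (the peak and bandwidth numbers are illustrative assumptions, not the VC707 figures from the cited work):

```python
def roofline_attainable(peak_gflops, mem_bw_gb_s, intensity_flop_per_byte):
    """Attainable throughput (GFLOP/s) under the roofline model: the
    minimum of the compute roof and the memory-traffic bound."""
    return min(peak_gflops, mem_bw_gb_s * intensity_flop_per_byte)

def ridge_point(peak_gflops, mem_bw_gb_s):
    """Operational intensity above which a kernel becomes compute-bound."""
    return peak_gflops / mem_bw_gb_s

# Illustrative numbers: a 100 GFLOP/s compute roof with 10 GB/s of bandwidth.
low = roofline_attainable(100.0, 10.0, 5.0)    # memory-bound region
high = roofline_attainable(100.0, 10.0, 20.0)  # compute-bound region
```

Loop tiling in @cite_19 effectively moves a design's operational intensity toward the ridge point, so that neither logic resources nor memory bandwidth sit idle.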
1703.09856
2949196542
This paper introduces a new approach to automatically quantify the severity of knee OA using X-ray images. Automatically quantifying knee OA severity involves two steps: first, automatically localizing the knee joints; next, classifying the localized knee joint images. We introduce a new approach to automatically detect the knee joints using a fully convolutional neural network (FCN). We train convolutional neural networks (CNN) from scratch to automatically quantify the knee OA severity optimizing a weighted ratio of two loss functions: categorical cross-entropy and mean-squared loss. This joint training further improves the overall quantification of knee OA severity, with the added benefit of naturally producing simultaneous multi-class classification and regression outputs. Two public datasets are used to evaluate our approach, the Osteoarthritis Initiative (OAI) and the Multicenter Osteoarthritis Study (MOST), with extremely promising results that outperform existing approaches.
Recently, convolutional neural networks (CNNs) have outperformed many methods based on hand-crafted features, and they are highly successful in many computer vision tasks such as image recognition, automatic detection and segmentation, content-based image retrieval, and video classification. CNNs learn effective feature representations particularly well suited for fine-grained classification @cite_9 , such as the classification of knee OA images. In our previous study @cite_1 , we showed that off-the-shelf CNNs such as the VGG 16-layer network @cite_8 , the VGG-M-128 network @cite_12 , and the BVLC reference CaffeNet @cite_5 @cite_18 trained on the ImageNet LSVRC dataset @cite_14 can be fine-tuned for classifying knee OA images through transfer learning. We also argued that it is appropriate to assess knee OA severity using a continuous metric like mean-squared error instead of binary or multi-class classification accuracy, and showed that predicting the continuous grades through regression reduces the mean-squared error and in turn improves the overall quantification.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_9", "@cite_1", "@cite_5", "@cite_12" ], "mid": [ "", "2117539524", "1686810756", "", "2521048164", "2155893237", "1994002998" ], "abstract": [ "", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. 
We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "", "This paper proposes a new approach to automatically quantify the severity of knee osteoarthritis (OA) from radiographs using deep convolutional neural networks (CNN). Clinically, knee OA severity is assessed using Kellgren & Lawrence (KL) grades, a five point scale. Previous work on automatically predicting KL grades from radiograph images were based on training shallow classifiers using a variety of hand engineered features. We demonstrate that classification accuracy can be significantly improved using deep convolutional neural network models pre-trained on ImageNet and fine-tuned on knee OA images. Furthermore, we argue that it is more appropriate to assess the accuracy of automatic knee OA severity predictions using a continuous distance-based evaluation metric like mean squared error than it is to use classification accuracy. This leads to the formulation of the prediction of KL grades as a regression problem and further improves accuracy. Results on a dataset of X-ray images and KL grades from the Osteoarthritis Initiative (OAI) show a sizable improvement over the current state-of-the-art.", "Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU (approx 2 ms per image). 
By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.", "The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper is made publicly available." ] }
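The joint training objective described in the abstract above, a weighted combination of categorical cross-entropy and mean-squared loss over the KL grades, can be sketched for a single example as follows. Taking the regression output to be the softmax-weighted expected grade and using a scalar mixing weight `alpha` are illustrative assumptions made here, not the exact formulation of the cited work:

```python
import math

def joint_loss(probs, true_grade, alpha=0.5):
    """Single-example sketch of a weighted combination of categorical
    cross-entropy and mean-squared loss over grades 0..len(probs)-1.
    The expected-grade regression output and the weight alpha are
    illustrative assumptions."""
    cross_entropy = -math.log(probs[true_grade])
    expected_grade = sum(g * p for g, p in enumerate(probs))
    mse = (expected_grade - true_grade) ** 2
    return alpha * cross_entropy + (1.0 - alpha) * mse

# A uniform prediction over 5 grades is penalized only by the entropy term
# here, since its expected grade happens to equal the true grade 2.
loss_uniform = joint_loss([0.2] * 5, true_grade=2)
loss_confident = joint_loss([0.0, 0.0, 1.0, 0.0, 0.0], true_grade=2)
```

Coupling the two terms this way is what lets one network naturally produce simultaneous multi-class classification and regression outputs, as the abstract notes.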
1703.09856
2949196542
This paper introduces a new approach to automatically quantify the severity of knee OA using X-ray images. Automatically quantifying knee OA severity involves two steps: first, automatically localizing the knee joints; next, classifying the localized knee joint images. We introduce a new approach to automatically detect the knee joints using a fully convolutional neural network (FCN). We train convolutional neural networks (CNN) from scratch to automatically quantify the knee OA severity optimizing a weighted ratio of two loss functions: categorical cross-entropy and mean-squared loss. This joint training further improves the overall quantification of knee OA severity, with the added benefit of naturally producing simultaneous multi-class classification and regression outputs. Two public datasets are used to evaluate our approach, the Osteoarthritis Initiative (OAI) and the Multicenter Osteoarthritis Study (MOST), with extremely promising results that outperform existing approaches.
Previously, Shamir et al. @cite_11 proposed template matching to automatically detect and extract the knee joints. This method is slow for large datasets such as OAI, and its accuracy and precision in detecting knee joints are low. In our previous study, we introduced an SVM-based method for automatically detecting the center of the knee joints @cite_1 and extracted a fixed region around the detected center as the ROI. This method is also not highly accurate, and it compromises the aspect ratio of the extracted knee joints, which affects the overall quantification.
{ "cite_N": [ "@cite_1", "@cite_11" ], "mid": [ "2521048164", "2159988341" ], "abstract": [ "This paper proposes a new approach to automatically quantify the severity of knee osteoarthritis (OA) from radiographs using deep convolutional neural networks (CNN). Clinically, knee OA severity is assessed using Kellgren & Lawrence (KL) grades, a five point scale. Previous work on automatically predicting KL grades from radiograph images were based on training shallow classifiers using a variety of hand engineered features. We demonstrate that classification accuracy can be significantly improved using deep convolutional neural network models pre-trained on ImageNet and fine-tuned on knee OA images. Furthermore, we argue that it is more appropriate to assess the accuracy of automatic knee OA severity predictions using a continuous distance-based evaluation metric like mean squared error than it is to use classification accuracy. This leads to the formulation of the prediction of KL grades as a regression problem and further improves accuracy. Results on a dataset of X-ray images and KL grades from the Osteoarthritis Initiative (OAI) show a sizable improvement over the current state-of-the-art.", "Summary Objective To determine whether computer-based analysis can detect features predictive of osteoarthritis (OA) development in radiographically normal knees. Method A systematic computer-aided image analysis method weighted neighbor distances using a compound hierarchy of algorithms representing morphology (WND-CHARM) was used to analyze pairs of weight-bearing knee X-rays. Initial X-rays were all scored as normal Kellgren–Lawrence (KL) grade 0, and on follow-up approximately 20 years later either developed OA (defined as KL grade=2) or remained normal. 
Results The computer-aided method predicted whether a knee would change from KL grade 0 to grade 3 with 72% accuracy. Conclusion Radiographic features detectable using a computer-aided image analysis method can predict the future development of radiographic knee OA." ] }
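The joint objective used for quantifying knee OA severity — a weighted combination of categorical cross-entropy and mean-squared loss over the five KL grades — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the mixing weight `alpha` and the use of the softmax-expected grade as the regression output are ours.

```python
import numpy as np

def joint_kl_loss(logits, target_grade, alpha=0.5):
    """Toy stand-in for a joint classification + regression objective
    over the five KL grades (0-4)."""
    # softmax over the KL-grade logits
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    # categorical cross-entropy against the integer grade
    ce = -np.log(p[target_grade] + 1e-12)
    # regression term: softmax-expected grade vs. the true grade
    expected = float(np.dot(p, np.arange(len(p))))
    mse = (expected - target_grade) ** 2
    # alpha is a hypothetical mixing weight, not the paper's value
    return alpha * ce + (1.0 - alpha) * mse
```

Because the regression term grows quadratically with the distance between predicted and true grade, confusions between adjacent grades are penalized far less than distant ones, which matches the ordinal nature of the KL scale.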
1703.09145
2953206417
Large-scale variations still pose a challenge in unconstrained face detection. To the best of our knowledge, no current face detection algorithm can detect a face as large as 800 x 800 pixels while simultaneously detecting another one as small as 8 x 8 pixels within a single image with equally high accuracy. We propose a two-stage cascaded face detection framework, Multi-Path Region-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a deep neural network with a classic learning strategy, to tackle this challenge. The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes faces at three different scales. It simultaneously utilizes three parallel outputs of the convolutional feature maps to predict multi-scale candidate face regions. The "atrous" convolution trick (convolution with up-sampled filters) and a newly proposed sampling layer for "hard" examples are embedded in MP-RPN to further boost its performance. The second stage is a Boosted Forests classifier, which utilizes deep facial features pooled from inside the candidate face regions as well as deep contextual features pooled from a larger region surrounding the candidate face regions. This step is included to further remove hard negative samples. Experiments show that this approach achieves state-of-the-art face detection performance on the WIDER FACE dataset "hard" partition, outperforming the former best result by 9.6% for the Average Precision.
There are two established sets of methods for face detection, one based on deformable part models @cite_24 @cite_18 and the other on rigid templates @cite_9 @cite_5 @cite_21 @cite_10 . Prior to the resurgence of Convolutional Neural Networks (CNN) @cite_15 , both sets of methods relied on a combination of "hand-crafted" feature extractors to select facial features and classic learning methods to perform binary feature classification. Admittedly, the performance of these face detectors has been steadily improved by the use of more complex features @cite_5 @cite_21 @cite_16 or better training strategies @cite_18 @cite_9 @cite_20 . Nevertheless, using "hand-crafted" features and classic classifiers has stymied the development of seamlessly connecting feature selection and classification in a single computational process. In general, they require that many hyper-parameters be heuristically set. For example, both @cite_20 and @cite_16 needed to divide the training data into several partitions according to face poses and train a separate model for each partition.
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_21", "@cite_24", "@cite_5", "@cite_15", "@cite_16", "@cite_10", "@cite_20" ], "mid": [ "2047508432", "204612701", "", "2034025266", "1994215930", "", "", "", "2041497292" ], "abstract": [ "We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "We present a new state-of-the-art approach for face detection. The key idea is to combine face alignment with detection, observing that aligned face shapes provide better features for face classification. To make this combination more effective, our approach learns the two tasks jointly in the same cascade framework, by exploiting recent advances in face alignment. Such joint learning greatly enhances the capability of cascade detection and still retains its realtime performance. Extensive experiments show that our approach achieves the best accuracy on challenging datasets, where all existing solutions are either inaccurate or too slow.", "", "Despite the successes in the last two decades, the state-of-the-art face detectors still have problems in dealing with images in the wild due to large appearance variations. 
Instead of leaving appearance variations directly to statistical learning algorithms, we propose a hierarchical part based structural model to explicitly capture them. The model enables part subtype option to handle local appearance variations such as closed and open mouth, and part deformation to capture the global appearance variations such as pose and expression. In detection, candidate window is fitted to the structural model to infer the part location and part subtype, and detection score is then computed based on the fitted configuration. In this way, the influence of appearance variation is reduced. Besides the face model, we exploit the co-occurrence between face and body, which helps to handle large variations, such as heavy occlusions, to further boost the face detection performance. We present a phrase based representation for body detection, and propose a structural context model to jointly encode the outputs of face detector and body detector. Benefit from the rich structural face and body information, as well as the discriminative structural learning algorithm, our method achieves state-of-the-art performance on FDDB, AFW and a self-annotated dataset, under wide comparisons with commercial and academic methods. (C) 2013 Elsevier B.V. All rights reserved.
We made experiments on training face detector from large scale database. Results shows that the proposed method is able to train face detectors within one hour through scanning billions of negative samples on current personal computers. Furthermore, the built detector is comparable to the state-of-the-art algorithm not only on the accuracy but also on the processing speed.", "", "", "", "Face detection has drawn much attention in recent decades since the seminal work by Viola and Jones. While many subsequences have improved the work with more powerful learning algorithms, the feature representation used for face detection still can’t meet the demand for effectively and efficiently handling faces with large appearance variance in the wild. To solve this bottleneck, we borrow the concept of channel features to the face detection domain, which extends the image channel to diverse types like gradient magnitude and oriented gradient histograms and therefore encodes rich information in a simple form. We adopt a novel variant called aggregate channel features, make a full exploration of feature design, and discover a multiscale version of features with better performance. To deal with poses of faces in the wild, we propose a multi-view detection approach featuring score re-ranking and detection adjustment. Following the learning pipelines in ViolaJones framework, the multi-view face detector using aggregate channel features surpasses current state-of-the-art detectors on AFW and FDDB testsets, while runs at 42 FPS" ] }
1703.09145
2953206417
Large-scale variations still pose a challenge in unconstrained face detection. To the best of our knowledge, no current face detection algorithm can detect a face as large as 800 x 800 pixels while simultaneously detecting another one as small as 8 x 8 pixels within a single image with equally high accuracy. We propose a two-stage cascaded face detection framework, Multi-Path Region-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a deep neural network with a classic learning strategy, to tackle this challenge. The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes faces at three different scales. It simultaneously utilizes three parallel outputs of the convolutional feature maps to predict multi-scale candidate face regions. The "atrous" convolution trick (convolution with up-sampled filters) and a newly proposed sampling layer for "hard" examples are embedded in MP-RPN to further boost its performance. The second stage is a Boosted Forests classifier, which utilizes deep facial features pooled from inside the candidate face regions as well as deep contextual features pooled from a larger region surrounding the candidate face regions. This step is included to further remove hard negative samples. Experiments show that this approach achieves state-of-the-art face detection performance on the WIDER FACE dataset "hard" partition, outperforming the former best result by 9.6% for the Average Precision.
Deep neural networks, with their seamless integration of feature representation and pattern classification, have become the current trend among rigid-template methods for face detection. @cite_29 proposed a single Convolutional Neural Network (CNN) model based on AlexNet @cite_15 to deal with multi-view face detection. @cite_11 used a cascade of six CNNs for alternating face detection and face bounding box calibration. However, these two methods need to crop face regions and rescale them to specific sizes, which increases the complexity of training and testing. Thus they are not suitable for efficient unconstrained face detection, where faces of different scales coexist in the same image. @cite_0 proposed applying five parallel CNNs to predict five different facial parts, and then evaluating the degree of face likeliness by analyzing the spatial arrangement of facial part responses. The use of facial parts makes the face detector more robust to partial occlusions, but like DPM-based face detectors, this method can only deal with faces of relatively large size.
{ "cite_N": [ "@cite_0", "@cite_29", "@cite_11", "@cite_15" ], "mid": [ "2950557924", "1970456555", "1934410531", "" ], "abstract": [ "In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99 on the challenging FDDB benchmark, outperforming the state-of-the-art method by a large margin of 2.91 . Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variation, which are the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of DCN, our network can achieve practical runtime speed.", "In this paper we consider the problem of multi-view face detection. While there has been significant research on this problem, current state-of-the-art approaches for this task require annotation of facial landmarks, e.g. TSM [25], or annotation of face poses [28, 22]. They also require training dozens of models to fully capture faces in all orientations, e.g. 22 models in HeadHunter method [22]. In this paper we propose Deep Dense Face Detector (DDFD), a method that does not require pose landmark annotation and is able to detect faces in a wide range of orientations using a single model based on deep convolutional neural networks. The proposed method has minimal complexity; unlike other recent deep learning object detection methods [9], it does not require additional components such as segmentation, bounding-box regression, or SVM classifiers. 
Furthermore, we analyzed scores of the proposed face detector for faces in different orientations and found that 1) the proposed method is able to detect faces from different angles and can handle occlusion to some extent, 2) there seems to be a correlation between distribution of positive examples in the training set and scores of the proposed face detector. The latter suggests that the proposed method's performance can be further improved by using better sampling strategies and more sophisticated data augmentation techniques. Evaluations on popular face detection benchmark datasets show that our single-model face detector algorithm has similar or better performance compared to the previous methods, which are more complex and require annotations of either different poses or facial landmarks.", "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. 
The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.", "" ] }
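The cascade principle described in the cited work — cheap early stages quickly reject background windows so that later, more expensive stages only scrutinize the survivors — can be sketched generically. This is an illustrative sketch with a hypothetical `cascade_detect` helper, not the cited six-CNN architecture; stages are arbitrary scoring functions here.

```python
def cascade_detect(windows, stages, thresholds):
    # Each stage scores the surviving candidate windows; windows scoring
    # below the stage threshold are rejected immediately, so cheap stages
    # prune most of the background before expensive stages ever run.
    surviving = list(windows)
    for stage, threshold in zip(stages, thresholds):
        surviving = [w for w in surviving if stage(w) >= threshold]
    return surviving
```

In a real detector the early stages would operate at low resolution and the final stage at high resolution, which is what keeps the overall cost low despite the discriminative power of the last stage.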
1703.09145
2953206417
Large-scale variations still pose a challenge in unconstrained face detection. To the best of our knowledge, no current face detection algorithm can detect a face as large as 800 x 800 pixels while simultaneously detecting another one as small as 8 x 8 pixels within a single image with equally high accuracy. We propose a two-stage cascaded face detection framework, Multi-Path Region-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a deep neural network with a classic learning strategy, to tackle this challenge. The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes faces at three different scales. It simultaneously utilizes three parallel outputs of the convolutional feature maps to predict multi-scale candidate face regions. The "atrous" convolution trick (convolution with up-sampled filters) and a newly proposed sampling layer for "hard" examples are embedded in MP-RPN to further boost its performance. The second stage is a Boosted Forests classifier, which utilizes deep facial features pooled from inside the candidate face regions as well as deep contextual features pooled from a larger region surrounding the candidate face regions. This step is included to further remove hard negative samples. Experiments show that this approach achieves state-of-the-art face detection performance on the WIDER FACE dataset "hard" partition, outperforming the former best result by 9.6% for the Average Precision.
It has been pointed out @cite_8 that the Region-of-Interest (ROI) pooling layer, when applied to low-resolution feature maps, can lead to "plain" features caused by bin collapsing. We note that this lost information leads to non-discriminative features for small regions. However, since detecting small-scale faces is one of the main objectives of this paper, we instead pool features from lower-level feature maps to reduce information collapsing. For example, we reduce information collapsing by using conv3 and conv4 of VGG16 @cite_7 , which have higher resolution, instead of the conv5 of VGG16 @cite_7 used by Faster RCNN @cite_1 and CMS-RCNN @cite_4 . The pooled features are then used to train a Boosted Forest (BF) classifier, as is done for pedestrian detection @cite_8 . But unlike @cite_8 , we also pool contextual information in addition to the facial features to further boost detection performance.
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_7", "@cite_8" ], "mid": [ "2432917172", "", "1686810756", "2497039038" ], "abstract": [ "Robust face detection in the wild is one of the ultimate components to support various facial related problems, i.e., unconstrained face recognition, facial periocular recognition, facial landmarking and pose estimation, facial expression recognition, 3D facial model construction, etc. Although the face detection problem has been intensely studied for decades with various commercial applications, it still meets problems in some real-world scenarios due to numerous challenges, e.g., heavy facial occlusions, extremely low resolutions, strong illumination, exceptional pose variations, image or video compression artifacts, etc. In this paper, we present a face detection approach named Contextual Multi-Scale Region-based Convolution Neural Network (CMS-RCNN) to robustly solve the problems mentioned above. Similar to the region-based CNNs, our proposed network consists of the region proposal component and the region-of-interest (RoI) detection component. However, far apart of that network, there are two main contributions in our proposed network that play a significant role to achieve the state-of-the-art performance in face detection. First, the multi-scale information is grouped both in region proposal and RoI detection to deal with tiny face regions. Second, our proposed network allows explicit body contextual reasoning in the network inspired from the intuition of human vision system. The proposed approach is benchmarked on two recent challenging face detection databases, i.e., the WIDER FACE Dataset which contains high degree of variability, as well as the Face Detection Dataset and Benchmark (FDDB). 
The experimental results show that our proposed approach trained on WIDER FACE Dataset outperforms strong baselines on WIDER FACE Dataset by a large margin, and consistently achieves competitive results on FDDB against the recent state-of-the-art face detection methods.", "", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Detecting pedestrian has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast/Faster R-CNN have shown excellent performance for general object detection, they have limited success for detecting pedestrian, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results.
We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available." ] }
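The bin-collapsing problem that motivates pooling from higher-resolution layers (conv3/conv4 rather than conv5) can be shown with a toy ROI max-pooling routine. This is a minimal NumPy sketch under stated assumptions: the function name, the ROI coordinate convention `(y0, x0, y1, x1)`, and the fixed bin grid are illustrative choices, not the cited implementation.

```python
import numpy as np

def roi_max_pool(fmap, roi, bins=2):
    """Max-pool a region of interest into a fixed bins x bins grid.
    On a low-resolution map a tiny ROI collapses into essentially one
    cell, so every output bin sees the same value ("plain" features)."""
    y0, x0, y1, x1 = roi
    ys = np.linspace(y0, y1, bins + 1).round().astype(int)
    xs = np.linspace(x0, x1, bins + 1).round().astype(int)
    out = np.empty((bins, bins))
    for i in range(bins):
        for j in range(bins):
            # clamp each bin to span at least one cell of the feature map
            cell = fmap[ys[i]:max(ys[i + 1], ys[i] + 1),
                        xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[i, j] = cell.max()
    return out
```

A large ROI yields distinct values per bin, whereas an ROI of one cell produces an identical value in every bin, i.e. no spatial discrimination, which is the effect pooling from higher-resolution maps is meant to avoid.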
1703.09145
2953206417
Large-scale variations still pose a challenge in unconstrained face detection. To the best of our knowledge, no current face detection algorithm can detect a face as large as 800 x 800 pixels while simultaneously detecting another one as small as 8 x 8 pixels within a single image with equally high accuracy. We propose a two-stage cascaded face detection framework, Multi-Path Region-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a deep neural network with a classic learning strategy, to tackle this challenge. The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes faces at three different scales. It simultaneously utilizes three parallel outputs of the convolutional feature maps to predict multi-scale candidate face regions. The "atrous" convolution trick (convolution with up-sampled filters) and a newly proposed sampling layer for "hard" examples are embedded in MP-RPN to further boost its performance. The second stage is a Boosted Forests classifier, which utilizes deep facial features pooled from inside the candidate face regions as well as deep contextual features pooled from a larger region surrounding the candidate face regions. This step is included to further remove hard negative samples. Experiments show that this approach achieves state-of-the-art face detection performance on the WIDER FACE dataset "hard" partition, outperforming the former best result by 9.6% for the Average Precision.
The proposed MP-RPN shares some similarity with the Single Shot Multibox Detector (SSD) @cite_31 and the Multi-Scale Convolutional Neural Network (MS-CNN) @cite_22 . Both methods use multi-scale feature maps to predict objects of different sizes in parallel. However, our work differs from these in two notable respects. First, we employ a fine-grained path to classify and localize tiny faces (as small as @math pixels). Both SSD and MS-CNN lack such a characteristic, since both were proposed to detect general objects, such as cars or tables, which have a much larger minimum size. Second, for the medium- and large-scale paths, we additionally employ the "atrous" convolution trick (convolution with up-sampled filters) @cite_23 together with normal convolution to acquire a larger field of view. In this way, we are able to use three paths to cover a large spectrum of face sizes, from @math to @math pixels. By comparison, SSD @cite_31 utilizes six paths to cover different object scales, which makes the network considerably more complex.
{ "cite_N": [ "@cite_31", "@cite_22", "@cite_23" ], "mid": [ "2193145675", "2490270993", "2412782625" ], "abstract": [ "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is proposed for fast multi-scale object detection. The MS-CNN consists of a proposal sub-network and a detection sub-network. In the proposal sub-network, detection is performed at multiple output layers, so that receptive fields match objects of different scales. 
These complementary scale-specific detectors are combined to produce a strong multi-scale object detector. The unified network is learned end-to-end, by optimizing a multi-task loss. Feature upsampling by deconvolution is also explored, as an alternative to input upsampling, to reduce the memory and computation costs. State-of-the-art object detection performance, at up to 15 fps, is reported on datasets, such as KITTI and Caltech, containing a substantial number of small objects.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. 
Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online." ] }
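The "atrous" convolution trick (convolution with up-sampled filters) referenced above enlarges the field of view without adding parameters, by inserting `rate - 1` zeros between filter taps. A minimal 1-D NumPy sketch, where the function name and interface are illustrative:

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """Dilated ("atrous") 1-D convolution (cross-correlation form).
    A k-tap filter at dilation `rate` covers (k-1)*rate + 1 input
    samples, so the receptive field grows with no extra weights."""
    k = len(w)
    span = (k - 1) * rate + 1  # effective receptive field
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        # sample the input every `rate` positions under each filter tap
        out[i] = sum(w[j] * x[i + j * rate] for j in range(k))
    return out
```

At `rate=1` this reduces to ordinary (valid-mode) cross-correlation; at `rate=2` a 3-tap filter already spans 5 input samples, which is how the medium- and large-scale paths can see more context at the same parameter cost.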
1703.09210
2949960002
We propose StyleBank, which is composed of multiple convolution filter banks and each filter bank explicitly represents one style, for neural image style transfer. To transfer an image to a specific style, the corresponding filter bank is operated on top of the intermediate feature embedding produced by a single auto-encoder. The StyleBank and the auto-encoder are jointly learnt, where the learning is conducted in such a way that the auto-encoder does not encode any style information thanks to the flexibility introduced by the explicit filter bank representation. It also enables us to conduct incremental learning to add a new image style by learning a new filter bank while holding the auto-encoder fixed. The explicit style representation along with the flexible network design enables us to fuse styles at not only the image level, but also the region level. Our method is the first style transfer network that links back to traditional texton mapping methods, and hence provides new understanding on neural style transfer. Our method is easy to train, runs in real-time, and produces results that are qualitatively better than or at least comparable to those of existing methods.
DeepDream @cite_6 may be the first attempt to generate artistic work using CNN. Inspired by this work, @cite_33 successfully applies CNN (pre-trained VGG-16 networks) to neural style transfer and produces more impressive stylization results compared to classic texture transfer methods. This idea is further extended to portrait painting style transfer @cite_20 and patch-based style transfer by combining Markov Random Field (MRF) and CNN @cite_8 . Unfortunately, these methods, based on an iterative optimization mechanism, are computationally expensive at run-time, which imposes a significant limitation on real applications.
{ "cite_N": [ "@cite_20", "@cite_33", "@cite_6", "@cite_8" ], "mid": [ "2461455396", "1924619199", "", "2952139859" ], "abstract": [ "Head portraits are popular in traditional painting. Automating portrait painting is challenging as the human visual system is sensitive to the slightest irregularities in human faces. Applying generic painting techniques often deforms facial structures. On the other hand, portrait painting techniques are mainly designed for the graphite style and/or are based on image analogies; an example painting as well as its original unpainted version are required. This limits their domain of applicability. We present a new technique for transferring the painting from a head portrait onto another. Unlike previous work, our technique only requires the example painting and is not restricted to a specific style. We impose novel spatial constraints by locally transferring the color distributions of the example painting. This better captures the painting texture and maintains the integrity of facial structures. We generate a solution through Convolutional Neural Networks and we present an extension to video. Here motion is exploited in a way to reduce temporal inconsistencies and the shower-door effect. Our approach transfers the painting style while maintaining the input photograph identity. In addition, it significantly reduces facial deformations over state of the art.", "In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. 
Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.", "", "This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images. The generative MRF acts on higher levels of a dCNN feature pyramid, controlling the image layout at an abstract level. We apply the method to both photographic and non-photo-realistic (artwork) synthesis tasks. The MRF regularizer prevents over-excitation artifacts and reduces implausible feature mixtures common to previous dCNN inversion approaches, permitting synthesizing photographic content with increased visual plausibility. Unlike standard MRF-based texture synthesis, the combined system can both match and adapt local features with considerable variability, yielding results far out of reach of classic generative MRF methods." ] }
1703.09210
2949960002
We propose StyleBank, which is composed of multiple convolution filter banks, where each filter bank explicitly represents one style, for neural image style transfer. To transfer an image to a specific style, the corresponding filter bank is operated on top of the intermediate feature embedding produced by a single auto-encoder. The StyleBank and the auto-encoder are jointly learnt, where the learning is conducted in such a way that the auto-encoder does not encode any style information thanks to the flexibility introduced by the explicit filter bank representation. It also enables us to conduct incremental learning to add a new image style by learning a new filter bank while holding the auto-encoder fixed. The explicit style representation along with the flexible network design enables us to fuse styles at not only the image level, but also the region level. Our method is the first style transfer network that links back to traditional texton mapping methods, and hence provides new understanding of neural style transfer. Our method is easy to train, runs in real-time, and produces results that are qualitatively better than, or at least comparable to, those of existing methods.
To make the run-time more efficient, an increasing number of works directly learn a feed-forward generator network for a specific style. This way, stylized results can be obtained with just a forward pass, which is hundreds of times faster than iterative optimization @cite_33 . For example, @cite_10 propose a texture network for both texture synthesis and style transfer. @cite_23 define a perceptual loss function to help learn a transfer network that aims to produce results approaching those of @cite_33 . @cite_38 introduce Markovian Generative Adversarial Networks, aiming to speed up their previous work @cite_8 .
{ "cite_N": [ "@cite_38", "@cite_33", "@cite_8", "@cite_23", "@cite_10" ], "mid": [ "2951745349", "1924619199", "2952139859", "2950689937", "2952226636" ], "abstract": [ "This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required any longer at generation time, our run-time performance (0.25M pixel images at 25Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.", "In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. 
The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.", "This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images. The generative MRF acts on higher levels of a dCNN feature pyramid, controlling the image layout at an abstract level. We apply the method to both photographic and non-photo-realistic (artwork) synthesis tasks. The MRF regularizer prevents over-excitation artifacts and reduces implausible feature mixtures common to previous dCNN inversion approaches, permitting synthesizing photographic content with increased visual plausibility. Unlike standard MRF-based texture synthesis, the combined system can both match and adapt local features with considerable variability, yielding results far out of reach of classic generative MRF methods.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. 
Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their method requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions." ] }
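The optimization-based transfer that these feed-forward methods approximate matches style statistics such as Gram matrices of deep features. A minimal sketch of that statistic (random NumPy arrays stand in for VGG activations; the function names are ours, not from any of the cited papers):

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a (channels, positions) feature map -- the style
    statistic matched in Gatys-style optimization."""
    c, n = feat.shape
    return feat @ feat.T / n

def style_loss(feat_generated, feat_style):
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(feat_generated) - gram_matrix(feat_style)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
f_style = rng.standard_normal((4, 32))   # stand-in for VGG activations
f_other = rng.standard_normal((4, 32))
print(style_loss(f_style, f_style))      # identical features -> 0.0
print(style_loss(f_other, f_style))      # mismatched statistics -> > 0
```

A feed-forward generator is trained so that its outputs drive such a loss down in a single pass, instead of running the iterative optimization at test time.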
1703.09210
2949960002
We propose StyleBank, which is composed of multiple convolution filter banks and each filter bank explicitly represents one style, for neural image style transfer. To transfer an image to a specific style, the corresponding filter bank is operated on top of the intermediate feature embedding produced by a single auto-encoder. The StyleBank and the auto-encoder are jointly learnt, where the learning is conducted in such a way that the auto-encoder does not encode any style information thanks to the flexibility introduced by the explicit filter bank representation. It also enables us to conduct incremental learning to add a new image style by learning a new filter bank while holding the auto-encoder fixed. The explicit style representation along with the flexible network design enables us to fuse styles at not only the image level, but also the region level. Our method is the first style transfer network that links back to traditional texton mapping methods, and hence provides new understanding on neural style transfer. Our method is easy to train, runs in real-time, and produces results that qualitatively better or at least comparable to existing methods.
At the core of our network, the proposed StyleBank represents each style by a convolution filter bank. It is very analogous to the concept of "texton" @cite_15 @cite_11 @cite_26 and the filter banks in @cite_39 @cite_1 , but is defined in the feature embedding space produced by an auto-encoder @cite_13 rather than in image space. As is known, an embedding space can provide a compact and descriptive representation of the original data @cite_24 @cite_30 @cite_5 . Therefore, our StyleBank would provide a better representation for style data compared to predefined dictionaries (such as wavelets @cite_7 or pyramids @cite_19 ).
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_7", "@cite_1", "@cite_39", "@cite_24", "@cite_19", "@cite_5", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "", "2248685949", "2127006916", "2185897478", "", "2132339004", "", "2470475590", "2098800028", "2100495367", "2128057924" ], "abstract": [ "", "In this paper, we describe a novel technique, called motion texture, for synthesizing complex human-figure motion (e.g., dancing) that is statistically similar to the original motion captured data. We define motion texture as a set of motion textons and their distribution, which characterize the stochastic and dynamic nature of the captured motion. Specifically, a motion texton is modeled by a linear dynamic system (LDS) while the texton distribution is represented by a transition matrix indicating how likely each texton is switched to another. We have designed a maximum likelihood algorithm to learn the motion textons and their relationship from the captured dance motion. The learnt motion texture can then be used to generate new animations automatically and/or edit animation sequences interactively. Most interestingly, motion texture can be manipulated at different levels, either by changing the fine details of a specific motion at the texton level or by designing a new choreography at the distribution level. Our approach is demonstrated by many synthesized sequences of visually compelling dance motion.", "We present a universal statistical model for texture images in the context of an overcomplete complex wavelet transform. The model is parameterized by a set of statistics computed on pairs of coefficients corresponding to basis functions at adjacent spatial locations, orientations, and scales. We develop an efficient algorithm for synthesizing random images subject to these constraints, by iteratively projecting onto the set of images satisfying each constraint, and we use this to test the perceptual validity of the model. 
In particular, we demonstrate the necessity of subgroups of the parameter set by showing examples of texture synthesis that fail when those parameters are removed from the set. We also demonstrate the power of our model by successfully synthesizing examples drawn from a diverse collection of artificial and natural textures.", "The convolutional neural network (ConvNet or CNN) has proven to be very successful in many tasks such as those in computer vision. In this conceptual paper, we study the generative perspective of the discriminative CNN. In particular, we propose to learn the generative FRAME (Filters, Random field, And Maximum Entropy) model using the highly expressive filters pre-learned by the CNN at the convolutional layers. We show that the learning algorithm can generate realistic and rich object and texture patterns in natural scenes. We explain that each learned model corresponds to a new CNN unit at a layer above the layer of filters employed by the model. We further show that it is possible to learn a new layer of CNN units using a generative CNN model, which is a product of experts model, and the learning algorithm admits an EM interpretation with binary latent variables.", "", "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. 
The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows one to take advantage of longer contexts.", "", "We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose a novel approach that models future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. Future frame synthesis is challenging, as it involves low- and high-level image and motion understanding. We propose a novel network structure, namely a Cross Convolutional Network to aid in synthesizing future frames; this network structure encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world videos. 
We also show that our model can be applied to tasks such as visual analogy-making, and present an analysis of the learned network representations.", "The paper makes two contributions: it provides (1) an operational definition of textons, the putative elementary units of texture perception, and (2) an algorithm for partitioning the image into disjoint regions of coherent brightness and texture, where boundaries of regions are defined by peaks in contour orientation energy and differences in texton densities across the contour. B. Julesz (1981) introduced the term texton, analogous to a phoneme in speech recognition, but did not provide an operational definition for gray-level images. We re-invent textons as frequently co-occurring combinations of oriented linear filter outputs. These can be learned using a K-means approach. By mapping each pixel to its nearest texton, the image can be analyzed into texton channels, each of which is a point set where discrete techniques such as Voronoi diagrams become applicable. Local histograms of texton frequencies can be used with a χ² test for significant differences to find texture boundaries. Natural images contain both textured and untextured regions, so we combine this cue with that of the presence of peaks of contour energy derived from outputs of odd- and even-symmetric oriented Gaussian derivative filters. Each of these cues has a domain of applicability, so to facilitate cue combination we introduce a gating operator based on a statistical test for isotropy of Delaunay neighbors. Having obtained a local measure of how likely two nearby pixels are to belong to the same region, we use the spectral graph theoretic framework of normalized cuts to find partitions of the image into regions of coherent texture and brightness. 
Experimental results on a wide range of images are shown.", "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.", "Textons refer to fundamental micro-structures in natural images (and videos) and are considered as the atoms of pre-attentive human visual perception (Julesz, 1981). Unfortunately, the word \"texton\" remains a vague concept in the literature for lack of a good mathematical model. In this article, we first present a three-level generative image model for learning textons from texture images. In this model, an image is a superposition of a number of image bases selected from an over-complete dictionary including various Gabor and Laplacian of Gaussian functions at various locations, scales, and orientations. These image bases are, in turn, generated by a smaller number of texton elements, selected from a dictionary of textons. By analogy to the waveform-phoneme-word hierarchy in speech, the pixel-base-texton hierarchy presents an increasingly abstract visual description and leads to dimension reduction and variable decoupling. By fitting the generative model to observed images, we can learn the texton dictionary as parameters of the generative model. Then the paper proceeds to study the geometric, dynamic, and photometric structures of the texton representation by further extending the generative model to account for motion and illumination variations. 
(1) For the geometric structures, a texton consists of a number of image bases with deformable spatial configurations. The geometric structures are learned from static texture images. (2) For the dynamic structures, the motion of a texton is characterized by a Markov chain model in time which sometimes can switch geometric configurations during the movement. We call the moving textons as \"motons\". The dynamic models are learned using the trajectories of the textons inferred from video sequence. (3) For photometric structures, a texton represents the set of images of a 3D surface element under varying illuminations and is called a \"lighton\" in this paper. We adopt an illumination-cone representation where a lighton is a texton triplet. For a given light source, a lighton image is generated as a linear sum of the three texton bases. We present a sequence of experiments for learning the geometric, dynamic, and photometric structures from images and videos, and we also present some comparison studies with K-mean clustering, sparse coding, independent component analysis, and transformed component analysis. We shall discuss how general textons can be learned from generic natural images." ] }
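The texton-learning step described in the abstracts above boils down to K-means clustering of filter-bank responses. A minimal sketch (plain K-means on toy 2-D "responses"; the deterministic initialization is a simplification of ours, not from the cited papers):

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Plain K-means, as used to learn textons from filter responses.
    Simple deterministic init: evenly spaced data points as centers."""
    idx = np.linspace(0, len(points) - 1, k).astype(int)
    centers = points[idx].astype(float)          # fancy indexing copies
    for _ in range(iters):
        # distance of every point to every center, shape (n_points, k)
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# two well-separated blobs standing in for filter-bank responses
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                 rng.normal(5.0, 0.1, (20, 2))])
centers, labels = kmeans(pts, k=2)
```

Each point's nearest center is its "texton" label; histograms of these labels over local windows are what the χ² test then compares across a candidate boundary.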
1703.09426
2952595901
In this article we consider a consistent convex feasibility problem in a real Hilbert space defined by a finite family of sets @math . We are interested, in particular, in the case where for each @math , @math , @math is a cutter and @math is a proximity function. Moreover, we make the following assumption: the computation of @math is at most as difficult as the evaluation of @math and this is at most as difficult as projecting onto @math . We study a double-layer fixed point algorithm which applies two types of controls in every iteration step. The first one -- the outer control -- is assumed to be almost cyclic. The second one -- the inner control -- determines the most important sets from those offered by the first one. The selection is made in terms of proximity functions. The convergence results presented in this manuscript depend on the conditions which first, bind together the sets, the operators and the proximity functions and second, connect the inner and outer controls. In particular, weak regularity (demi-closedness principle), bounded regularity and bounded linear regularity imply weak, strong and linear convergence of our algorithm, respectively. The framework presented in this paper covers many known (subgradient) projection algorithms already existing in the literature; for example, those applied with (almost) cyclic, remotest-set, maximum displacement, most-violated constraint and simultaneous controls. In addition, we provide several new examples, where the double-layer approach indeed accelerates the convergence speed as we demonstrate numerically.
We would like to begin with two important general observations regarding the types of convergence one could expect. The first is that in the infinite-dimensional setting, in view of Hundal's counterexample, it may happen that even for basic cyclic or parallel projection methods the convergence can only be in the weak topology; see @cite_29 and @cite_34 . Moreover, the result of Bauschke, Deutsch and Hundal [Theorem 1.4] shows that norm convergence can be far away from a linear rate. Furthermore, it can be arbitrarily slow. See also @cite_2 in this connection. Thus both norm and linear convergence require some additional assumptions to which we refer in general as bounded regularity and bounded linear regularity.
{ "cite_N": [ "@cite_29", "@cite_34", "@cite_2" ], "mid": [ "", "2100904599", "2962956860" ], "abstract": [ "", "Recently, Hundal has constructed a hyperplane H, a cone K, and a starting point y0 in ℓ2 such that the sequence of alternating projections ((P_K P_H)^n y0)_{n∈N} converges weakly to some point in H ∩ K, but not in norm. We show how this construction results in a counterexample to norm convergence for iterates of averaged projections; hence, we give an affirmative answer to a question raised by Reich two decades ago. Furthermore, new counterexamples to norm convergence for iterates of firmly nonexpansive maps (à la Genel and Lindenstrauss) and for the proximal point algorithm (à la Güler) are provided. We also present a counterexample, along with some weak and norm convergence results, for the new framework of string-averaging projection methods introduced by Censor, Elfving and Herman. Extensions to Banach spaces and the situation for the Hilbert ball are discussed as well.", "A generalization of the cosine of the Friedrichs angle between two subspaces to a parameter associated to several closed subspaces in a Hilbert space is given. This is used to analyze the rate of convergence in the von Neumann-Halperin method of cyclic alternating projections. General dichotomy theorems are proved, in the Hilbert or Banach space situation, providing conditions under which the alternative QUC/ASC (quick uniform convergence versus arbitrarily slow convergence) holds. Several meanings for ASC are proposed." ] }
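The cyclic projection methods discussed in this record can be illustrated on two convex sets in the plane, a hyperplane and a closed ball, where alternating projections do converge in norm (finite dimensions). This is an illustrative sketch of the classical alternating scheme, not of the paper's double-layer algorithm:

```python
import numpy as np

def project_hyperplane(x, a, b):
    """Metric projection onto the hyperplane {x : <a, x> = b}."""
    return x - (a @ x - b) / (a @ a) * a

def project_ball(x, center, r):
    """Metric projection onto the closed ball B(center, r)."""
    d = np.linalg.norm(x - center)
    return x if d <= r else center + r * (x - center) / d

a, b = np.array([1.0, 1.0]), 1.0   # hyperplane x + y = 1
c, r = np.zeros(2), 1.0            # unit ball; the two sets intersect
x = np.array([5.0, -3.0])
for _ in range(50):                # cyclic control: P_ball ∘ P_hyperplane
    x = project_ball(project_hyperplane(x, a, b), c, r)
```

After the loop, x lies (up to numerical tolerance) in the intersection of the two sets; in infinite dimensions, as the abstracts above show, only weak convergence is guaranteed in general.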
1703.09218
2604455549
In visual exploration and analysis of data, determining how to select and transform the data for visualization is a challenge for data-unfamiliar or inexperienced users. Our main hypothesis is that for many data sets and common analysis tasks, there are relatively few "data slices" that result in effective visualizations. By focusing human users on appropriate and suitably transformed parts of the underlying data sets, these data slices can help the users carry their task to correct completion. To verify this hypothesis, we develop a framework that permits us to capture exemplary data slices for a user task, and to explore and parse visual-exploration sequences into a format that makes them distinct and easy to compare. We develop a recommendation system, DataSlicer, that matches a "currently viewed" data slice with the most promising "next effective" data slices for the given exploration task. We report the results of controlled experiments with an implementation of the DataSlicer system, using four common analytical task types. The experiments demonstrate statistically significant improvements in accuracy and exploration speed versus users without access to our system.
Significant advances have been made lately in developing various facets of visual solutions for data exploration and analysis. In this space, we focus mainly on projects that concentrate on the problem of finding the right visualization, e.g., @cite_13 @cite_16 @cite_2 @cite_15 . We refer the reader to the survey @cite_14 for a more general discussion of data-exploration techniques.
{ "cite_N": [ "@cite_14", "@cite_2", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2083619093", "2147931936", "1559591183", "", "2294895571" ], "abstract": [ "Data exploration is about efficiently extracting knowledge from data even if we do not know exactly what we are looking for. In this tutorial, we survey recent developments in the emerging area of database systems tailored for data exploration. We discuss new ideas on how to store and access data as well as new ideas on how to interact with a data system to enable users and applications to quickly figure out which data parts are of interest. In addition, we discuss how to exploit lessons-learned from past research, the new challenges data exploration crafts, emerging applications and future research directions.", "Data analysts operating on large volumes of data often rely on visualizations to interpret the results of queries. However, finding the right visualization for a query is a laborious and time-consuming task. We demonstrate SeeDB, a system that partially automates this task: given a query, SeeDB explores the space of all possible visualizations, and automatically identifies and recommends to the analyst those visualizations it finds to be most \"interesting\" or \"useful\". In our demonstration, conference attendees will see SeeDB in action for a variety of queries on multiple real-world datasets.", "Curiosity, a fundamental drive amongst higher living organisms, is what enables exploration, learning and creativity. In our increasingly data-driven world, data exploration, i.e., making sense of mounting haystacks of data, is akin to intelligence for science, business and individuals. However, modern data systems -- designed for data retrieval rather than exploration -- only let us retrieve data and ask if it is interesting. This makes knowledge discovery a game of hit-and-trial which can only be orchestrated by expert data scientists. 
We present the vision toward Queriosity, an automated and personalized data exploration system. Designed on the principles of autonomy, learning and usability, Queriosity envisions a paradigm shift in data exploration and aims to become a personalized \"data robot\" that provides a direct answer to what is interesting in a user's data set, instead of just retrieving data. Queriosity autonomously and continuously navigates toward interesting findings based on trends, statistical properties and interactive user feedback.", "", "Traditional DBMSs are suited for applications in which the structure, meaning and contents of the database, as well as the questions to be asked are already well understood. There is, however, a class of applications that we will collectively refer to as Interactive Data Exploration (IDE) applications, in which this is not the case. IDE is a key ingredient of a diverse set of discovery-oriented applications we are dealing with, including ones from scientific computing, financial analysis, evidence-based medicine, and genomics. The need for effective IDE will only increase as data are being collected at an unprecedented rate. IDE is fundamentally a multi-step, non-linear process with imprecise end-goals. For example, data-driven scientific discovery through IDE often requires non-expert users to iteratively interact with the system to make sense of and to identify interesting patterns and relationships in large, amorphous data sets. To make the most of the increasingly available complex and big data sets, users would need an \"expert assistant\" who would be able to effectively and efficiently guide them through the data space. Having a human assistant is not only expensive but also unrealistic. Thus, it is essential that we automate this task. We propose database systems be augmented with an automated \"database navigator\" (DBNav) service that assists as a \"tour guide\" to facilitate IDE. 
Just like a car navigation system that offers advice on the routes to be taken and displays points of interest, DBNav would similarly steer the user towards interesting \"trajectories\" through the data, while highlighting relevant features. Like any good tour guide, DBNav should consider many kinds of information; in particular, it should be sensitive to a user's goals and interests, as well as common navigation patterns that applications exhibit. We sketch a general data navigation framework and discuss some specific components and approaches that we believe belong to any such system." ] }
1703.09218
2604455549
In visual exploration and analysis of data, determining how to select and transform the data for visualization is a challenge for data-unfamiliar or inexperienced users. Our main hypothesis is that for many data sets and common analysis tasks, there are relatively few "data slices" that result in effective visualizations. By focusing human users on appropriate and suitably transformed parts of the underlying data sets, these data slices can help the users carry their task to correct completion. To verify this hypothesis, we develop a framework that permits us to capture exemplary data slices for a user task, and to explore and parse visual-exploration sequences into a format that makes them distinct and easy to compare. We develop a recommendation system, DataSlicer, that matches a "currently viewed" data slice with the most promising "next effective" data slices for the given exploration task. We report the results of controlled experiments with an implementation of the DataSlicer system, using four common analytical task types. The experiments demonstrate statistically significant improvements in accuracy and exploration speed versus users without access to our system.
Finally, a good example of a collaborative tool for visualizing data is AstroShelf @cite_17 . This tool is specifically tailored for astrophysicists and, unlike ours, aims more at facilitating collaborations than recommending visualizations.
{ "cite_N": [ "@cite_17" ], "mid": [ "2058442663" ], "abstract": [ "This demo presents AstroShelf, our on-going effort to enable astrophysicists to collaboratively investigate celestial objects using data originating from multiple sky surveys, hosted at different sites. The AstroShelf platform combines database and data stream, workflow and visualization technologies to provide a means for querying and displaying telescope images (in a Google Sky manner), visualizations of spectrum data, and for managing annotations. In addition to the user interface, AstroShelf supports a programmatic interface (available as a web service), which allows astrophysicists to incorporate functionality from AstroShelf in their own programs. A key feature is Live Annotations which is the detection and delivery of events or annotations to users in real-time, based on their profiles. We demonstrate the capabilities of AstroShelf through real end-user exploration scenarios (with participation from \"stargazers\" in the audience), in the presence of simulated annotation workloads executed through web services." ] }
1703.09307
2952047388
We introduce a community detection algorithm (Fluid Communities) based on the idea of fluids interacting in an environment, expanding and contracting as a result of that interaction. Fluid Communities is based on the propagation methodology, which represents the state-of-the-art in terms of computational cost and scalability. While being highly efficient, Fluid Communities is able to find communities in synthetic graphs with an accuracy close to the current best alternatives. Additionally, Fluid Communities is the first propagation-based algorithm capable of identifying a variable number of communities in a network. To illustrate the relevance of the algorithm, we evaluate the diversity of the communities found by Fluid Communities, and find them to be significantly different from the ones found by alternative methods.
The most recent evaluation and comparison of CD algorithms was made by , where the following eight algorithms were compared in terms of Normalized Mutual Information (NMI) and computing time: Edge Betweenness @cite_10 , Fast greedy @cite_14 , Infomap @cite_4 @cite_11 , Label Propagation @cite_8 , Leading Eigenvector @cite_15 , Multilevel (Louvain) @cite_12 , Spinglass @cite_9 and Walktrap @cite_3 . The performance of these eight algorithms was measured on artificially generated graphs provided by the LFR benchmark @cite_5 , which defines a more realistic setting than the alternative GN benchmark @cite_0 , including scale-free degree and cluster size distributions. One of the main conclusions of this study is that the Multilevel algorithm is the most competitive overall in terms of CD quality.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_9", "@cite_3", "@cite_0", "@cite_5", "@cite_15", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2047940964", "2164998314", "2132202037", "2025543856", "2033590892", "2095293504", "2023655578", "2015953751", "", "2131681506", "2124209874" ], "abstract": [ "The discovery and analysis of community structure in networks is a topic of considerable recent interest within the physics community, but most methods proposed so far are unsuitable for very large networks because of their computational cost. Here we present a hierarchical agglomeration algorithm for detecting community structure which is faster than many competing algorithms: its running time on a network with n vertices and m edges is O(m d log n) where d is the depth of the dendrogram describing the community structure. Many real-world networks are sparse and hierarchical, with m ~ n and d ~ log n, in which case our algorithm runs in essentially linear time, O(n log^2 n). As an example of the application of this algorithm we use it to analyze a network of items for sale on the web-site of a large online retailer, items in the network being linked if they are frequently purchased by the same buyer. The network has more than 400,000 vertices and 2 million edges. We show that our algorithm can extract meaningful communities from this network, revealing large-scale patterns present in the purchasing habits of customers.", "To comprehend the multipartite organization of large-scale biological and social systems, we introduce an information theoretic approach that reveals community structure in weighted and directed networks. We use the probability flow of random walks on a network as a proxy for information flows in the real system and decompose the network into modules by compressing a description of the probability flow. The result is a map that both simplifies and highlights the regularities in the structure and their relationships. 
We illustrate the method by making a map of scientific communication as captured in the citation patterns of >6,000 journals. We discover a multicentric organization with fields that vary dramatically in size and degree of integration into the network of science. Along the backbone of the network—including physics, chemistry, molecular biology, and medicine—information flows bidirectionally, but the map reveals a directional pattern of citation from the applied fields to the basic sciences.", "Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm takes an almost linear time and hence it is computationally less expensive than what was possible so far.", "Institute for Theoretical Physics, University of Bremen, Otto-Hahn-Allee, D-28359 Bremen, Germany (Dated: February 3, 2008). Starting from a general ansatz, we show how community detection can be interpreted as finding the ground state of an infinite range spin glass. 
Our approach applies to weighted and directed networks alike. It contains the ad hoc introduced quality function from [1] and the modularity Q as defined by Newman and Girvan [2] as special cases. The community structure of the network is interpreted as the spin configuration that minimizes the energy of the spin glass with the spin states being the community indices. We elucidate the properties of the ground state configuration to give a concise definition of communities as cohesive subgroups in networks that is adaptive to the specific class of network under study. Further we show how hierarchies and overlap in the community structure can be detected. Computationally effective local update rules for optimization procedures to find the ground state are given. We show how the ansatz may be used to discover the community around a given node without detecting all communities in the full network and we give benchmarks for the performance of this extension. Finally, we give expectation values for the modularity of random graphs, which can be used in the assessment of statistical significance of community structure.", "In a representative embodiment of the invention described herein, a well logging system for investigating subsurface formations is controlled by a general purpose computer programmed for real-time operation. The system is cooperatively arranged to provide for all aspects of a well logging operation, such as data acquisition and processing, tool control, information or data storage, and data presentation as a well logging tool is moved through a wellbore. The computer controlling the system is programmed to provide for data acquisition and tool control commands in direct response to asynchronous real-time external events. 
Such real-time external events may occur, for example, as a result of movement of the logging tool over a selected depth interval, or in response to requests or commands directed to the system by the well logging engineer by means of keyboard input.", "We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible \"betweenness\" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems.", "Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed but the crucial issue of testing, i.e., the question of how good an algorithm is, with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure, that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a class of benchmark graphs, that account for the heterogeneity in the distributions of node degrees and of community sizes. 
We use this benchmark to test two popular methods of community detection, modularity optimization, and Potts model clustering. The results show that the benchmark poses a much more severe test to algorithms than standard benchmarks, revealing limits that may not be apparent at a first analysis.", "We consider the problem of detecting communities or modules in networks, groups of vertices with a higher-than-average density of edges connecting them. Previous work indicates that a robust approach to this problem is the maximization of the benefit function known as \"modularity\" over possible divisions of a network. Here we show that this maximization process can be written in terms of the eigenspectrum of a matrix we call the modularity matrix, which plays a role in community detection similar to that played by the graph Laplacian in graph partitioning calculations. This result leads us to a number of possible algorithms for detecting community structure, as well as several other results, including a spectral measure of bipartite structure in networks and a new centrality measure that identifies those vertices that occupy central positions within the communities to which they belong. The algorithms and measures proposed are illustrated with applications to a variety of real-world complex networks.", "", "We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. 
The accuracy of our algorithm is also verified on ad hoc modular networks.", "Many real-world networks are so large that we must simplify their structure before we can extract useful information about the systems they represent. As the tools for doing these simplifications proliferate within the network literature, researchers would benefit from some guidelines about which of the so-called community detection algorithms are most appropriate for the structures they are studying and the questions they are asking. Here we show that different methods highlight different aspects of a network's structure and that the sort of information that we seek to extract about the system must guide us in our decision. For example, many community detection algorithms, including the popular modularity maximization approach, infer module assignments from an underlying model of the network formation process. However, we are not always as interested in how a system's network structure was formed, as we are in how a network's extant structure influences the system's behavior. To see how structure influences current behavior, we will recognize that links in a network induce movement across the network and result in system-wide interdependence. In doing so, we explicitly acknowledge that most networks carry flow. To highlight and simplify the network structure with respect to this flow, we use the map equation. We present an intuitive derivation of this flow-based and information-theoretic method and provide an interactive on-line application that anyone can use to explore the mechanics of the map equation. The differences between the map equation and the modularity maximization approach are not merely conceptual. Because the map equation attends to patterns of flow on the network and the modularity maximization approach does not, the two methods can yield dramatically different results for some network structures. 
To illustrate this and build our understanding of each method, we partition several sample networks. We also describe an algorithm and provide source code to efficiently decompose large weighted and directed networks based on the map equation." ] }
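The label propagation algorithm (@cite_8) summarized in the record above is simple enough to sketch. The version below is a minimal illustration, not the paper's implementation: it uses a fixed update order and a deterministic tie-break (keep the current label when it is among the winners, otherwise adopt the largest tied label), whereas the original algorithm randomizes both.

```python
from collections import Counter

def label_propagation(adj, max_iters=100):
    """Community detection by label propagation: every node starts with
    a unique label and repeatedly adopts the label held by most of its
    neighbours, until no label changes.  Deterministic sketch; the
    original algorithm breaks ties and orders node updates randomly."""
    labels = {v: v for v in adj}          # unique initial labels
    for _ in range(max_iters):
        changed = False
        for v in sorted(adj):             # fixed, asynchronous update order
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            if counts.get(labels[v], 0) == best:
                continue                  # current label still wins: keep it
            labels[v] = max(l for l, c in counts.items() if c == best)
            changed = True
        if not changed:                   # converged to a fixpoint
            break
    return labels

# two 4-cliques joined by a single bridge edge (3, 4)
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
labels = label_propagation(adj)           # the two cliques get distinct labels
```

Each pass costs time linear in the number of edges, which is the "almost linear time" behaviour the abstract refers to; the number of passes needed is typically very small.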
1703.09307
2952047388
We introduce a community detection algorithm (Fluid Communities) based on the idea of fluids interacting in an environment, expanding and contracting as a result of that interaction. Fluid Communities is based on the propagation methodology, which represents the state-of-the-art in terms of computational cost and scalability. While being highly efficient, Fluid Communities is able to find communities in synthetic graphs with an accuracy close to the current best alternatives. Additionally, Fluid Communities is the first propagation-based algorithm capable of identifying a variable number of communities in a network. To illustrate the relevance of the algorithm, we evaluate the diversity of the communities found by Fluid Communities, and find them to be significantly different from the ones found by alternative methods.
A similar comparison of CD algorithms was previously reported by . In this work twelve algorithms were considered, some of them also present in the study of @cite_7 (Edge Betweenness, Fastgreedy, Multilevel and Infomap). In this study, the algorithms were compared under the GN benchmark, the LFR benchmark, and on random graphs. In their summary, authors recommend using various algorithms when studying the community structure of a graph for obtaining , and suggest Infomap, Multilevel and the Multiresolution algorithm @cite_13 as the best candidates. Results from both @cite_7 and @cite_2 indicate that the fastest CD algorithm is the well-known LPA algorithm, due to the efficiency and scalability of the propagation methodology.
{ "cite_N": [ "@cite_13", "@cite_7", "@cite_2" ], "mid": [ "1971729008", "1995996823", "" ], "abstract": [ "We use a Potts model community detection algorithm to accurately and quantitatively evaluate the hierarchical or multiresolution structure of a graph. Our multiresolution algorithm calculates correlations among multiple copies (\"replicas\") of the same graph over a range of resolutions. Significant multiresolution structures are identified by strongly correlated replicas. The average normalized mutual information, the variation in information, and other measures, in principle, give a quantitative estimate of the \"best\" resolutions and indicate the relative strength of the structures in the graph. Because the method is based on information comparisons, it can, in principle, be used with any community detection model that can examine multiple resolutions. Our approach may be extended to other optimization problems. As a local measure, our Potts model avoids the \"resolution limit\" that affects other popular models. With this model, our community detection algorithm has an accuracy that ranks among the best of currently available methods. Using it, we can examine graphs over @math nodes and more than @math edges. We further report that the multiresolution variant of our algorithm can solve systems of at least @math nodes and @math edges on a single processor with exceptionally high accuracy. For typical cases, we find a superlinear scaling @math for community detection and @math for the multiresolution algorithm, where @math is the number of edges and @math is the number of nodes in the system.", "Uncovering the community structure exhibited by real networks is a crucial step toward an understanding of complex systems that goes beyond the local organization of their constituents. Many algorithms have been proposed so far, but none of them has been subjected to strict tests to evaluate their performance. 
Most of the sporadic tests performed so far involved small networks with known community structure and or artificial graphs with a simplified structure, which is very uncommon in real systems. Here we test several methods against a recently introduced class of benchmark graphs, with heterogeneous distributions of degree and community size. The methods are also tested against the benchmark by Girvan and Newman [Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002)] and on random graphs. As a result of our analysis, three recent algorithms introduced by Rosvall and Bergstrom [Proc. Natl. Acad. Sci. U.S.A. 104, 7327 (2007); Proc. Natl. Acad. Sci. U.S.A. 105, 1118 (2008)], [J. Stat. Mech.: Theory Exp. (2008), P10008], and Ronhovde and Nussinov [Phys. Rev. E 80, 016109 (2009)] have an excellent performance, with the additional advantage of low computational complexity, which enables one to analyze large systems.", "" ] }
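Both comparative studies cited in this record rank algorithms by Normalized Mutual Information between the detected and the planted partitions. As a reference point, here is a from-scratch sketch of NMI over two label sequences, using the common 2I/(H_a + H_b) normalization (other variants divide by the maximum or the square root of the two entropies):

```python
from collections import Counter
from math import log

def nmi(part_a, part_b):
    """Normalized mutual information between two partitions of the same
    node set, given as equal-length label sequences."""
    n = len(part_a)
    assert n == len(part_b) > 0, "partitions must cover the same nodes"
    ca, cb = Counter(part_a), Counter(part_b)
    joint = Counter(zip(part_a, part_b))
    # mutual information I(A;B), in nats
    mi = sum((c / n) * log(c * n / (ca[a] * cb[b]))
             for (a, b), c in joint.items())

    def entropy(counts):
        return -sum((c / n) * log(c / n) for c in counts.values())

    denom = entropy(ca) + entropy(cb)
    return 2 * mi / denom if denom > 0 else 1.0

print(nmi([0, 0, 1, 1], [5, 5, 9, 9]))  # identical partitions -> 1.0
print(nmi([0, 0, 1, 1], [0, 1, 0, 1]))  # independent split    -> 0.0
```

NMI is invariant to relabeling of the communities, which is why it is the standard score on benchmarks such as LFR where only the grouping, not the label values, is planted.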
1703.09398
2604264634
The problem of fake news has gained a lot of attention as it is claimed to have had a significant impact on the 2016 US Presidential Elections. Fake news is not a new problem and its spread in social networks is well-studied. Often an underlying assumption in fake news discussion is that it is written to look like real news, fooling the reader who does not check for reliability of the sources or the arguments in its content. Through a unique study of three data sets and features that capture the style and the language of articles, we show that this assumption is not true. Fake news in most cases is more similar to satire than to real news, leading us to conclude that persuasion in fake news is achieved through heuristics rather than the strength of arguments. We show overall title structure and the use of proper nouns in titles are very significant in differentiating fake from real. This leads us to conclude that fake news is targeted for audiences who are not likely to read beyond titles and is aimed at creating mental associations between entities and claims.
The spread of misinformation in networks has also been studied. Specifically, study the attention given to misinformation on Facebook. They show that users who often interact with alternative media are more prone to interact with intentional false claims @cite_18 . Very recently, launched a platform for tracking online misinformation called Hoaxy @cite_7 . Hoaxy gathers social news shares and fact-checking through a mix of web scraping, web syndication, and social network APIs. The goal of Hoaxy is to track both truthful and untruthful online information automatically. However, Hoaxy does not do any fact-checking of its own, rather relying on the efforts of fact-checkers such as snopes.com.
{ "cite_N": [ "@cite_18", "@cite_7" ], "mid": [ "2154861391", "2296706733" ], "abstract": [ "In this work we present a thorough quantitative analysis of information consumption patterns of qualitatively different information on Facebook. Pages are categorized, according to their topics and the communities of interests they pertain to, in a) alternative information sources (diffusing topics that are neglected by science and main stream media); b) online political activism; and c) main stream media. We find similar information consumption patterns despite the very different nature of contents. Then, we classify users according to their interaction patterns among the different topics and measure how they responded to the injection of 2788 false information (parodistic imitations of alternative stories). We find that users prominently interacting with alternative information sources – i.e. more exposed to unsubstantiated claims – are more prone to interact with intentional and parodistic false claims.", "Massive amounts of misinformation have been observed to spread in uncontrolled fashion across social media. Examples include rumors, hoaxes, fake news, and conspiracy theories. At the same time, several journalistic organizations devote significant efforts to high-quality fact checking of online claims. The resulting information cascades contain instances of both accurate and inaccurate information, unfold over multiple time scales, and often reach audiences of considerable size. All these factors pose challenges for the study of the social dynamics of online news sharing. Here we introduce Hoaxy, a platform for the collection, detection, and analysis of online misinformation and its related fact-checking efforts. We discuss the design of the platform and present a preliminary analysis of a sample of public tweets containing both fake news and fact checking. 
We find that, in the aggregate, the sharing of fact-checking content typically lags that of misinformation by 10-20 hours. Moreover, fake news are dominated by very active users, while fact checking is a more grass-roots activity. With the increasing risks connected to massive online misinformation, social news observatories have the potential to help researchers, journalists, and the general public understand the dynamics of real and fake news sharing." ] }
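The abstract of this record argues that title structure and proper-noun usage separate fake from real news. A crude stylometric sketch of such headline features follows; the feature names, the stop-word list, and the capitalized-word proxy for proper nouns are illustrative assumptions, not the paper's actual feature set (which would use proper linguistic tagging):

```python
import re

# tiny illustrative stop-word list; a real study would use a standard one
STOPWORDS = {"the", "a", "an", "of", "to", "in", "on", "and", "is", "you"}

def title_features(title):
    """Crude stylometric features of a headline: length, capitalised
    non-initial words (rough proper-noun proxy), all-caps words, and
    the fraction of stop words."""
    words = re.findall(r"[A-Za-z']+", title)
    non_initial = words[1:]            # skip the sentence-initial capital
    return {
        "n_words": len(words),
        "n_capitalized": sum(w[0].isupper() for w in non_initial),
        "n_all_caps": sum(w.isupper() and len(w) > 1 for w in words),
        "pct_stopwords": sum(w.lower() in STOPWORDS for w in words)
                         / max(len(words), 1),
    }

f = title_features("BREAKING: Obama Signs Executive Order You Won't Believe")
```

Feature vectors of this kind, computed per title, are the sort of input a style-based classifier would train on.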
1703.09324
2951592014
We study algorithmic problems on subsets of Euclidean space of low fractal dimension. These spaces are the subject of intensive study in various branches of mathematics, including geometry, topology, and measure theory. There are several well-studied notions of fractal dimension for sets and measures in Euclidean space. We consider a definition of fractal dimension for finite metric spaces which agrees with standard notions used to empirically estimate the fractal dimension of various sets. We define the fractal dimension of some metric space to be the infimum @math , such that for any @math , for any ball @math of radius @math , and for any @math -net @math (that is, for any maximal @math -packing), we have @math . Using this definition we obtain faster algorithms for a plethora of classical problems on sets of low fractal dimension in Euclidean space. Our results apply to exact and fixed-parameter algorithms, approximation schemes, and spanner constructions. Interestingly, the dependence of the performance of these algorithms on the fractal dimension nearly matches the currently best-known dependence on the standard Euclidean dimension. Thus, when the fractal dimension is strictly smaller than the ambient dimension, our results yield improved solutions in all of these settings.
There is a large body of work on various notions of dimensionality in computational geometry. Most notably, there has been a lot of effort on determining the effect of doubling dimension on the complexity of many problems @cite_21 @cite_23 @cite_11 @cite_0 @cite_12 @cite_8 @cite_26 @cite_6 @cite_25 @cite_19 . Other notions that have been considered include low-dimensional negatively curved spaces @cite_9 , growth-restricted metrics @cite_2 , as well as generalizations of doubling dimension to metrics of bounded global growth @cite_7 .
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_8", "@cite_9", "@cite_21", "@cite_6", "@cite_0", "@cite_19", "@cite_23", "@cite_2", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2154201790", "1986948282", "", "2017848317", "2045134120", "2091221928", "2107342718", "2053578961", "2028047126", "2169036209", "181346458", "1769399376", "2144397897" ], "abstract": [ "Given a metric M = (V, d), a graph G = (V, E) is a t-spanner for M if every pair of nodes in V has a \"short\" path (i.e., of length at most t times their actual distance) between them in the spanner. Furthermore, this spanner has a hop diameter bounded by D if every such short path also uses at most D edges. We consider the problem of constructing sparse (1 + e)-spanners with small hop diameter for metrics of low doubling dimension.In this paper, we show that given any metric with constant doubling dimension k, and any 0 < e < 1, one can find a (1 + e)-spanner for the metric with nearly linear number of edges (i.e., only O(n log* n + ne-O(k)) edges) and a constant hop diameter, and also a (1 + e)-spanner with linear number of edges (i.e., only ne-O(k) edges) which achieves a hop diameter that grows like the functional inverse of the Ackermann's function. Moreover, we prove that such tradeoffs between the number of edges and the hop diameter are asymptotically optimal.", "The traveling salesman problem (TSP) is a canonical NP-complete problem which is proved by Trevisan [SIAM J. Comput., 30 (2000), pp. 475--485] to be MAX-SNP hard even on high-dimensional Euclidean metrics. To circumvent this hardness, researchers have been developing approximation schemes for „simpler” instances of the problem. For instance, the algorithms of Arora and of Talwar show how to approximate TSP on low-dimensional metrics (for different notions of metric dimension). However, a feature of most current notions of metric dimension is that they are „local”: the definitions require every local neighborhood to be well-behaved. 
In this paper, we define a global notion of dimension that generalizes the popular notion of doubling dimension, but still allows some small dense regions; e.g., it allows some metrics that contain cliques of size @math . Given a metric with global dimension @math , we give a @math -approximation algorithm that runs in subexponential time, i.e., in $ (O(n^...", "", "We initiate the study of approximate algorithms on negatively curved spaces. These spaces have recently become of interest in various domains of computer science including networking and vision. The classical example of such a space is the real-hyperbolic space H ^d for d 2, but our approach applies to a more general family of spaces characterized by Gromov's (combinatorial) hyperbolic condition. We give efficient algorithms and data structures for problems like approximate nearest-neighbor search and compact, low-stretch routing on subsets of negatively curved spaces of fixed dimension (including H ^d as a special case). In a different direction, we show that there is a PTAS for the Traveling Salesman Problem when the set of cities lie, for example, in H ^d. This generalizes Arora's results for R ^d. Most of our algorithms use the intrinsic distance geometry of the data set, and only need the existence of an embedding into some negatively curved space in order to function properly. In other words, our algorithms regard the interpoint distance function as a black box, and are independent of the representation of the input points.", "We present a near linear time algorithm for constructing hierarchical nets in finite metric spaces with constant doubling dimension. This data-structure is then applied to obtain improved algorithms for the following problems: approximate nearest neighbor search, well-separated pair decomposition, spanner construction, compact representation scheme, doubling measure, and computation of the (approximate) Lipschitz constant of a function. 
In all cases, the running (preprocessing) time is near linear and the space being used is linear.", "We study the problem of routing in doubling metrics, and show how to perform hierarchical routing in such metrics with small stretch and compact routing tables (i.e., with small amount of routing information stored at each vertex). We say that a metric (X, d) has doubling dimension dim(X) at most α if every set of diameter D can be covered by 2^α sets of diameter D/2. (A doubling metric is one whose doubling dimension dim(X) is a constant.) We show how to perform (1 + τ)-stretch routing on metrics for any τ > 0, we give algorithms to construct (1 + τ)-stretch spanners for a metric (X, d) with maximum degree at most (2 + 1/τ)^O(dim(X)), matching the results of for Euclidean metrics.", "We define a natural notion of efficiency for approximate nearest-neighbor (ANN) search in general n-point metric spaces, namely the existence of a randomized algorithm which answers (1 + e)-ANN queries in polylog(n) time using only polynomial space. We then study which families of metric spaces admit efficient ANN schemes in the black-box model, where only oracle access to the distance function is given, and any query consistent with the triangle inequality may be asked. For e < 2/5, we offer a complete answer to this problem. Using the notion of metric dimension defined in [A. Gupta, et al, Bounded geometries, fractals, and low-distortion embeddings, in: 44th Annu. IEEE Symp. on Foundations of Computer Science, 2003, pp. 534-543] (a la [P. Assouad, Plongements lipschitziens dans Rn, Bull. Soc. Math. France 111 (4) (1983) 429-448]), we show that a metric space X admits an efficient (1+e)-ANN scheme for any e < 2/5 if and only if dim(X) = O(log log n).
For coarser approximations, clearly the upper bound continues to hold, but there is a threshold at which our lower bound breaks down--this is precisely when points in the \"ambient space\" may begin to affect the complexity of \"hard\" subspaces S ⊆ X. Indeed, we give examples which show that dim(X) does not characterize the black-box complexity of ANN above the threshold.Our scheme for ANN in low-dimensional metric spaces is the first to yield efficient algorithms without relying on any additional assumptions on the input. In previous approaches (e.g., [K.L. Clarkson, Nearest neighbor queries in metric spaces, Discrete Comput. Geom. 22(1) (1999) 63-93; D. Karger, M. Ruhl, Finding nearest neighbors in growth-restricted metrics, in: 34th Annu. ACM Symp. on the Theory of Computing, 2002, pp. 63-66; R. Krauthgamer, J.R. Lee, Navigating nets: simple algorithms for proximity search, in: 15th Annu. ACM-SIAM Symp. on Discrete Algorithms, 2004, pp. 791-801; K. Hildrum, et al, A note on finding nearest neighbors in growth-restricted metrics, in: Proc. of the 15th Annu. ACM-SIAM Symp. on Discrete Algorithms, 2004, pp. 560-561]), even spaces with dim(X) = O(1) sometimes required Ω(n) query times.", "The doubling dimension of a metric is the smallest k such that any ball of radius 2r can be covered using 2k balls of radius r. This concept for abstract metrics has been proposed as a natural analog to the dimension of a Euclidean space. If we could embed metrics with low doubling dimension into low dimensional Euclidean spaces, they would inherit several algorithmic and structural properties of the Euclidean spaces. Unfortunately however, such a restriction on dimension does not suffice to guarantee embeddibility in a normed space.In this paper we explore the option of bypassing the embedding. 
In particular we show the following for low dimensional metrics: Quasi-polynomial time (1+e)-approximation algorithm for various optimization problems such as TSP, k-median and facility location. (1+e)-approximate distance labeling scheme with optimal label length. (1+e)-stretch polylogarithmic storage routing scheme.", "The Traveling Salesman Problem (TSP) is among the most famous NP-hard optimization problems. We design for this problem a randomized polynomial-time algorithm that computes a (1+µ)-approximation to the optimal tour, for any fixed µ>0, in TSP instances that form an arbitrary metric space with bounded intrinsic dimension. The celebrated results of Arora [Aro98] and Mitchell [Mit99] prove that the above result holds in the special case of TSP in a fixed-dimensional Euclidean space. Thus, our algorithm demonstrates that the algorithmic tractability of metric TSP depends on the dimensionality of the space and not on its specific geometry. This result resolves a problem that has been open since the quasi-polynomial time algorithm of Talwar [Tal04].", "Most research on nearest neighbor algorithms in the literature has been focused on the Euclidean case. In many practical search problems however, the underlying metric is non-Euclidean. Nearest neighbor algorithms for general metric spaces are quite weak, which motivates a search for other classes of metric spaces that can be tractably searched.In this paper, we develop an efficient dynamic data structure for nearest neighbor queries in growth-constrained metrics. These metrics satisfy the property that for any point q and number r the ratio between numbers of points in balls of radius 2r and r is bounded by a constant. 
Spaces of this kind may occur in networking applications, such as the Internet or Peer-to-peer networks, and vector quantization applications, where feature vectors fall into low-dimensional manifolds within high-dimensional vector spaces.", "In the online minimum-cost metric matching problem, we are given an instance of a metric space with k servers, and must match arriving requests to as-yet-unmatched servers to minimize the total distance from the requests to their assigned servers. We study this problem for the line metric and for doubling metrics in general. We give O(log k)-competitive randomized algorithms, which reduces the gap between the current O(log² k)-competitive randomized algorithms and the constant-competitive lower bounds known for these settings. We first analyze the \"harmonic\" algorithm for the line, that for each request chooses one of its two closest servers with probability inversely proportional to the distance to that server; this is O(log k)-competitive, with suitable guess-and-double steps to ensure that the metric has aspect ratio polynomial in k. The second algorithm embeds the metric into a random HST, and picks a server randomly from among the closest available servers in the HST, with the selection based upon how the servers are distributed within the tree. This algorithm is O(1)-competitive for HSTs obtained from embedding doubling metrics, and hence gives a randomized O(log k)-competitive algorithm for doubling metrics.", "A t-spanner is a graph on a set of points S with the following property: Between any pair of points there is a path in the spanner whose total length is at most t times the actual distance between the points. In this paper, we consider points residing in a metric space equipped with doubling dimension λ, and show how to construct a dynamic (1 + ε)-spanner with degree ε^{-O(λ)} in @math update time.
When λ and ε are taken as constants, the degree and update times are optimal.", "We present a new data structure that facilitates approximate nearest neighbor searches on a dynamic set of points in a metric space that has a bounded doubling dimension. Our data structure has linear size and supports insertions and deletions in O(log n) time, and finds a (1+e)-approximate nearest neighbor in time O(log n) + (1/e)^O(1). The search and update times hide multiplicative factors that depend on the doubling dimension; the space does not." ] }
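Several of the abstracts above build on the same primitive: a greedy r-net (centers pairwise more than r apart, with every point within r of some center), from which hierarchical nets and doubling-dimension estimates are derived. A minimal illustrative sketch in Python — the quadratic-time scan and the crude doubling-dimension proxy are simplifications for exposition, not the near-linear constructions described in these papers:

```python
import math

def greedy_r_net(points, r, dist):
    """Greedily pick net centers: every point ends up within r of some
    center, and centers are pairwise more than r apart."""
    centers = []
    for p in points:
        if all(dist(p, c) > r for c in centers):
            centers.append(p)
    return centers

def doubling_estimate(points, dist):
    """Crude empirical proxy for the doubling dimension: log2 of the
    maximum number of (r/2)-net points found inside any radius-r ball.
    The radii probed here are arbitrary illustration scales."""
    best = 1
    for r in (4.0, 2.0, 1.0, 0.5):
        net = greedy_r_net(points, r / 2, dist)
        for c in points:
            k = sum(1 for q in net if dist(c, q) <= r)
            best = max(best, k)
    return math.log2(best) if best > 1 else 0.0
```

For points 0..4 on the line with r = 1.5, the greedy scan keeps 0, 2 and 4 as centers, matching the covering/packing intuition used throughout the abstracts above.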
1703.09390
2605330842
Policy analysts wish to visualize a range of policies for large simulator-defined Markov Decision Processes (MDPs). One visualization approach is to invoke the simulator to generate on-policy trajectories and then visualize those trajectories. When the simulator is expensive, this is not practical, and some method is required for generating trajectories for new policies without invoking the simulator. The method of Model-Free Monte Carlo (MFMC) can do this by stitching together state transitions for a new policy based on previously-sampled trajectories from other policies. This "off-policy Monte Carlo simulation" method works well when the state space has low dimension but fails as the dimension grows. This paper describes a method for factoring out some of the state and action variables so that MFMC can work in high-dimensional MDPs. The new method, MFMCi, is evaluated on a very challenging wildfire management MDP.
Instead of pursuing these two approaches, we adopted the method of Model-Free Monte Carlo (MFMC). In MFMC, the model is replaced by a database of transitions computed from the slow simulator. MFMC is ``model-free'' in the sense that it does not learn an explicit model of the transition probabilities. In effect, the database constitutes the transition model (cf. Dyna; @cite_2 ).
{ "cite_N": [ "@cite_2" ], "mid": [ "1491843047" ], "abstract": [ "This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper, I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments." ] }
1703.09096
2964333211
In this work we generalize the Jacobi-Davidson method to the case when the eigenvector can be reshaped into a low-rank matrix. In this setting the proposed method inherits the advantages of the original Jacobi-Davidson method, has lower complexity and requires less storage. We also introduce a low-rank version of the Rayleigh quotient iteration which naturally arises in the Jacobi-Davidson method.
There are two standard ways to solve eigenvalue problems in low-rank format: optimization of the Rayleigh quotient by alternating minimization, which accounts for the multilinear structure of the decomposition, and iterative methods with rank truncation. The first approach has been developed for a long time in the matrix product state community @cite_21 @cite_22 @cite_7 . We should also mention alternating minimization algorithms that were recently proposed in the mathematical community. They are based either on the alternating linear scheme (ALS) procedure @cite_4 @cite_2 or on basis enrichment using the alternating minimal energy method (AMEn) @cite_14 @cite_1 . Rank-truncated iterative methods include the power method @cite_26 @cite_18 , inverse iteration @cite_16 , and the locally optimal block preconditioned conjugate gradient method @cite_0 @cite_3 @cite_24 . For more information about eigensolvers in low-rank formats see @cite_17 . To our knowledge no generalization of the Jacobi-Davidson method has been considered.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_22", "@cite_7", "@cite_26", "@cite_21", "@cite_1", "@cite_3", "@cite_0", "@cite_24", "@cite_2", "@cite_16", "@cite_17" ], "mid": [ "2073469810", "2008657736", "2031216664", "2037768897", "2238996074", "1980428773", "2122816100", "1916761266", "2073599217", "1967402714", "1531635645", "2063125621", "2105812912", "1967077133" ], "abstract": [ "Nearly every numerical analysis algorithm has computational complexity that scales exponentially in the underlying physical dimension. The separated representation, introduced previously, allows many operations to be performed with scaling that is formally linear in the dimension. In this paper we further develop this representation by (i) discussing the variety of mechanisms that allow it to be surprisingly efficient; (ii) addressing the issue of conditioning; (iii) presenting algorithms for solving linear systems within this framework; and (iv) demonstrating methods for dealing with antisymmetric functions, as arise in the multiparticle Schrodinger equation in quantum mechanics. Numerical examples are given.", "We consider the solution of large-scale symmetric eigenvalue problems for which it is known that the eigenvectors admit a low-rank tensor approximation. Such problems arise, for example, from the discretization of high-dimensional elliptic PDE eigenvalue problems or in strongly correlated spin systems. Our methods are built on imposing low-rank (block) tensor train (TT) structure on the trace minimization characterization of the eigenvalues. The common approach of alternating optimization is combined with an enrichment of the TT cores by (preconditioned) gradients, as recently proposed by Dolgov and Savostyanov for linear systems. This can equivalently be viewed as a subspace correction technique. 
Several numerical experiments demonstrate the performance gains from using this technique.", "Recent achievements in the field of tensor product approximation provide promising new formats for the representation of tensors in form of tree tensor networks. In contrast to the canonical @math -term representation (CANDECOMP, PARAFAC), these new formats provide stable representations, while the amount of required data is only slightly larger. The tensor train (TT) format [SIAM J. Sci. Comput., 33 (2011), pp. 2295-2317], a simple special case of the hierarchical Tucker format [J. Fourier Anal. Appl., 5 (2009), p. 706], is a useful prototype for practical low-rank tensor representation. In this article, we show how optimization tasks can be treated in the TT format by a generalization of the well-known alternating least squares (ALS) algorithm and by a modified approach (MALS) that enables dynamical rank adaptation. A formulation of the component equations in terms of so-called retraction operators helps to show that many structural properties of the original problems transfer to the micro-iterations, giving what is to our knowledge the first stable generic algorithm for the treatment of optimization tasks in the tensor format. For the examples of linear equations and eigenvalue equations, we derive concrete working equations for the micro-iteration steps; numerical examples confirm the theoretical results concerning the stability of the TT decomposition and of ALS and MALS but also show that in some cases, high TT ranks are required during the iterative approximation of low-rank tensors, showing some potential of improvement.", "A generalization of the numerical renormalization-group procedure used first by Wilson for the Kondo problem is presented. It is shown that this formulation is optimal in a certain sense. 
As a demonstration of the effectiveness of this approach, results from numerical real-space renormalization-group calculations for Heisenberg chains are presented.", "The density matrix renormalization group discovered by White is investigated. In the case where renormalization eventually converges to a fixed point we show that quantum states in the thermodynamic limit with periodic boundary conditions can be simply represented by a matrix product ground state'' with a natural description of Bloch states of elementary excitations. We then observe that these states can be rederived through a simple variational ansatz making no reference to a renormalization construction. The method is tested on the spin-1 Heisenberg model.", "When an algorithm in dimension one is extended to dimension d, in nearly every case its computational cost is taken to the power d. This fundamental difficulty is the single greatest impediment to solving many important problems and has been dubbed the curse of dimensionality. For numerical analysis in dimension d, we propose to use a representation for vectors and matrices that generalizes separation of variables while allowing controlled accuracy. Basic linear algebra operations can be performed in this representation using one-dimensional operations, thus bypassing the exponential scaling with respect to the dimension. Although not all operators and algorithms may be compatible with this representation, we believe that many of the most important ones are. We prove that the multiparticle Schrodinger operator, as well as the inverse Laplacian, can be represented very efficiently in this form. We give numerical evidence to support the conjecture that eigenfunctions inherit this property by computing the ground-state eigenfunction for a simplified Schrodinger operator with 30 particles. 
We conjecture and provide numerical evidence that functions of operators inherit this property, in which case numerical operator calculus in higher dimensions becomes feasible.", "Abstract The density-matrix renormalization group method (DMRG) has established itself over the last decade as the leading method for the simulation of the statics and dynamics of one-dimensional strongly correlated quantum lattice systems. In the further development of the method, the realization that DMRG operates on a highly interesting class of quantum states, so-called matrix product states (MPS), has allowed a much deeper understanding of the inner structure of the DMRG method, its further potential and its limitations. In this paper, I want to give a detailed exposition of current DMRG thinking in the MPS language in order to make the advisable implementation of the family of DMRG algorithms in exclusively MPS terms transparent. I then move on to discuss some directions of potentially fruitful further algorithmic development: while DMRG is a very mature method by now, I still see potential for further improvements, as exemplified by a number of recently introduced algorithms.", "Given in the title are two algorithms to compute the extreme eigenstate of a high-dimensional Hermitian matrix using the tensor train (TT) matrix product states (MPS) representation. Both methods empower the traditional alternating direction scheme with the auxiliary (e.g. gradient) information, which substantially improves the convergence in many difficult cases. Being conceptually close, these methods have different derivation, implementation, theoretical and practical properties. We emphasize the differences, and reproduce the numerical example to compare the performance of two algorithms.", "A method for solving a partial algebraic eigenvalues problem is constructed. It exploits tensor structure of eigenvectors in two-dimensional case. 
For a symmetric matrix represented in tensor format, the method finds low-rank approximations to the eigenvectors corresponding to the smallest eigenvalues. For sparse matrices, execution time and required memory for the proposed method are proportional to the square root of the overall number of unknowns, whereas this dependence is usually linear. To maintain the tensor structure of vectors at each iteration step, low-rank approximations are performed, which introduces errors into the original method. Nevertheless, the new method was proved to converge. Convergence rate estimates are obtained for various tensor modifications of the abstract one-step method. It is shown how the convergence of a multistep method can be derived from the convergence of the corresponding one-step method. Several modifications of the method with low-rank approximation techniques were implemented on the basis of the block conjugate gradient method. Their performance is compared on numerical examples.", "We consider elliptic PDE eigenvalue problems on a tensorized domain, discretized such that the resulting matrix eigenvalue problem Ax = λx exhibits Kronecker product structure. In particular, we are concerned with the case of high dimensions, where standard approaches to the solution of matrix eigenvalue problems fail due to the exponentially growing degrees of freedom. Recent work shows that this curse of dimensionality can in many cases be addressed by approximating the desired solution vector x in a low-rank tensor format. In this paper, we use the hierarchical Tucker decomposition to develop a low-rank variant of LOBPCG, a classical preconditioned eigenvalue solver. We also show how the ALS and MALS (DMRG) methods known from computational quantum physics can be adapted to the hierarchical Tucker decomposition. Finally, a combination of ALS and MALS with LOBPCG and with our low-rank variant is proposed.
A number of numerical experiments indicate that such combinations represent the methods of choice.", "", "We consider approximate computation of several minimal eigenpairs of large Hermitian matrices which come from high-dimensional problems. We use the tensor train (TT) format for vectors and matrices to overcome the curse of dimensionality and make storage and computational cost feasible. We approximate several low-lying eigenvectors simultaneously in the block version of the TT format. The computation is done by the alternating minimization of the block Rayleigh quotient sequentially for all TT cores. The proposed method combines the advances of the density matrix renormalization group (DMRG) and the variational numerical renormalization group (vNRG) methods. We compare the performance of the proposed method with several versions of the DMRG codes, and show that it may be preferable for systems with large dimension and or mode size, or when a large number of eigenstates is sought.", "SUMMARY We investigate approximations to eigenfunctions of a certain class of elliptic operators in Rd by finite sums of products of functions with separated variables and especially conditions providing an exponential decrease of the error with respect to the number of terms. The consistent use of tensor formats can be regarded as a base for a new class of rank-truncated iterative eigensolvers. The computational cost is almost linear in the univariate problem size n, while traditional method scale like nd. Tensor methods can be applied to solving large-scale spectral problems in computational quantum chemistry, for example, the Schrodinger, Hartree–Fock and Kohn–Sham equations in electronic structure calculations. The results of numerical experiments clearly indicate the linear-logarithmic scaling of the low-rank tensor method in n. The algorithms work equally well for the computation of both minimal and maximal eigenvalues of the discrete elliptic operators. 
Copyright © 2011 John Wiley & Sons, Ltd.", "During the last years, low-rank tensor approximation has been established as a new tool in scientific computing to address large-scale linear and multilinear algebra problems, which would be intractable by classical techniques. This survey attempts to give a literature overview of current developments in this area, with an emphasis on function-related tensors. (© 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)" ] }
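The simplest member of the "iterative methods with rank truncation" family surveyed in this record is a power iteration whose iterate is reshaped into a matrix and compressed back to a fixed rank by truncated SVD after every step. A toy dense-NumPy sketch — a real low-rank eigensolver never forms the full length-n² vector; this is for illustration only:

```python
import numpy as np

def truncated_power_method(A, n, rank, iters=200):
    """Largest-eigenpair power iteration with rank truncation: the
    length n*n iterate is reshaped to an n-by-n matrix and compressed
    to `rank` by a truncated SVD after each matrix-vector product."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(n * n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x
        # rank truncation: project the iterate back onto rank-r matrices
        U, s, Vt = np.linalg.svd(y.reshape(n, n), full_matrices=False)
        y = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        x = y.ravel() / np.linalg.norm(y)
    return x @ (A @ x), x  # Rayleigh quotient and eigenvector
```

When the dominant eigenvector is exactly rank-1 after reshaping (e.g. a Kronecker product of two vectors), the truncation is lossless at the fixed point and the iteration converges as the plain power method would.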
1703.09096
2964333211
In this work we generalize the Jacobi-Davidson method to the case when the eigenvector can be reshaped into a low-rank matrix. In this setting the proposed method inherits the advantages of the original Jacobi-Davidson method, has lower complexity and requires less storage. We also introduce a low-rank version of the Rayleigh quotient iteration which naturally arises in the Jacobi-Davidson method.
In @cite_27 the authors consider an inexact Riemannian Newton method for solving linear systems with a low-rank solution. They also omit the curvature part of the Hessian and exploit the specific structure of the operator to construct a preconditioner.
{ "cite_N": [ "@cite_27" ], "mid": [ "1940947342" ], "abstract": [ "The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. When using standard discretization techniques, the size of the linear system grows exponentially with the number of dimensions, making the use of classic iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that allow one to mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method as well as an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorit..." ] }
1703.09096
2964333211
In this work we generalize the Jacobi-Davidson method to the case when the eigenvector can be reshaped into a low-rank matrix. In this setting the proposed method inherits the advantages of the original Jacobi-Davidson method, has lower complexity and requires less storage. We also introduce a low-rank version of the Rayleigh quotient iteration which naturally arises in the Jacobi-Davidson method.
In @cite_13 the authors proposed a version of inverse iteration based on the alternating linear scheme (ALS) procedure, which is similar to . By contrast, the present work considers inverse iteration on the whole tangent space. We also provide an interpretation of the method as an inexact Newton method.
{ "cite_N": [ "@cite_13" ], "mid": [ "2404706163" ], "abstract": [ "We propose a new algorithm for calculation of vibrational spectra of molecules using tensor train decomposition. Under the assumption that eigenfunctions lie on a low-parametric manifold of low-rank tensors we suggest using well-known iterative methods that utilize matrix inversion (locally optimal block preconditioned conjugate gradient method, inverse iteration) and solve corresponding linear systems inexactly along this manifold. As an application, we accurately compute vibrational spectra (84 states) of acetonitrile molecule CH3CN on a laptop in one hour using only 100 MB of memory to represent all computed eigenfunctions." ] }
1703.08866
2951620021
Visual scene understanding is an important capability that enables robots to purposefully act in their environment. In this paper, we propose a novel approach to object-class segmentation from multiple RGB-D views using deep learning. We train a deep neural network to predict object-class semantics that is consistent from several view points in a semi-supervised way. At test time, the semantics predictions of our network can be fused more consistently in semantic keyframe maps than predictions of a network trained on individual views. We base our network architecture on a recent single-view deep learning approach to RGB and depth fusion for semantic object-class segmentation and enhance it with multi-scale loss minimization. We obtain the camera trajectory using RGB-D SLAM and warp the predictions of RGB-D images into ground-truth annotated frames in order to enforce multi-view consistency during training. At test time, predictions from multiple views are fused into keyframes. We propose and analyze several methods for enforcing multi-view consistency during training and testing. We evaluate the benefit of multi-view consistency training and demonstrate that pooling of deep features and fusion over multiple views outperforms single-view baselines on the NYUDv2 benchmark for semantic segmentation. Our end-to-end trained network achieves state-of-the-art performance on the NYUDv2 dataset in single-view segmentation as well as multi-view semantic fusion.
Semantic SLAM. In the domain of semantic SLAM, Salas-Moreno et al. @cite_4 developed the SLAM++ algorithm to perform RGB-D tracking and mapping at the object instance level. Hermans et al. @cite_7 proposed 3D semantic mapping for indoor RGB-D sequences based on RGB-D visual odometry and a random forest classifier that performs semantic image segmentation. The individual frame segmentations are projected into 3D and smoothed using a CRF on the point cloud. Stückler et al. @cite_1 perform RGB-D SLAM and probabilistically fuse the semantic segmentations of individual frames obtained with a random forest in multi-resolution voxel maps. Recently, Armeni et al. @cite_16 propose a hierarchical parsing method for large-scale 3D point clouds of indoor environments. They first separate point clouds into disjoint spaces, i.e. single rooms, and then further cluster points at the object level according to handcrafted features.
{ "cite_N": [ "@cite_1", "@cite_16", "@cite_4", "@cite_7" ], "mid": [ "2056610823", "2460657278", "2097696373", "2033979122" ], "abstract": [ "We propose a real-time approach to learn semantic maps from moving RGB-D cameras. Our method models geometry, appearance, and semantic labeling of surfaces. We recover camera pose using simultaneous localization and mapping while concurrently recognizing and segmenting object classes in the images. Our object-class segmentation approach is based on random decision forests and yields a dense probabilistic labeling of each image. We implemented it on GPU to achieve a high frame rate. The probabilistic segmentation is fused in octree-based 3D maps within a Bayesian framework. In this way, image segmentations from various view points are integrated within a 3D map which improves segmentation quality. We evaluate our system on a large benchmark dataset and demonstrate state-of-the-art recognition performance of our object-class segmentation and semantic mapping approaches.", "In this paper, we propose a method for semantic parsing the 3D point cloud of an entire building using a hierarchical approach: first, the raw data is parsed into semantically meaningful spaces (e.g. rooms, etc) that are aligned into a canonical reference coordinate system. Second, the spaces are parsed into their structural and building elements (e.g. walls, columns, etc). Performing these with a strong notation of global 3D space is the backbone of our method. The alignment in the first step injects strong 3D priors from the canonical coordinate system into the second step for discovering elements. This allows diverse challenging scenarios as man-made indoor spaces often show recurrent geometric patterns while the appearance features can change drastically. We also argue that identification of structural elements in indoor spaces is essentially a detection problem, rather than segmentation which is commonly used. 
We evaluated our method on a new dataset of several buildings with a covered area of over 6,000 m² and over 215 million points, demonstrating robust results readily useful for practical applications.", "We present the major advantages of a new 'object oriented' 3D SLAM paradigm, which takes full advantage in the loop of prior knowledge that many scenes consist of repeated, domain-specific objects and structures. As a hand-held depth camera browses a cluttered scene, real-time 3D object recognition and tracking provides 6DoF camera-object constraints which feed into an explicit graph of objects, continually refined by efficient pose-graph optimisation. This offers the descriptive and predictive power of SLAM systems which perform dense surface reconstruction, but with a huge representation compression. The object graph enables predictions for accurate ICP-based camera to model tracking at each live frame, and efficient active search for new objects in currently undescribed image regions. We demonstrate real-time incremental SLAM in large, cluttered environments, including loop closure, relocalisation and the detection of moved objects, and of course the generation of an object level scene description with the potential to enable interaction.", "Dense semantic segmentation of 3D point clouds is a challenging task. Many approaches deal with 2D semantic segmentation and can obtain impressive results. With the availability of cheap RGB-D sensors the field of indoor semantic segmentation has seen a lot of progress. Still it remains unclear how to deal with 3D semantic segmentation in the best way. We propose a novel 2D-3D label transfer based on Bayesian updates and dense pairwise 3D Conditional Random Fields. This approach allows us to use 2D semantic segmentations to create a consistent 3D semantic reconstruction of indoor scenes. To this end, we also propose a fast 2D semantic segmentation approach based on Randomized Decision Forests.
Furthermore, we show that it is not needed to obtain a semantic segmentation for every frame in a sequence in order to create accurate semantic 3D reconstructions. We evaluate our approach on both NYU Depth datasets and show that we can obtain a significant speed-up compared to other methods." ] }
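The probabilistic fusion of per-frame segmentations described in this record (Bayesian updates of per-voxel class distributions) reduces, per voxel, to multiplying the per-view class posteriors and renormalizing. A minimal sketch, assuming independent views and a uniform class prior — a simplification of the CRF-smoothed pipelines cited here:

```python
def fuse_label_probs(per_view_probs):
    """Bayesian fusion of per-view class posteriors for one voxel:
    multiply the (assumed independent) per-view likelihoods and
    renormalize. Expects a non-empty list of probability vectors."""
    fused = None
    for probs in per_view_probs:
        if fused is None:
            fused = list(probs)
        else:
            fused = [f * p for f, p in zip(fused, probs)]
    z = sum(fused)
    return [f / z for f in fused]
```

Two agreeing views sharpen the distribution (e.g. two views at [0.6, 0.4] fuse to roughly [0.69, 0.31]), which is why multi-view fusion tends to outperform single-view predictions before any CRF smoothing is applied.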
1703.08770
2951720371
Chest X-ray (CXR) is one of the most commonly prescribed medical imaging procedures, often with over 2-10x more scans than other imaging modalities such as MRI, CT scan, and PET scans. These voluminous CXR scans place significant workloads on radiologists and medical practitioners. Organ segmentation is a crucial step to obtain effective computer-aided detection on CXR. In this work, we propose Structure Correcting Adversarial Network (SCAN) to segment lung fields and the heart in CXR images. SCAN incorporates a critic network to impose on the convolutional segmentation network the structural regularities emerging from human physiology. During training, the critic network learns to discriminate between the ground truth organ annotations from the masks synthesized by the segmentation network. Through this adversarial process the critic network learns the higher order structures and guides the segmentation model to achieve realistic segmentation outcomes. Extensive experiments show that our method produces highly accurate and natural segmentation. Using only very limited training data available, our model reaches human-level performance without relying on any existing trained model or dataset. Our method also generalizes well to CXR images from a different patient population and disease profiles, surpassing the current state-of-the-art.
Lung Field Segmentation. Existing work on lung field segmentation broadly falls into three categories @cite_35 . (1) Rule-based systems apply a pre-defined set of thresholding and morphological operations derived from heuristics @cite_20 . (2) Pixel classification methods classify the pixels as inside or outside of the lung fields based on pixel intensities @cite_4 @cite_17 @cite_33 @cite_11 . (3) More recent methods are based on deformable models such as the Active Shape Model (ASM) and Active Appearance Model @cite_6 @cite_22 @cite_30 @cite_2 @cite_18 @cite_19 @cite_14 @cite_16 . Their performance can be highly variable due to the tuning parameters and whether the shape model is initialized close to the actual boundaries. Also, the high contrast between the rib cage and lung fields can cause the model to be trapped in local minima. Our approach uses convolutional networks to perform end-to-end training from images to pixel masks without using ad hoc features. The proposed adversarial training further incorporates prior structural knowledge in a unified framework.
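Category (1) above, heuristic thresholding followed by morphological clean-up, can be illustrated on a toy intensity grid. This is a pure-Python sketch under simplifying assumptions (a fixed threshold, a 3x3 cross structuring element); real systems operate on full radiographs with tuned parameters:

```python
def threshold(img, t):
    """Lung fields are dark on a radiograph: keep pixels below t."""
    return [[1 if v < t else 0 for v in row] for row in img]

def _neighborhood_op(mask, combine):
    """Apply all/any over a 3x3 cross neighborhood (erosion/dilation)."""
    h, w = len(mask), len(mask[0])
    def get(r, c):
        return mask[r][c] if 0 <= r < h and 0 <= c < w else 0
    offsets = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    return [[1 if combine(get(r + dr, c + dc) for dr, dc in offsets) else 0
             for c in range(w)] for r in range(h)]

def erode(mask):
    return _neighborhood_op(mask, all)

def dilate(mask):
    return _neighborhood_op(mask, any)

def rule_based_segment(img, t=100):
    """Threshold, then a morphological opening (erode + dilate) removes
    isolated speckle while keeping the main dark region."""
    return dilate(erode(threshold(img, t)))

# Toy 5x5 'radiograph': a dark 3x3 blob plus one isolated dark noise pixel.
img = [[200] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        img[r][c] = 50
img[0][4] = 30
seg = rule_based_segment(img)
```

The opening keeps the interior of the dark blob but discards the isolated pixel, which is exactly the brittleness/strength trade-off of rule-based systems: effective when the heuristics match the image, with no learned component to fall back on when they do not.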
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_11", "@cite_4", "@cite_33", "@cite_22", "@cite_14", "@cite_6", "@cite_19", "@cite_2", "@cite_16", "@cite_20", "@cite_17" ], "mid": [ "2153810084", "2104775919", "2141453390", "2120621413", "2164883865", "2131706783", "2152826865", "2150040483", "", "", "2115597079", "", "2030928387", "" ], "abstract": [ "An active shape model segmentation scheme is presented that is steered by optimal local features, contrary to normalized first order derivative profiles, as in the original formulation [Cootes and Taylor, 1995, 1999, and 2001]. A nonlinear kNN-classifier is used, instead of the linear Mahalanobis distance, to find optimal displacements for landmarks. For each of the landmarks that describe the shape, at each resolution level taken into account during the segmentation optimization procedure, a distinct set of optimal features is determined. The selection of features is automatic, using the training images and sequential feature forward and backward selection. The new approach is tested on synthetic data and in four medical segmentation tasks: segmenting the right and left lung fields in a database of 230 chest radiographs, and segmenting the cerebellum and corpus callosum in a database of 90 slices from MRI brain images. In all cases, the new method produces significantly better results in terms of an overlap error measure (p<0.001 using a paired T-test) than the original active shape model scheme.", "The traditional chest radiograph is still ubiquitous in clinical practice, and will likely remain so for quite some time. Yet, its interpretation is notoriously difficult. This explains the continued interest in computer-aided diagnosis for chest radiography. The purpose of this survey is to categorize and briefly review the literature on computer analysis of chest images, which comprises over 150 papers published in the last 30 years. 
Remaining challenges are indicated and some directions for future research are given.", "This paper presents a new deformable model using both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. There are two novelties in the proposed deformable model. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than the general intensity and gradient features, is used to characterize the image features in the vicinity of each pixel. Second, the deformable contour is constrained by both population-based and patient-specific shape statistics, and it yields more robust and accurate segmentation of lung fields for serial chest radiographs. In particular, for segmenting the initial time-point images, the population-based shape statistics is used to constrain the deformable contour; as more subsequent images of the same patient are acquired, the patient-specific shape statistics online collected from the previous segmentation results gradually takes more roles. Thus, this patient-specific shape statistics is updated each time when a new segmentation result is obtained, and it is further used to refine the segmentation results of all the available time-point images. Experimental results show that the proposed method is more robust and accurate than other active shape models in segmenting the lung fields from serial chest radiographs.", "In this work, a level set energy for segmenting the lungs from digital Posterior-Anterior (PA) chest x-ray images is presented. The primary challenge in using active contours for lung segmentation is local minima due to shading effects and presence of strong edges due to the rib cage and clavicle. We have used the availability of good contrast at the lung boundaries to extract a multi-scale set of edge corner feature points and drive our active contour model using these features. 
We found these features, when supplemented with a simple region-based data term and a shape term based on the average lung shape, able to handle the above local minima issues. The algorithm was tested on 1130 clinical images, giving promising results.", "An algorithm for detection of posterior rib borders in chest radiographs is presented. The algorithm first determines the thoracic cage boundary to restrict the area of search for the ribs. It then finds approximate rib borders using a knowledge-based Hough transform. Finally, the algorithm localizes the rib borders using an active contour model. Results of the proposed rib finding algorithm on 10 chest radiographs are presented.", "The task of segmenting the posterior ribs within the lung fields of standard posteroanterior chest radiographs is considered. To this end, an iterative, pixel-based, supervised, statistical classification method is used, which is called iterated contextual pixel classification (ICPC). Starting from an initial rib segmentation obtained from pixel classification, ICPC updates it by reclassifying every pixel, based on the original features and, additionally, class label information of pixels in the neighborhood of the pixel to be reclassified. The method is evaluated on 30 radiographs taken from the JSRT (Japanese Society of Radiological Technology) database. All posterior ribs within the lung fields in these images have been traced manually by two observers. The first observer's segmentations are set as the gold standard; ICPC is trained using these segmentations. In a sixfold cross-validation experiment, ICPC achieves a classification accuracy of 0.86 ± 0.06, as compared to 0.94 ± 0.02 for the second human observer.", "We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set.
We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors.", "A new generic model-based segmentation algorithm is presented, which can be trained from examples akin to the active shape model (ASM) approach in order to acquire knowledge about the shape to be segmented and about the gray-level appearance of the object in the image. Whereas ASM alternates between shape and intensity information during search, the proposed approach optimizes for shape and intensity characteristics simultaneously. Local gray-level appearance information at the landmark points extracted from feature images is used to automatically detect a number of plausible candidate locations for each landmark. The shape information is described by multiple landmark-specific statistical models that capture local dependencies between adjacent landmarks on the shape. The shape and intensity models are combined in a single cost function that is optimized noniteratively using dynamic programming, without the need for initialization. The algorithm was validated for segmentation of anatomical structures in chest and hand radiographs. In each experiment, the presented method had a significant higher performance when compared to the ASM schemes. As the method is highly effective, optimally suited for pathological cases and easy to implement, it is highly useful for many medical image segmentation tasks.", "", "", "A fully automatic method is presented to detect abnormalities in frontal chest radiographs which are aggregated into an overall abnormality score. The method is aimed at finding abnormal signs of a diffuse textural nature, such as they are encountered in mass chest screening against tuberculosis (TB). The scheme starts with automatic segmentation of the lung fields, using active shape models. The segmentation is used to subdivide the lung fields into overlapping regions of various sizes. 
Texture features are extracted from each region, using the moments of responses to a multiscale filter bank. Additional \"difference features\" are obtained by subtracting feature vectors from corresponding regions in the left and right lung fields. A separate training set is constructed for each region. All regions are classified by voting among the k nearest neighbors, with leave-one-out. Next, the classification results of each region are combined, using a weighted multiplier in which regions with higher classification reliability weigh more heavily. This produces an abnormality score for each image. The method is evaluated on two databases. The first database was collected from a TB mass chest screening program, from which 147 images with textural abnormalities and 241 normal images were selected. Although this database contains many subtle abnormalities, the classification has a sensitivity of 0.86 at a specificity of 0.50 and an area under the receiver operating characteristic (ROC) curve of 0.820. The second database consists of 100 normal images and 100 abnormal images with interstitial disease. For this database, the results were a sensitivity of 0.97 at a specificity of 0.90 and an area under the ROC curve of 0.986.", "", "Abstract Rationale and Objectives The authors performed this study to evaluate an algorithm developed to help identify lungs on chest radiographs. Materials and Methods Forty clinical posteroanterior chest radiographs obtained in adult patients were digitized to 12-bit gray-scale resolution. In the proposed algorithm, the authors simplified the current approach of edge detection with derivatives by using only the first derivative of the horizontal and/or vertical image profiles. In addition to the derivative method, pattern classification and image feature analysis were used to determine the region of interest and lung boundaries.
Instead of using the traditional curve-fitting method to delineate the lung, the authors applied an iterative contour-smoothing algorithm to each of the four detected boundary segments (costal, mediastinal, lung apex, and hemidiaphragm edges) to form a smooth lung boundary. Results The algorithm had an average accuracy of 96.0% for the right lung and 95.2% for the left lung and was especially useful in the delineation of hemidiaphragm edges. In addition, it took about 0.775 seconds per image to identify the lung boundaries, which is much faster than that of other algorithms noted in the literature. Conclusion The computer-generated segmentation results can be used directly in the detection and compensation of rib structures and in lung nodule detection.", "" ] }
1703.08770
2951720371
Chest X-ray (CXR) is one of the most commonly prescribed medical imaging procedures, often with over 2-10x more scans than other imaging modalities such as MRI, CT scan, and PET scans. These voluminous CXR scans place significant workloads on radiologists and medical practitioners. Organ segmentation is a crucial step to obtain effective computer-aided detection on CXR. In this work, we propose Structure Correcting Adversarial Network (SCAN) to segment lung fields and the heart in CXR images. SCAN incorporates a critic network to impose on the convolutional segmentation network the structural regularities emerging from human physiology. During training, the critic network learns to discriminate between the ground truth organ annotations from the masks synthesized by the segmentation network. Through this adversarial process the critic network learns the higher order structures and guides the segmentation model to achieve realistic segmentation outcomes. Extensive experiments show that our method produces highly accurate and natural segmentation. Using only very limited training data available, our model reaches human-level performance without relying on any existing trained model or dataset. Our method also generalizes well to CXR images from a different patient population and disease profiles, surpassing the current state-of-the-art.
The current state-of-the-art method for lung field segmentation uses a registration-based approach @cite_12 . To build a lung model for a test patient, @cite_12 finds the patients in an existing database that are most similar to the test patient and performs linear deformation of their lung profiles based on key point matching. This approach relies on the test patients being well modeled by the existing lung profiles and on correctly matched key points, both of which can be brittle on a different population.
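The retrieval stage of this registration-based pipeline ranks training patients by shape similarity to the test patient; @cite_12 uses a Bhattacharyya shape similarity measure over partial Radon transforms of the CXR. The sketch below illustrates only the similarity-and-ranking idea on toy 1-D profiles (the profile values and function names are illustrative, not the paper's code):

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two profiles after normalization
    (1.0 means identical shape, 0.0 means no overlap)."""
    sp, sq = sum(p), sum(q)
    return sum(math.sqrt((a / sp) * (b / sq)) for a, b in zip(p, q))

def retrieve(test_profile, training_profiles, k=1):
    """Return indices of the k training profiles most similar to the test
    patient, by decreasing Bhattacharyya coefficient."""
    ranked = sorted(range(len(training_profiles)),
                    key=lambda i: bhattacharyya(test_profile,
                                                training_profiles[i]),
                    reverse=True)
    return ranked[:k]

test = [1, 4, 9, 4, 1]            # toy 1-D shape profile of the test CXR
train = [[1, 1, 1, 1, 1],         # flat profile: dissimilar
         [1, 4, 8, 4, 1],         # nearly identical shape
         [9, 4, 1, 4, 9]]         # inverted shape
```

The retrieved patients' lung masks are then deformed onto the test image (SIFT-flow registration in the paper), which is where the brittleness on unseen populations arises: if no training profile matches the test shape well, the ranking still returns a nearest neighbor, just not a useful one.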
{ "cite_N": [ "@cite_12" ], "mid": [ "1994062553" ], "abstract": [ "The National Library of Medicine (NLM) is developing a digital chest X-ray (CXR) screening system for deployment in resource constrained communities and developing countries worldwide with a focus on early detection of tuberculosis. A critical component in the computer-aided diagnosis of digital CXRs is the automatic detection of the lung regions. In this paper, we present a nonrigid registration-driven robust lung segmentation method using image retrieval-based patient specific adaptive lung models that detects lung boundaries, surpassing state-of-the-art performance. The method consists of three main stages: 1) a content-based image retrieval approach for identifying training images (with masks) most similar to the patient CXR using a partial Radon transform and Bhattacharyya shape similarity measure, 2) creating the initial patient-specific anatomical model of lung shape using SIFT-flow for deformable registration of training masks to the patient CXR, and 3) extracting refined lung boundaries using a graph cuts optimization approach with a customized energy function. Our average accuracy of 95.4% on the public JSRT database is the highest among published results. A similar degree of accuracy of 94.1% and 91.7% on two new CXR datasets from Montgomery County, MD, USA, and India, respectively, demonstrates the robustness of our lung segmentation approach." ] }
1703.08770
2951720371
Chest X-ray (CXR) is one of the most commonly prescribed medical imaging procedures, often with over 2-10x more scans than other imaging modalities such as MRI, CT scan, and PET scans. These voluminous CXR scans place significant workloads on radiologists and medical practitioners. Organ segmentation is a crucial step to obtain effective computer-aided detection on CXR. In this work, we propose Structure Correcting Adversarial Network (SCAN) to segment lung fields and the heart in CXR images. SCAN incorporates a critic network to impose on the convolutional segmentation network the structural regularities emerging from human physiology. During training, the critic network learns to discriminate between the ground truth organ annotations from the masks synthesized by the segmentation network. Through this adversarial process the critic network learns the higher order structures and guides the segmentation model to achieve realistic segmentation outcomes. Extensive experiments show that our method produces highly accurate and natural segmentation. Using only very limited training data available, our model reaches human-level performance without relying on any existing trained model or dataset. Our method also generalizes well to CXR images from a different patient population and disease profiles, surpassing the current state-of-the-art.
We note that there is a growing body of recent works that apply neural networks end-to-end on CXR images @cite_8 @cite_9 . These models directly output clinical targets such as disease labels without well-defined intermediate outputs to aid interpretability. Furthermore, they generally require a large number of CXR images for training, which is not readily available for many clinical tasks involving CXR images.
{ "cite_N": [ "@cite_9", "@cite_8" ], "mid": [ "2580967590", "2950489286" ], "abstract": [ "X-rays are commonly performed imaging tests that use small amounts of radiation to produce pictures of the organs, tissues, and bones of the body. X-rays of the chest are used to detect abnormalities or diseases of the airways, blood vessels, bones, heart, and lungs. In this work we present a stochastic attention-based model that is capable of learning what regions within a chest X-ray scan should be visually explored in order to conclude that the scan contains a specific radiological abnormality. The proposed model is a recurrent neural network (RNN) that learns to sequentially sample the entire X-ray and focus only on informative areas that are likely to contain the relevant information. We report on experiments carried out with more than @math X-rays containing enlarged hearts or medical devices. The model has been trained using reinforcement learning methods to learn task-specific policies.", "Despite the recent advances in automatically describing image contents, their applications have been mostly limited to image caption datasets containing natural images (e.g., Flickr 30k, MSCOCO). In this paper, we present a deep learning model to efficiently detect a disease from an image and annotate its contexts (e.g., location, severity and the affected organs). We employ a publicly available radiology dataset of chest x-rays and their reports, and use its image annotations to mine disease names to train convolutional neural networks (CNNs). In doing so, we adopt various regularization techniques to circumvent the large normal-vs-diseased cases bias. Recurrent neural networks (RNNs) are then trained to describe the contexts of a detected disease, based on the deep CNN features. 
Moreover, we introduce a novel approach to use the weights of the already trained pair of CNN/RNN on the domain-specific image/text dataset to infer the joint image/text contexts for composite image labeling. Significantly improved image annotation results are demonstrated using the recurrent neural cascade model by taking the joint image/text contexts into account." ] }
1703.08947
2953233897
Over the past decade, online social networks (OSNs) such as Twitter and Facebook have thrived and experienced rapid growth to over 1 billion users. A major evolution would be to leverage the characteristics of OSNs to evaluate the effectiveness of the many routing schemes developed by the research community in real-world scenarios. In this paper, we showcase the Secure Opportunistic Schemes (SOS) middleware which allows different routing schemes to be easily implemented relieving the burden of security and connection establishment. The feasibility of creating a delay tolerant social network is demonstrated by using SOS to power AlleyOop Social, a secure delay tolerant networking research platform that serves as a real-life mobile social networking application for iOS devices. SOS and AlleyOop Social allow users to interact, publish messages, and discover others that share common interests in an intermittent network using Bluetooth, peer-to-peer WiFi, and infrastructure WiFi.
In recent years, a number of social-aware and social-based routing schemes have leveraged social interactions to deliver data using delay tolerant networks (DTNs) @cite_0 . However, related work has primarily evaluated routing protocols in simulation environments, which provide valuable analyses but are based on synthetic mobility patterns to emulate node movement and tend to use abstract models to imitate the radio response of real commodity wireless technologies @cite_6 @cite_26 @cite_5 . A few studies have demonstrated DTNs in realistic environments @cite_22 @cite_25 @cite_14 ; however, these studies neglect other significant aspects, such as user security and privacy, and are limited to operating with only the epidemic routing scheme. Various middlewares @cite_2 @cite_19 @cite_7 @cite_17 , testbeds @cite_13 @cite_21 @cite_9 @cite_15 @cite_8 @cite_12 @cite_20 @cite_1 , and mobile applications have been developed to provide deployable delay tolerant networking applications that can operate with minimal infrastructure and effectively evaluate DTN routing protocols.
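The store-carry-forward strategy shared by these schemes, with epidemic routing as the common baseline, can be sketched as nodes flooding message copies on opportunistic contact. This is an illustrative sketch (the `Node`/`contact` names are assumptions, not any particular middleware's API):

```python
class Node:
    """A mobile node that stores and carries message copies."""
    def __init__(self, name):
        self.name = name
        self.buffer = set()       # ids of messages currently carried
        self.delivered = set()

    def contact(self, other):
        """On an opportunistic encounter, exchange summary vectors and
        copy every message the peer is missing (epidemic flooding)."""
        union = self.buffer | other.buffer
        self.buffer = set(union)
        other.buffer = set(union)

def deliver(node, destinations):
    """Consume messages whose destination is this node."""
    for m in list(node.buffer):
        if destinations[m] == node.name:
            node.delivered.add(m)

# A originates m1 for C; B acts as the store-carry-forward relay.
a, b, c = Node("A"), Node("B"), Node("C")
destinations = {"m1": "C"}
a.buffer.add("m1")
a.contact(b)   # first encounter: B picks up a copy
b.contact(c)   # later encounter: the copy reaches C
deliver(c, destinations)
```

Social-aware schemes replace the unconditional copy in `contact` with a forwarding decision based on social metrics (centrality, tie strength), trading delivery probability for far fewer copies in the network.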
{ "cite_N": [ "@cite_22", "@cite_2", "@cite_5", "@cite_15", "@cite_20", "@cite_8", "@cite_21", "@cite_17", "@cite_26", "@cite_7", "@cite_6", "@cite_19", "@cite_25", "@cite_12", "@cite_14", "@cite_9", "@cite_1", "@cite_0", "@cite_13" ], "mid": [ "155212612", "2169090952", "2007008132", "", "", "", "2067147398", "", "2021497904", "2028419271", "2114421613", "2024259126", "2151342861", "", "", "", "", "2017478515", "2003597056" ], "abstract": [ "This paper describes the Saami Network Connectivity (SNC) project that seeks to establish Internet communication for the Saami population of Reindeer Herders, who live in remote areas in Swedish Lapland, and relocate their base in accordance with a yearly cycle dictated by the natural behavior of reindeer. This population currently does not have reliable wired, wireless or satellite communication capabilities in major areas within which they work and stay (or would prefer to stay if possible). A radical solution is therefore required, which is compatible with the Saami population's goal to uphold their land by being able to live there and care for the environment. An approach based on the concept of Delay Tolerant Networks is discussed here.", "We consider a mobile ad hoc network setting where Bluetooth enabled mobile devices communicate directly with other devices as they meet opportunistically. We design and implement a novel mobile social networking middleware named MobiClique. MobiClique forms and exploits ad hoc social networks to disseminate content using a store-carry-forward technique. Our approach distinguishes itself from other mobile social software by removing the need for a central server to conduct exchanges, by leveraging existing social networks to bootstrap the system, and by taking advantage of the social network overlay to disseminate content. We also propose an open API to encourage third-party application development. We discuss the system architecture and three example applications. 
We show experimentally that MobiClique successfully builds and maintains an ad hoc social network leveraging contact opportunities between friends and people sharing interest(s) for content exchanges. Our experience also provides insight into some of the key challenges and shortcomings that researchers face when designing and deploying similar systems.", "Many message delivery services are based on publish-subscribe systems designed to distribute updates through centralized infrastructures requiring active Internet connections. For mobile devices, individual nodes should have the ability to propagate messages to interested users over ad-hoc wireless connections thereby removing the dependence on Internet and centralized servers. These nodes are sometimes stationary, but are often mobile, creating intermittent networks of nodes that tend to be socially related. In this paper, we propose LESC, a delay-tolerant message delivery protocol, which facilitates efficient message dissemination in a decentralized, ad-hoc fashion and can be implemented using a commodity mobile communication technology such as Bluetooth LE. By leveraging the frequent collocation of socially related peers, nodes strategically become information carriers with the ability to propagate messages to out of range nodes in the future. We design a discrete event simulator that utilizes actual traveling paths derived from Google Maps. The simulator emulates LESC and the epidemic routing protocol to determine if we can achieve reasonable performance. Related works have approached the problem of publish-subscribe systems on mobile devices, but to the best of our knowledge, have not shown the feasibility of a protocol that can directly be implemented over current commodity wireless technologies.
We simulate the protocol in Matlab and allow nodes to have multiple publications and subscriptions simultaneously.", "", "", "", "In this demo we present IBR-DTN for Android: IBR-DTN is a fully featured RFC5050 compliant Bundle Protocol implementation that can run on un-rooted Android devices starting from Android Version 2.3 (Gingerbread). IBR-DTN for Android supports all features of the IBR-DTN version for PCs and embedded systems. It is available in the Google Play Store for free. In addition to the protocol stack we provide two simple real world applications: a text messaging system and a push-to-talk application. They can serve as an example how to build DTN applications for mobile phones as both applications as well as the protocol implementation itself are open sourced.", "", "Delay-tolerant Networking (DTN) enables communication in sparse mobile ad-hoc networks and other challenged environments where traditional networking fails and new routing and application protocols are required. Past experience with DTN routing and application protocols has shown that their performance is highly dependent on the underlying mobility and node characteristics. Evaluating DTN protocols across many scenarios requires suitable simulation tools. This paper presents the Opportunistic Networking Environment (ONE) simulator specifically designed for evaluating DTN routing and application protocols. It allows users to create scenarios based upon different synthetic movement models and real-world traces and offers a framework for implementing routing and application protocols (already including six well-known routing protocols). Interactive visualization and post-processing tools support evaluating experiments and an emulation mode allows the ONE simulator to become part of a real-world DTN testbed. 
We show sample simulations to demonstrate the simulator's flexible support for DTN protocol evaluation.", "In this paper, we present Mist: a reliable and delay-tolerant middleware for information dissemination between highly mobile devices. Mist provides publish subscribe with guaranteed message delivery in fully connected networks. Through emulation we show how the middleware is effective in static networks, as well as in dynamic topologies with high mobility. We describe how Mist is able to scale using a topic routing mechanism, allowing groups of mobile units to cooperate with infrastructure-based P2P-networks. Finally, we describe recent experiments where Mist has been employed successfully in real-life deployments. The implementation of the middleware, written in Java, is released as open source.", "In this paper, we study the utility of opportunistic communication systems with the co-existence of network infrastructure. We study how some important performance metrics change with varying degrees of infrastructure and mobile nodes willing to participate in the opportunistic forwarding. In doing so, we observe phase transitions in the utility of infrastructure and opportunistic forwarding respectively at different points in the design space. We discuss the implications that this has for the design of future network deployments and how this observation can be used to improve network performance, while keeping cost at a minimum.", "In this work we present a middleware architecture for a mobile peer-to-peer content distribution system. Our architecture allows wireless content dissemination between mobile nodes without relying on infrastructure support. Contents are exchanged opportunistically when nodes are within communication range. Applications access the service of our platform through a publish subscribe interface and therefore do not have to deal with low-level opportunistic networking issues or matching and soliciting of contents. 
Our architecture consists of three key components. A content structure that facilitates dividing contents into logical topics and allows for efficient matching of content lookups and downloading under sporadic node connectivity. A solicitation protocol that allows nodes to solicit content meta-information in order to discover contents available at a neighboring node and to download content entries disjointedly from different nodes. An API that allows applications to access the system services through a publish subscribe interface. In this work we describe the design and implementation of our architecture. We also discuss potential applications and present evaluation results from profiling of our system.", "DakNet provides extraordinarily low-cost digital communication, letting remote villages leapfrog past the expense of traditional connectivity solutions and begin development of a full-coverage broadband wireless infrastructure. What is the basis for a progressive, market-driven migration from e-governance to universal broadband connectivity that local users will pay for? DakNet, an ad hoc network that uses wireless technology to provide asynchronous digital connectivity, is evidence that the marriage of wireless and asynchronous service may indeed be the beginning of a road to universal broadband connectivity. DakNet has been successfully deployed in remote parts of both India and Cambodia at a cost two orders of magnitude less than that of traditional landline solutions.", "", "", "", "", "In the past few years, more and more researchers have paid close attention to the emerging field of delay tolerant networks (DTNs), in which network often partitions and end-to-end paths do not exist nearly all the time. To cope with these challenges, most routing protocols employ the \"store-carry-forward\" strategy to transmit messages. However, the difficulty of this strategy is how to choose the best relay node and determine the best time to forward messages. 
Fortunately, social relations among nodes can be used to address these problems. In this paper, we present a comprehensive survey of recent social-aware routing protocols, which offer an insight into how to utilize social relationships to design efficient and applicable routing algorithms in DTNs. First, we review the major practical applications of DTNs. Then, we focus on understanding social ties between nodes and investigating some design-related issues of social-based routing approaches, e.g., the ways to obtain social relations among nodes, the metrics and approaches to identify the characteristics of social ties, the strategies to optimize social-aware routing protocols, and the suitable mobility traces to evaluate these protocols. We also create a taxonomy for social-aware routing protocols according to the sources of social relations. Finally, we outline several open issues and research challenges.", "Today's powerful networked personal computing devices offer a solid technical basis for mobile ad-hoc networking in support of consumer applications independent of operator networks. However, running Internet protocols directly on top of ad-hoc routing protocols such as AODV requires a sufficient node density to establish end-to-end paths. In contrast, Delay-tolerant Networking (DTN) allows device capabilities to be exploited also in sparse environments. In this demonstration paper, we present a DTN prototype for mobile phones as the most widespread platform for (delay-tolerant) ad-hoc networking and show a sample application that allows bypassing cellular operator infrastructure - with a fallback option in case DTN fails to deliver the information in time." ] }
1703.09026
2949846029
Manual annotations of temporal bounds for object interactions (i.e. start and end times) are typical training input to recognition, localization and detection algorithms. For three publicly available egocentric datasets, we uncover inconsistencies in ground truth temporal bounds within and across annotators and datasets. We systematically assess the robustness of state-of-the-art approaches to changes in labeled temporal bounds, for object interaction recognition. As boundaries are trespassed, a drop of up to 10% is observed for both Improved Dense Trajectories and Two-Stream Convolutional Neural Network. We demonstrate that such disagreement stems from a limited understanding of the distinct phases of an action, and propose annotating based on the Rubicon Boundaries, inspired by a similarly named cognitive model, for consistent temporal bounds of object interactions. Evaluated on a public dataset, we report a 4% increase in overall accuracy, and an increase in accuracy for 55% of classes when Rubicon Boundaries are used for temporal annotations.
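The robustness probe summarized above, perturbing annotated temporal bounds and charting how recognition degrades, can be sketched minimally. The helper names and the interval/IoU formulation below are our own illustration, not the paper's exact protocol:

```python
def temporal_iou(a, b):
    """Intersection-over-union of two temporal intervals (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def perturb_bounds(start, end, shift=0.0, scale=1.0):
    """Shift and rescale an annotated segment to probe recognizer robustness."""
    length = (end - start) * scale
    centre = (start + end) / 2.0 + shift
    return centre - length / 2.0, centre + length / 2.0
```

A perturbed segment can then be fed to any recognizer in place of the ground-truth bounds, plotting accuracy against the IoU with the original annotation.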
The leading work of Satkin and Hebert @cite_14 first pointed out that determining the temporal extent of an action is often subjective, and that action recognition results vary depending on the bounds used for training. They proposed to find the most discriminative portion of each segment for the task of action recognition: given a loosely trimmed training segment, they exhaustively search for the cropping that leads to the highest classification accuracy, using hand-crafted features such as HOG, HOF @cite_30 and Trajectons @cite_34. Optimizing bounds to maximize discrimination between class labels has also been attempted by Duchenne @cite_11, who refined loosely labeled temporal bounds of actions, estimated from film scripts, to increase accuracy across action classes. Similarly, two works evaluated the optimal segment length for action recognition @cite_7 @cite_21. Counting from the start of the segment, 1-7 frames were deemed sufficient in @cite_7, with rapidly diminishing returns as more frames were added. More recently, @cite_21 showed that 15-20 frames were enough to recognize human actions from 3D skeleton joints.
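The exhaustive cropping search described above amounts to scoring every contiguous sub-segment and keeping the best one. A minimal sketch, with the scoring function left abstract (in the original work it is classification accuracy computed from hand-crafted features, which we do not reproduce here):

```python
def best_cropping(n_frames, score_fn, min_len=1):
    """Exhaustively score every contiguous cropping [s, e) of a segment
    and return the highest-scoring one, in the spirit of Satkin & Hebert."""
    best, best_score = None, float("-inf")
    for s in range(n_frames):
        for e in range(s + min_len, n_frames + 1):
            sc = score_fn(s, e)
            if sc > best_score:
                best, best_score = (s, e), sc
    return best, best_score
```

The search is quadratic in segment length, which is why the original approach is applied to loosely trimmed training segments rather than full videos.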
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_7", "@cite_21", "@cite_34", "@cite_11" ], "mid": [ "2142194269", "1839676122", "2136853139", "2031334527", "1973166425", "2535977253" ], "abstract": [ "The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems, one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8% accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.", "In this paper, we present a framework for estimating what portions of videos are most discriminative for the task of action recognition. We explore the impact of the temporal cropping of training videos on the overall accuracy of an action recognition system, and we formalize what makes a set of croppings optimal. 
In addition, we present an algorithm to determine the best set of croppings for a dataset, and experimentally show that our approach increases the accuracy of various state-of-the-art action recognition techniques.", "Visual recognition of human actions in video clips has been an active field of research in recent years. However, most published methods either analyse an entire video and assign it a single action label, or use relatively large look-ahead to classify each frame. Contrary to these strategies, human vision proves that simple actions can be recognised almost instantaneously. In this paper, we present a system for action recognition from very short sequences (\"snippets\") of 1-10 frames, and systematically evaluate it on standard data sets. It turns out that even local shape and optic flow for a single frame are enough to achieve approximately 90% correct recognitions, and snippets of 5-7 frames (0.3-0.5 seconds of video) are enough to achieve a performance similar to the one obtainable with the entire video sequence.", "Highlights: Effective method to recognize human actions using 3D skeleton joints. New action feature descriptor, EigenJoints, for action recognition. Accumulated Motion Energy (AME) method to perform informative frame selection. Our proposed approach significantly outperforms the state-of-the-art methods on three public datasets. In this paper, we propose an effective method to recognize human actions using 3D skeleton joints recovered from 3D depth data of RGBD cameras. We design a new action feature descriptor for action recognition based on differences of skeleton joints, i.e., EigenJoints which combine action information including static posture, motion property, and overall dynamics. Accumulated Motion Energy (AME) is then proposed to perform informative frame selection, which is able to remove noisy frames and reduce computational cost. We employ non-parametric Naive-Bayes-Nearest-Neighbor (NBNN) to classify multiple actions. 
The experimental results on several challenging datasets demonstrate that our approach outperforms the state-of-the-art methods. In addition, we investigate how many frames are necessary for our method to perform classification in the scenario of online action recognition. We observe that the first 30-40 frames are sufficient to achieve comparable results to that using the entire video sequences on the MSR Action3D dataset.", "The defining feature of video compared to still images is motion, and as such the selection of good motion features for action recognition is crucial, especially for bag of words techniques that rely heavily on their features. Existing motion techniques either assume that a difficult problem like background foreground segmentation has already been solved (contour silhouette based techniques) or are computationally expensive and prone to noise (optical flow). We present a technique for motion based on quantized trajectory snippets of tracked features. These quantized snippets, or trajectons, rely only on simple feature tracking and are computationally efficient. We demonstrate that within a bag of words framework trajectons can match state of the art results, slightly outperforming histogram of optical flow features on the Hollywood Actions dataset. Additionally, we present qualitative results in a video search task on a custom dataset of challenging YouTube videos.", "This paper addresses the problem of automatic temporal annotation of realistic human actions in video using minimal manual supervision. To this end we consider two associated problems: (a) weakly-supervised learning of action models from readily available annotations, and (b) temporal localization of human actions in test videos. To avoid the prohibitive cost of manual annotation for training, we use movie scripts as a means of weak supervision. Scripts, however, provide only implicit, noisy, and imprecise information about the type and location of actions in video. 
We address this problem with a kernel-based discriminative clustering algorithm that locates actions in the weakly-labeled training data. Using the obtained action samples, we train temporal action detectors and apply them to locate actions in the raw video data. Our experiments demonstrate that the proposed method for weakly-supervised learning of action models leads to significant improvement in action detection. We present detection results for three action classes in four feature length movies with challenging and realistic video data." ] }
1703.09026
2949846029
Manual annotations of temporal bounds for object interactions (i.e. start and end times) are typical training input to recognition, localization and detection algorithms. For three publicly available egocentric datasets, we uncover inconsistencies in ground truth temporal bounds within and across annotators and datasets. We systematically assess the robustness of state-of-the-art approaches to changes in labeled temporal bounds, for object interaction recognition. As boundaries are trespassed, a drop of up to 10% is observed for both Improved Dense Trajectories and Two-Stream Convolutional Neural Network. We demonstrate that such disagreement stems from a limited understanding of the distinct phases of an action, and propose annotating based on the Rubicon Boundaries, inspired by a similarly named cognitive model, for consistent temporal bounds of object interactions. Evaluated on a public dataset, we report a 4% increase in overall accuracy, and an increase in accuracy for 55% of classes when Rubicon Boundaries are used for temporal annotations.
Interestingly, assessing the effect of temporal bounds is still an active research topic within novel deep architectures. Recently, Peng @cite_0 assessed how frame-level classifications from a multi-region two-stream CNN are pooled to achieve video-level recognition results. The authors reported that stacking more than 5 frames worsened action detection and recognition results on the tested datasets, though this was only compared against a 10-frame stack.
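Frame-stack pooling of the kind assessed above can be illustrated with a toy function; the function name, stack size and pooling operators below are illustrative assumptions, not the paper's exact configuration:

```python
def video_level_scores(frame_scores, stack=5, pool="mean"):
    """Average per-frame class scores over sliding stacks of `stack` frames,
    then pool the stack-level scores into one video-level score per class."""
    n, k = len(frame_scores), len(frame_scores[0])
    stacks = []
    for i in range(max(1, n - stack + 1)):
        # mean score of each class over the frames in this stack
        stacks.append([sum(f[c] for f in frame_scores[i:i + stack]) / stack
                       for c in range(k)])
    agg = max if pool == "max" else (lambda col: sum(col) / len(col))
    return [agg([s[c] for s in stacks]) for c in range(k)]
```

Varying `stack` in such a setup is one way to reproduce the kind of stack-size sensitivity analysis the cited work reports.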
{ "cite_N": [ "@cite_0" ], "mid": [ "2519080876" ], "abstract": [ "We propose a multi-region two-stream R-CNN model for action detection in realistic videos. We start from frame-level action detection based on faster R-CNN [1], and make three contributions: (1) we show that a motion region proposal network generates high-quality proposals , which are complementary to those of an appearance region proposal network; (2) we show that stacking optical flow over several frames significantly improves frame-level action detection; and (3) we embed a multi-region scheme in the faster R-CNN model, which adds complementary information on body parts. We then link frame-level detections with the Viterbi algorithm, and temporally localize an action with the maximum subarray method. Experimental results on the UCF-Sports, J-HMDB and UCF101 action detection datasets show that our approach outperforms the state of the art with a significant margin in both frame-mAP and video-mAP." ] }
1703.09026
2949846029
Manual annotations of temporal bounds for object interactions (i.e. start and end times) are typical training input to recognition, localization and detection algorithms. For three publicly available egocentric datasets, we uncover inconsistencies in ground truth temporal bounds within and across annotators and datasets. We systematically assess the robustness of state-of-the-art approaches to changes in labeled temporal bounds, for object interaction recognition. As boundaries are trespassed, a drop of up to 10% is observed for both Improved Dense Trajectories and Two-Stream Convolutional Neural Network. We demonstrate that such disagreement stems from a limited understanding of the distinct phases of an action, and propose annotating based on the Rubicon Boundaries, inspired by a similarly named cognitive model, for consistent temporal bounds of object interactions. Evaluated on a public dataset, we report a 4% increase in overall accuracy, and an increase in accuracy for 55% of classes when Rubicon Boundaries are used for temporal annotations.
An interesting approach that addressed reliance on training temporal bounds for action recognition and localization is that of Gaidon @cite_17. They noted that action recognition methods rely on temporal bounds in test videos strictly containing an action, in the same fashion as the training segments. They thus redefined an action as a sequence of key atomic frames, referred to as actoms, and learned the optimal sequence of actoms per action class with promising results. More recently, Wang @cite_13 represented actions as a transformation from a precondition state to an effect state, and attempted to learn such transformations as well as to locate the end of the precondition and the start of the effect. However, both approaches rely on manual annotations, of actoms @cite_17 or action segments @cite_13, which are potentially as subjective as the temporal bounds of the actions themselves.
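The actom idea, anchoring an ordered set of atomic key frames so that their combined anchoring score is maximal, can be sketched as a small dynamic program. This is a simplified reading of the actom sequence model, not the authors' implementation; the function name and score layout are our own:

```python
def best_actom_placement(actom_scores):
    """Place an ordered sequence of actoms on frames so that the summed
    anchoring score is maximal (simplified actom-sequence sketch).
    actom_scores[a][t] = score of anchoring actom a at frame t."""
    n_actoms, n_frames = len(actom_scores), len(actom_scores[0])
    NEG = float("-inf")
    # dp[a][t]: best total score with actom a anchored at frame t
    dp = [list(actom_scores[0])] + [[NEG] * n_frames for _ in range(n_actoms - 1)]
    back = [[-1] * n_frames for _ in range(n_actoms)]
    for a in range(1, n_actoms):
        best_prev, best_t = NEG, -1
        for t in range(n_frames):
            # actoms must appear in strictly increasing frame order
            if t > 0 and dp[a - 1][t - 1] > best_prev:
                best_prev, best_t = dp[a - 1][t - 1], t - 1
            if best_prev > NEG:
                dp[a][t] = best_prev + actom_scores[a][t]
                back[a][t] = best_t
    t = max(range(n_frames), key=lambda f: dp[-1][f])
    placement, total = [t], dp[-1][t]
    for a in range(n_actoms - 1, 0, -1):
        t = back[a][t]
        placement.append(t)
    return placement[::-1], total
```

The ordering constraint in the inner loop is what distinguishes this from independently picking the best frame for each actom.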
{ "cite_N": [ "@cite_24", "@cite_13", "@cite_17" ], "mid": [ "", "1947050545", "2084341401" ], "abstract": [ "", "We address the challenging problem of recognizing the camera wearer's actions from videos captured by an egocentric camera. Egocentric videos encode a rich set of signals regarding the camera wearer, including head movement, hand pose and gaze information. We propose to utilize these mid-level egocentric cues for egocentric action recognition. We present a novel set of egocentric features and show how they can be combined with motion and object features. The result is a compact representation with superior performance. In addition, we provide the first systematic evaluation of motion, object and egocentric cues in egocentric action recognition. Our benchmark leads to several surprising findings. These findings uncover the best practices for egocentric actions, with a significant performance boost over all previous state-of-the-art methods on three publicly available datasets.", "We address the problem of localizing actions, such as opening a door, in hours of challenging video data. We propose a model based on a sequence of atomic action units, termed \"actoms,\" that are semantically meaningful and characteristic for the action. Our actom sequence model (ASM) represents an action as a sequence of histograms of actom-anchored visual features, which can be seen as a temporally structured extension of the bag-of-features. Training requires the annotation of actoms for action examples. At test time, actoms are localized automatically based on a nonparametric model of the distribution of actoms, which also acts as a prior on an action's temporal structure. We present experimental results on two recent benchmarks for action localization \"Coffee and Cigarettes\" and the \"DLSBP\" dataset. We also adapt our approach to a classification-by-localization set-up and demonstrate its applicability on the challenging \"Hollywood 2\" dataset. 
We show that our ASM method outperforms the current state of the art in temporal action localization, as well as baselines that localize actions with a sliding window method." ] }
1703.09026
2949846029
Manual annotations of temporal bounds for object interactions (i.e. start and end times) are typical training input to recognition, localization and detection algorithms. For three publicly available egocentric datasets, we uncover inconsistencies in ground truth temporal bounds within and across annotators and datasets. We systematically assess the robustness of state-of-the-art approaches to changes in labeled temporal bounds, for object interaction recognition. As boundaries are trespassed, a drop of up to 10% is observed for both Improved Dense Trajectories and Two-Stream Convolutional Neural Network. We demonstrate that such disagreement stems from a limited understanding of the distinct phases of an action, and propose annotating based on the Rubicon Boundaries, inspired by a similarly named cognitive model, for consistent temporal bounds of object interactions. Evaluated on a public dataset, we report a 4% increase in overall accuracy, and an increase in accuracy for 55% of classes when Rubicon Boundaries are used for temporal annotations.
Previously, three works noted the challenge and difficulty of defining temporal bounds for egocentric videos @cite_16 @cite_15 @cite_29. In @cite_16, Spriggs et al. discussed the level of granularity in action labels (e.g. 'break egg' vs 'beat egg in a bowl') for the CMU dataset @cite_40. They also noted the presence of temporally overlapping object interactions (e.g. 'pour' while 'stirring'). In @cite_38, multiple annotators were asked to provide temporal bounds for the same object interaction. The authors showed variability in the annotations, yet did not detail what instructions were given to the annotators when labeling these temporal bounds. In @cite_29, the human ability to order pairwise egocentric segments was evaluated as the snippet length varied. The work showed that human perception improves as the size of the segment increases up to 60 frames, then levels off.
{ "cite_N": [ "@cite_38", "@cite_29", "@cite_40", "@cite_15", "@cite_16" ], "mid": [ "2496009737", "2232035143", "105287674", "2198667788", "2387799167" ], "abstract": [ "We present SEMBED, an approach for embedding an egocentric object interaction video in a semantic-visual graph to estimate the probability distribution over its potential semantic labels. When object interactions are annotated using unbounded choice of verbs, we embrace the wealth and ambiguity of these labels by capturing the semantic relationships as well as the visual similarities over motion and appearance features. We show how SEMBED can interpret a challenging dataset of 1225 freely annotated egocentric videos, outperforming SVM classification by more than 5%.", "Given a video of an activity, can we predict what will happen next? In this paper we explore two simple tasks related to temporal prediction in egocentric videos of everyday activities. We provide both human experiments to understand how well people can perform on these tasks and computational models for prediction. Experiments indicate that humans and computers can do well on temporal prediction and that personalization to a particular individual or environment provides significantly increased performance. Developing methods for temporal prediction could have far reaching benefits for robots or intelligent agents to anticipate what a person will do, before they do it.", "", "We present a fully unsupervised approach for the discovery of i) task relevant objects and ii) how these objects have been used. A Task Relevant Object (TRO) is an object, or part of an object, with which a person interacts during task performance. Given egocentric video from multiple operators, the approach can discover objects with which the users interact, both static objects such as a coffee machine as well as movable ones such as a cup. Importantly, we also introduce the term Mode of Interaction (MOI) to refer to the different ways in which TROs are used. 
Say, a cup can be lifted, washed, or poured into. When harvesting interactions with the same object from multiple operators, common MOIs can be found. Setup and Dataset: Using a wearable camera and gaze tracker (Mobile Eye-XG from ASL), egocentric video is collected of users performing tasks, along with their gaze in pixel coordinates. Six locations were chosen: kitchen, workspace, laser printer, corridor with a locked door, cardiac gym and weight-lifting machine. The Bristol Egocentric Object Interactions Dataset is publicly available.", "We bring together ideas from recent work on feature design for egocentric action recognition under one framework by exploring the use of deep convolutional neural networks (CNN). Recent work has shown that features such as hand appearance, object attributes, local hand motion and camera ego-motion are important for characterizing first-person actions. To integrate these ideas under one framework, we propose a twin stream network architecture, where one stream analyzes appearance information and the other stream analyzes motion information. Our appearance stream encodes prior knowledge of the egocentric paradigm by explicitly training the network to segment hands and localize objects. By visualizing certain neuron activation of our network, we show that our proposed architecture naturally learns features that capture object attributes and hand-object configurations. Our extensive experiments on benchmark egocentric action datasets show that our deep architecture enables recognition rates that significantly outperform state-of-the-art techniques -- an average @math increase in accuracy over all datasets. Furthermore, by learning to recognize objects, actions and activities jointly, the performance of individual recognition tasks also increases by @math (actions) and @math (objects). We also include the results of extensive ablative analysis to highlight the importance of network design decisions." ] }
1703.08493
2950830676
In the field of connectomics, neuroscientists seek to identify cortical connectivity comprehensively. Neuronal boundary detection from Electron Microscopy (EM) images is often done to assist the automatic reconstruction of neuronal circuits. But the segmentation of EM images is a challenging problem, as it requires the detector to be able to detect both filament-like thin and blob-like thick membranes, while suppressing ambiguous intracellular structure. In this paper, we propose multi-stage multi-recursive-input fully convolutional networks to address this problem. The multiple recursive inputs for one stage, i.e., the multiple side outputs with different receptive field sizes learned from the lower stage, provide multi-scale contextual boundary information for the consecutive learning. This design is biologically plausible, as it is akin to a human visual system comparing different possible segmentation solutions to address the ambiguous boundary issue. Our multi-stage networks are trained end-to-end and achieve promising results on two publicly available EM segmentation datasets, the mouse piriform cortex dataset and the ISBI 2012 EM dataset.
Segmenting EM images of neural tissue is an important step towards understanding the circuit structure and function of the brain @cite_1 @cite_15. Early work on this topic required experts' knowledge: for example, users needed to label intracellular regions to enable graph cut segmentation, and to correct segmentation errors afterwards @cite_25. To reduce the amount of human labor required @cite_10, automatic neuron segmentation became an active research direction, following a pipeline that first detects neuronal boundaries with machine learning algorithms @cite_42 @cite_6 @cite_16 @cite_12 and then applies post-processing algorithms, such as watershed @cite_9 @cite_26 @cite_18, hierarchical clustering @cite_29 @cite_3 and graph cut @cite_37 algorithms, to the boundary maps to obtain neuron segments. However, early methods based on hand-crafted features tend to fail when the membrane is ambiguous.
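The boundary-detection-plus-post-processing pipeline described above can be illustrated with a toy stand-in: threshold a predicted boundary probability map and group the remaining pixels into connected regions. Real systems use watershed or hierarchical clustering rather than this flood fill, and the function name and threshold are our own assumptions:

```python
def label_regions(boundary_map, thresh=0.5):
    """Toy post-processing stand-in for watershed: pixels whose predicted
    boundary probability is below `thresh` are grouped into 4-connected
    regions; boundary pixels keep label 0 and separate the segments."""
    h, w = len(boundary_map), len(boundary_map[0])
    labels = [[0] * w for _ in range(h)]
    n_regions = 0
    for i in range(h):
        for j in range(w):
            if boundary_map[i][j] < thresh and labels[i][j] == 0:
                n_regions += 1
                stack = [(i, j)]
                while stack:  # iterative flood fill of one region
                    y, x = stack.pop()
                    if (0 <= y < h and 0 <= x < w and labels[y][x] == 0
                            and boundary_map[y][x] < thresh):
                        labels[y][x] = n_regions
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, n_regions
```

This also makes concrete why ambiguous membranes hurt: a single gap in the thresholded boundary merges two neurons into one region.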
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_37", "@cite_9", "@cite_42", "@cite_1", "@cite_29", "@cite_6", "@cite_3", "@cite_15", "@cite_16", "@cite_10", "@cite_25", "@cite_12" ], "mid": [ "1507090658", "143142918", "1483512324", "2128356031", "39908396", "1889898024", "2080858319", "1924015474", "1976467070", "2582996697", "2110516302", "2016323639", "2134459252", "1871425898" ], "abstract": [ "We present a method for hierarchical image segmentation that defines a disaffinity graph on the image, over-segments it into watershed basins, defines a new graph on the basins, and then merges basins with a modified, size-dependent version of single linkage clustering. The quasilinear runtime of the method makes it suitable for segmenting large images. We illustrate the method on the challenging problem of segmenting 3D electron microscopic brain images.", "We present a new algorithm for automatic and interactive segmentation of neuron structures from electron microscopy (EM) images. Our method selects a collection of nodes from the watershed merging tree as the proposed segmentation. This is achieved by building a conditional random field (CRF) whose underlying graph is the merging tree. The maximum a posteriori (MAP) prediction of the CRF is the output segmentation. Our algorithm outperforms state-of-the-art methods. Both the inference and the training are very efficient as the graph is tree-structured. Furthermore, we develop an interactive segmentation framework which selects uncertain regions for a user to proofread. The uncertainty is measured by the marginals of the graphical model. Based on user corrections, our framework modifies the merging tree and thus improves the segmentation globally.", "We present a new automated neuron segmentation algorithm for isotropic 3D electron microscopy data. We cast the problem into the asymmetric multiway cut framework. 
The latter combines boundary-based segmentation (clustering) with region-based segmentation (semantic labeling) in a single problem and objective function. This joint formulation allows us to augment local boundary evidence with higher-level biological priors, such as membership to an axonic or dendritic neurite. Joint optimization enforces consistency between evidence and priors, leading to correct resolution of many difficult boundary configurations. We show experimentally on a FIB SEM dataset of mouse cortex that the new approach outperforms existing hierarchical segmentation and multicut algorithms which only use boundary evidence.", "The watershed is one of the latest segmentation tools developed in mathematical morphology. In order to prevent its oversegmentation, the notion of dynamics of a minimum, based on geodesic reconstruction, has been proposed. In this paper, we extend the notion of dynamics to the contour arcs. This notion acts as a measure of the saliency of the contour. Contrary to the dynamics of minima, our concept reflects the extension and shape of the corresponding object in the image. This representation is also much more natural, because it is expressed in terms of partitions of the plane, i.e., segmentations. A hierarchical segmentation process is then derived, which gives a compact description of the image, containing all the segmentations one can obtain by the notion of dynamics, by means of a simple thresholding. Finally, efficient algorithms for computing the geodesic reconstruction as well as the dynamics of contours are presented.", "Connectomics based on high resolution ssTEM imagery requires reconstruction of the neuron geometry from histological slides. We present an approach for the automatic membrane segmentation in anisotropic stacks of electron microscopy brain tissue sections. The ambiguities in neuronal segmentation of a section are resolved by using the context from the neighboring sections. 
We find the global dense correspondence between the sections by the SIFT Flow algorithm, evaluate the features of the corresponding pixels and use them to perform the segmentation. Our method is 3.6% and 6.4% more accurate in two different accuracy metrics than the algorithm with no context from other sections.", "Efforts to automate the reconstruction of neural circuits from 3D electron microscopic (EM) brain images are critical for the field of connectomics. An important computation for reconstruction is the detection of neuronal boundaries. Images acquired by serial section EM, a leading 3D EM technique, are highly anisotropic, with inferior quality along the third dimension. For such images, the 2D max-pooling convolutional network has set the standard for performance at boundary detection. Here we achieve a substantial gain in accuracy through three innovations. Following the trend towards deeper networks for object recognition, we use a much deeper network than previously employed for boundary detection. Second, we incorporate 3D as well as 2D filters, to enable computations that use 3D context. Finally, we adopt a recursively trained architecture in which a first network generates a preliminary boundary map that is provided as input along with the original image to a second network that generates a final boundary map. Back-propagation training is accelerated by ZNN, a new implementation of 3D convolutional networks that uses multicore CPU parallelism for speed. Our hybrid 2D-3D architecture could be more generally applicable to other types of anisotropic 3D images, including video, and our recursive framework for any image labeling problem.", "We aim to improve segmentation through the use of machine learning tools during region agglomeration. We propose an active learning approach for performing hierarchical agglomerative segmentation from superpixels. 
Our method combines multiple features at all scales of the agglomerative process, works for data with an arbitrary number of dimensions, and scales to very large datasets. We advocate the use of variation of information to measure segmentation accuracy, particularly in 3D electron microscopy (EM) images of neural tissue, and using this metric demonstrate an improvement over competing algorithms in EM and natural images.", "In neuroanatomy, automatic geometry extraction of neurons from electron microscopy images is becoming one of the main limiting factors in getting new insights into the functional structure of the brain. We propose a novel framework for tracing neuronal processes over serial sections for 3d reconstructions. The automatic processing pipeline combines the probabilistic output of a random forest classifier with geometrical consistency constraints which take the geometry of whole sections into account. Our experiments demonstrate significant improvement over grouping by Euclidean distance, reducing the split and merge error per object by a factor of two.", "The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. 
Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-segmentation accuracy.", "Electron microscopic connectomics is an ambitious research direction with the goal of studying comprehensive brain connectivity maps by using high-throughput, nano-scale microscopy. One of the main challenges in connectomics research is developing scalable image analysis algorithms that require minimal user intervention. Recently, deep learning has drawn much attention in computer vision because of its exceptional performance in image classification tasks. For this reason, its application to connectomic analyses holds great promise, as well. In this paper, we introduce a novel deep neural network architecture, FusionNet, for the automatic segmentation of neuronal structures in connectomics data. FusionNet leverages the latest advances in machine learning, such as semantic segmentation and residual neural networks, with the novel introduction of summation-based skip connections to allow a much deeper network architecture for a more accurate segmentation. We demonstrate the performance of the proposed method by comparing it with state-of-the-art electron microscopy (EM) segmentation methods from the ISBI EM segmentation challenge. We also show the segmentation results on two different tasks including cell membrane and cell body segmentation and a statistical analysis of cell morphology.", "In this paper we present a novel class of so-called Radon-Like features, which allow for aggregation of spatially distributed image statistics into compact feature descriptors. 
Radon-Like features, which can be efficiently computed, lend themselves for use with both supervised and unsupervised learning methods. Here we describe various instantiations of these features and demonstrate their usefulness in the context of neural connectivity analysis, i.e. Connectomics, in electron micrographs. Through various experiments on simulated as well as real data we establish the efficacy of the proposed features in various tasks like cell membrane enhancement, mitochondria segmentation, cell background segmentation, and vesicle cluster detection as compared to various other state-of-the-art techniques.", "Neuronal networks are high-dimensional graphs that are packed into three-dimensional nervous tissue at extremely high density. Comprehensively mapping these networks is therefore a major challenge. Although recent developments in volume electron microscopy imaging have made data acquisition feasible for circuits comprising a few hundreds to a few thousands of neurons, data analysis is massively lagging behind. The aim of this Perspective is to summarize and quantify the challenges for data analysis in cellular-resolution connectomics and describe current solutions involving online crowd-sourcing and machine-learning approaches.", "In many neurophysiological studies, understanding the neuronal circuitry of the brain requires detailed 3D models of the nerve cells and their synapses. Typically, researchers build the 3D models by manually tracing the 2D cross-sectional profiles of the 3D structures from serial electron micrograph (EM) stacks and then construct the models from these 2D contours. While current computer-aided techniques can reduce the tracing time, they often require extensive user interaction. We propose a segmentation framework to extract the 2D profiles that is both fast and requires a minimal amount of user interaction.
The framework uses graph cuts to minimize an energy defined over the image intensity and the flux of the intensity gradient field. Furthermore, to correct segmentation errors, our framework allows for efficient and intuitive editing of the initial results.", "Automated neural circuit reconstruction through electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that exploits multi-scale contextual information together with Radon-like features (RLF) to learn a series of discriminative models. The main idea is to build a framework which is capable of extracting information about cell membranes from a large contextual area of an EM image in a computationally efficient way. Toward this goal, we extract RLF that can be computed efficiently from the input image and generate a scale-space representation of the context images that are obtained at the output of each discriminative model in the series. Compared to a single-scale model, the use of a multi-scale representation of the context image gives the subsequent classifiers access to a larger contextual area in an effective way. Our strategy is general and independent of the classifier and has the potential to be used in any context based framework. We demonstrate that our method outperforms the state-of-the-art algorithms in detection of neuron membranes in EM images." ] }
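The first abstract above advocates variation of information (VI) as a measure of segmentation accuracy. As a self-contained side illustration (a toy sketch of the metric itself, not code from any cited paper; the function name is ours), VI between two flat label maps is the sum of the two conditional entropies H(A|B) + H(B|A):

```python
import math
from collections import Counter

def variation_of_information(seg_a, seg_b):
    """VI(A, B) = H(A|B) + H(B|A) for two labelings of the same pixels.
    Returns 0 exactly when the segmentations agree up to relabeling."""
    assert len(seg_a) == len(seg_b)
    n = len(seg_a)
    p_a = Counter(seg_a)               # marginal label frequencies
    p_b = Counter(seg_b)
    p_ab = Counter(zip(seg_a, seg_b))  # joint frequencies
    vi = 0.0
    for (a, b), n_ab in p_ab.items():
        p = n_ab / n
        # each joint cell contributes -p[log p/p(b) + log p/p(a)]
        vi -= p * (math.log(p / (p_b[b] / n)) + math.log(p / (p_a[a] / n)))
    return vi
```

For identical segmentations the metric is 0 (and it is invariant to relabeling); for two independent binary labelings of four pixels it equals 2 ln 2.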
1703.08493
2950830676
In the field of connectomics, neuroscientists seek to identify cortical connectivity comprehensively. Neuronal boundary detection from Electron Microscopy (EM) images is often performed to assist the automatic reconstruction of neuronal circuits. However, the segmentation of EM images is a challenging problem, as it requires the detector to detect both filament-like thin and blob-like thick membranes while suppressing ambiguous intracellular structures. In this paper, we propose multi-stage multi-recursive-input fully convolutional networks to address this problem. The multiple recursive inputs for one stage, i.e., the multiple side outputs with different receptive field sizes learned from the lower stage, provide multi-scale contextual boundary information for the subsequent learning. This design is biologically plausible, as it resembles how the human visual system compares different possible segmentation solutions to resolve ambiguous boundaries. Our multi-stage networks are trained end-to-end and achieve promising results on two publicly available EM segmentation datasets, the mouse piriform cortex dataset and the ISBI 2012 EM dataset.
The recursive training framework has been applied to many computer vision tasks, such as image labeling @cite_20 , instance segmentation @cite_34 , human pose estimation @cite_24 , and face alignment @cite_41 @cite_4 . However, these methods trained the recursive framework stepwise and used only a single recursive input. Two image segmentation methods @cite_28 @cite_39 also fed multiple recursive inputs into the next stage of a recursive training framework, but their strategies for generating the multi-recursive inputs differ from ours. In the first @cite_28 , the multi-recursive inputs for one stage were obtained by applying a series of Gaussian filters to the single output of the previous stage, whereas ours are multiple outputs supervised at different levels of a deep network. The second @cite_39 downsampled the original input image into multiple images at different resolutions and obtained the multi-recursive inputs from these images, whereas ours are computed from the same input image by exploiting the hierarchy of a deep network. In addition, the multiple stages in their methods were trained stepwise; in contrast, we embed the recursive learning in a deep network and learn it in an end-to-end fashion.
{ "cite_N": [ "@cite_4", "@cite_41", "@cite_28", "@cite_39", "@cite_24", "@cite_34", "@cite_20" ], "mid": [ "2138406903", "", "", "2106146968", "2121557314", "2253218192", "2122006243" ], "abstract": [ "We present a very efficient, highly accurate, “Explicit Shape Regression” approach for face alignment. Unlike previous regression-based approaches, we directly learn a vectorial regression function to infer the whole facial shape (a set of facial landmarks) from the image and explicitly minimize the alignment errors over the training data. The inherent shape constraint is naturally encoded into the regressor in a cascaded learning framework and applied from coarse to fine during the test, without using a fixed parametric shape model as in most previous methods. To make the regression more effective and efficient, we design a two-level boosted regression, shape-indexed features and a correlation-based feature selection method. This combination enables us to learn accurate models from large training data in a short time (20 minutes for 2,000 training images), and run regression extremely fast in test (15 ms for a 87 landmarks shape). Experiments on challenging data show that our approach significantly outperforms the state-of-the-art in terms of both accuracy and efficiency.", "", "", "Contextual information plays an important role in solving vision problems such as image segmentation. However, extracting contextual information and using it in an effective way remains a difficult problem. To address this challenge, we propose a multi-resolution contextual framework, called cascaded hierarchical model (CHM), which learns contextual information in a hierarchical framework for image segmentation. At each level of the hierarchy, a classifier is trained based on down sampled input images and outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at original resolution. 
We repeat this procedure by cascading the hierarchical framework to improve the segmentation accuracy. Multiple classifiers are learned in the CHM, therefore, a fast and accurate classifier is required to make the training tractable. The classifier also needs to be robust against overfitting due to the large number of parameters learned during training. We introduce a novel classification scheme, called logistic disjunctive normal networks (LDNN), which consists of one adaptive layer of feature detectors implemented by logistic sigmoid functions followed by two fixed layers of logical units that compute conjunctions and disjunctions, respectively. We demonstrate that LDNN outperforms state-of-the-art classifiers and can be used in the CHM to improve object segmentation performance.", "The launch of Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry; this sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when faced with severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Our algorithm is illustrated on both joint-based skeleton correction and tag prediction. In the experiments, significant improvement is observed over the contemporary approaches, including what is delivered by the current Kinect system.", "Existing methods for pixel-wise labelling tasks generally disregard the underlying structure of labellings, often leading to predictions that are visually implausible.
While incorporating structure into the model should improve prediction quality, doing so is challenging - manually specifying the form of structural constraints may be impractical and inference often becomes intractable even if structural constraints are given. We sidestep this problem by reducing structured prediction to a sequence of unconstrained prediction problems and demonstrate that this approach is capable of automatically discovering priors on shape, contiguity of region predictions and smoothness of region contours from data without any a priori specification. On the instance segmentation task, this method outperforms the state-of-the-art, achieving a mean @math of 63.6% at 50% overlap and 43.3% at 70% overlap.", "The notion of using context information for solving high-level vision problems has been increasingly realized in the field. However, how to learn an effective and efficient context model, together with the image appearance, remains mostly unknown. The current literature using Markov random fields (MRFs) and conditional random fields (CRFs) often involves specific algorithm design, in which the modeling and computing stages are studied in isolation. In this paper, we propose an auto-context algorithm. Given a set of training images and their corresponding label maps, we first learn a classifier on local image patches. The discriminative probability (or classification confidence) maps by the learned classifier are then used as context information, in addition to the original image patches, to train a new classifier. The algorithm then iterates to approach the ground truth. Auto-context learns an integrated low-level and context model, and is very general and easy to implement. Under nearly the identical parameter setting in the training, we apply the algorithm on three challenging vision applications: object segmentation, human body configuration, and scene region labeling. It typically takes about 30 to 70 seconds to run the algorithm in testing.
Moreover, the scope of the proposed algorithm goes beyond high-level vision. It has the potential to be used for a wide variety of problems of multi-variate labeling." ] }
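To make the multi-recursive-input idea in the related-work paragraph above concrete, here is a deliberately tiny 1-D sketch (our own toy construction, not any cited network): each stage consumes the raw signal plus several smoothed versions of the previous stage's prediction, standing in for side outputs with different receptive field sizes.

```python
def box_filter(xs, width):
    """Stand-in for a side output with receptive field `width` (box smoothing)."""
    half = width // 2
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - half), min(len(xs), i + half + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

def refine_stage(signal, recursive_inputs):
    """One stage: fuse the raw signal with all recursive inputs (here by
    simple averaging; a real stage would be a learned convolutional block)."""
    streams = [signal] + recursive_inputs
    return [sum(vals) / len(streams) for vals in zip(*streams)]

def multi_stage_refine(signal, n_stages=3, widths=(1, 3, 5)):
    """Cascade stages, feeding each one multiple recursive inputs from the last."""
    pred = list(signal)
    for _ in range(n_stages):
        side_outputs = [box_filter(pred, w) for w in widths]  # multi-recursive inputs
        pred = refine_stage(signal, side_outputs)
    return pred
```

The essential structural point survives even in this toy: every stage sees several differently-scoped views of the previous stage's output, not a single recursive map.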
1703.08493
2950830676
In the field of connectomics, neuroscientists seek to identify cortical connectivity comprehensively. Neuronal boundary detection from Electron Microscopy (EM) images is often performed to assist the automatic reconstruction of neuronal circuits. However, the segmentation of EM images is a challenging problem, as it requires the detector to detect both filament-like thin and blob-like thick membranes while suppressing ambiguous intracellular structures. In this paper, we propose multi-stage multi-recursive-input fully convolutional networks to address this problem. The multiple recursive inputs for one stage, i.e., the multiple side outputs with different receptive field sizes learned from the lower stage, provide multi-scale contextual boundary information for the subsequent learning. This design is biologically plausible, as it resembles how the human visual system compares different possible segmentation solutions to resolve ambiguous boundaries. Our multi-stage networks are trained end-to-end and achieve promising results on two publicly available EM segmentation datasets, the mouse piriform cortex dataset and the ISBI 2012 EM dataset.
Our work is related to the recursively trained network proposed in @cite_1 , which first trains a Very Deep 2D (VD2D) network and then trains a Very Deep 2D-3D (VD2D3D) network, initialized with the learned 2D representations from VD2D, to generate the boundary map. There are two important differences between the networks in @cite_1 and our method. (1) We use multiple recursive inputs with different receptive field sizes to incorporate multi-level contextual boundary information learned from the previous stage, while VD2D3D only uses the single output of VD2D as its recursive input. (2) We train our network in an end-to-end fashion to co-enhance the learning ability (e.g., detecting membranes while suppressing intracellular structures) of all stages, while @cite_1 learns the deep networks sequentially. Benefiting from end-to-end training and multiple recursive inputs, our networks achieve better performance than VD2D3D while using only 2D EM images.
{ "cite_N": [ "@cite_1" ], "mid": [ "1889898024" ], "abstract": [ "Efforts to automate the reconstruction of neural circuits from 3D electron microscopic (EM) brain images are critical for the field of connectomics. An important computation for reconstruction is the detection of neuronal boundaries. Images acquired by serial section EM, a leading 3D EM technique, are highly anisotropic, with inferior quality along the third dimension. For such images, the 2D max-pooling convolutional network has set the standard for performance at boundary detection. Here we achieve a substantial gain in accuracy through three innovations. Following the trend towards deeper networks for object recognition, we use a much deeper network than previously employed for boundary detection. Second, we incorporate 3D as well as 2D filters, to enable computations that use 3D context. Finally, we adopt a recursively trained architecture in which a first network generates a preliminary boundary map that is provided as input along with the original image to a second network that generates a final boundary map. Back-propagation training is accelerated by ZNN, a new implementation of 3D convolutional networks that uses multicore CPU parallelism for speed. Our hybrid 2D-3D architecture could be more generally applicable to other types of anisotropic 3D images, including video, and our recursive framework for any image labeling problem." ] }
1703.08524
2603454828
A variety of real-world processes (over networks) produce sequences of data whose complex temporal dynamics need to be studied. More specifically, the event timestamps can carry important information about the underlying network dynamics, which is otherwise not available from time series evenly sampled from continuous signals. Moreover, in most complex processes, event sequences and evenly sampled time series data can interact with each other, which renders joint modeling of these two sources of data necessary. To tackle the above problems, in this paper, we utilize the rich framework of (temporal) point processes to model event data and update its intensity function in a timely fashion via synergistic twin Recurrent Neural Networks (RNNs). In the proposed architecture, the intensity function is synergistically modulated by one RNN with asynchronous events as input and another RNN with time series as input. Furthermore, to enhance the interpretability of the model, an attention mechanism for the neural point process is introduced. The whole model, with event type and timestamp prediction output layers, can be trained end-to-end and allows a black-box treatment for modeling the intensity. We substantiate the superiority of our model on synthetic data and three real-world benchmark datasets.
The building blocks of our model are Recurrent Neural Networks (RNNs) @cite_9 @cite_6 and their modern variants, Long Short-Term Memory (LSTM) units @cite_32 @cite_1 and Gated Recurrent Units (GRU) @cite_0 . RNNs are dynamical systems whose next state and output depend on the present network state and input, making them more general than feed-forward networks. RNNs have been explored in perceptual applications for decades; however, training them to learn long-range dynamics can be very difficult, in part due to the vanishing and exploding gradient problem. LSTMs provide a solution by incorporating memory units that allow the network to learn when to forget previous hidden states and when to update hidden states given new information. Recently, RNNs and LSTMs have been successfully applied to large-scale vision @cite_36 , speech @cite_45 , and language @cite_41 problems.
{ "cite_N": [ "@cite_36", "@cite_41", "@cite_9", "@cite_1", "@cite_32", "@cite_6", "@cite_0", "@cite_45" ], "mid": [ "1850742715", "2949888546", "2110485445", "1810943226", "", "", "1924770834", "" ], "abstract": [ "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. 
The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type token distinction.", "This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. 
The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles.", "", "", "In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM.", "" ] }
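The gating mechanism described in the paragraph above can be shown in a scalar toy LSTM step (weight layout and names are our own; real implementations are vectorized):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One scalar LSTM step; `w` maps each gate name to (input, recurrent, bias)."""
    i = sigmoid(w['i'][0] * x + w['i'][1] * h + w['i'][2])    # input gate
    f = sigmoid(w['f'][0] * x + w['f'][1] * h + w['f'][2])    # forget gate
    o = sigmoid(w['o'][0] * x + w['o'][1] * h + w['o'][2])    # output gate
    g = math.tanh(w['g'][0] * x + w['g'][1] * h + w['g'][2])  # candidate value
    c_new = f * c + i * g           # memory cell: forget old, admit new
    h_new = o * math.tanh(c_new)    # exposed hidden state
    return h_new, c_new
```

With the forget gate saturated near 1, the cell state is carried across many steps largely unchanged, which is the property that mitigates vanishing gradients.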
1703.08524
2603454828
A variety of real-world processes (over networks) produce sequences of data whose complex temporal dynamics need to be studied. More specifically, the event timestamps can carry important information about the underlying network dynamics, which is otherwise not available from time series evenly sampled from continuous signals. Moreover, in most complex processes, event sequences and evenly sampled time series data can interact with each other, which renders joint modeling of these two sources of data necessary. To tackle the above problems, in this paper, we utilize the rich framework of (temporal) point processes to model event data and update its intensity function in a timely fashion via synergistic twin Recurrent Neural Networks (RNNs). In the proposed architecture, the intensity function is synergistically modulated by one RNN with asynchronous events as input and another RNN with time series as input. Furthermore, to enhance the interpretability of the model, an attention mechanism for the neural point process is introduced. The whole model, with event type and timestamp prediction output layers, can be trained end-to-end and allows a black-box treatment for modeling the intensity. We substantiate the superiority of our model on synthetic data and three real-world benchmark datasets.
: RNNs have long been a natural tool for standard time series modeling and prediction @cite_28 @cite_14 , whereby the indexed series data points are fed as input to an (unfolded) RNN. In a broader sense, video frames can also be treated as time series, and RNNs are widely used in recent visual analytics works @cite_46 , as well as for speech @cite_45 . RNNs are also intensively adopted for sequence modeling tasks @cite_0 when only order information is considered.
{ "cite_N": [ "@cite_14", "@cite_28", "@cite_0", "@cite_45", "@cite_46" ], "mid": [ "2110371102", "2110242546", "1924770834", "", "2174887554" ], "abstract": [ "Cooperative coevolution decomposes a problem into subcomponents and employs evolutionary algorithms for solving them. Cooperative coevolution has been effective for evolving neural networks. Different problem decomposition methods in cooperative coevolution determine how a neural network is decomposed and encoded which affects its performance. A good problem decomposition method should provide enough diversity and also group interacting variables which are the synapses in the neural network. Neural networks have shown promising results in chaotic time series prediction. This work employs two problem decomposition methods for training Elman recurrent neural networks on chaotic time series problems. The Mackey-Glass, Lorenz and Sunspot time series are used to demonstrate the performance of the cooperative neuro-evolutionary methods. The results show improvement in performance in terms of accuracy when compared to some of the methods from literature.", "We propose a robust learning algorithm and apply it to recurrent neural networks. This algorithm is based on filtering outliers from the data and then estimating parameters from the filtered data. The filtering removes outliers from both the target function and the inputs of the neural network. The filtering is soft in that some outliers are neither completely rejected nor accepted. To show the need for robust recurrent networks, we compare the predictive ability of least squares estimated recurrent networks on synthetic data and on the Puget Power Electric Demand time series. These investigations result in a class of recurrent neural networks, NARMA(p,q), which show advantages over feedforward neural networks for time series with a moving average component. 
Conventional least squares methods of fitting NARMA(p,q) neural network models are shown to suffer from a lack of robustness towards outliers. This sensitivity to outliers is demonstrated on both the synthetic and real data sets. Filtering the Puget Power Electric Demand time series is shown to automatically remove the outliers due to holidays. Neural networks trained on filtered data are then shown to give better predictions than neural networks trained on unfiltered time series.", "In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM.", "", "Anticipating the future actions of a human is a widely studied problem in robotics that requires spatio-temporal reasoning. In this work we propose a deep learning approach for anticipation in sensory-rich robotics applications. We introduce a sensory-fusion architecture which jointly learns to anticipate and fuse information from multiple sensory streams. Our architecture consists of Recurrent Neural Networks (RNNs) that use Long Short-Term Memory (LSTM) units to capture long temporal dependencies. We train our architecture in a sequence-to-sequence prediction manner, and it explicitly learns to predict the future given only a partial temporal context. We further introduce a novel loss layer for anticipation which prevents over-fitting and encourages early anticipation. We use our architecture to anticipate driving maneuvers several seconds before they happen on a natural driving data set of 1180 miles. 
The context for maneuver anticipation comes from multiple sensors installed on the vehicle. Our approach shows significant improvement over the state-of-the-art in maneuver anticipation by increasing precision from 77.4% to 90.5% and recall from 71.2% to 87.4%." ] }
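As a minimal illustration of the "indexed series point fed to an unfolded RNN" setup described above (the helper name and windowing scheme are our own), an evenly sampled series is typically converted into (window, next-value) training pairs:

```python
def make_windows(series, k):
    """Turn an evenly sampled series into one-step-ahead (input, target) pairs:
    the k points before index t form the RNN input, the point at t is the target."""
    return [(series[t - k:t], series[t]) for t in range(k, len(series))]
```

Each window is then unrolled through the recurrent cell one point per time step; note that time itself is only an index here, not an input feature, in contrast to the event-sequence setting below.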
1703.08524
2603454828
A variety of real-world processes (over networks) produce sequences of data whose complex temporal dynamics need to be studied. More specifically, the event timestamps can carry important information about the underlying network dynamics, which is otherwise not available from time series evenly sampled from continuous signals. Moreover, in most complex processes, event sequences and evenly sampled time series data can interact with each other, which renders joint modeling of these two sources of data necessary. To tackle the above problems, in this paper, we utilize the rich framework of (temporal) point processes to model event data and update its intensity function in a timely fashion via synergistic twin Recurrent Neural Networks (RNNs). In the proposed architecture, the intensity function is synergistically modulated by one RNN with asynchronous events as input and another RNN with time series as input. Furthermore, to enhance the interpretability of the model, an attention mechanism for the neural point process is introduced. The whole model, with event type and timestamp prediction output layers, can be trained end-to-end and allows a black-box treatment for modeling the intensity. We substantiate the superiority of our model on synthetic data and three real-world benchmark datasets.
: In contrast, event sequences with timestamps of their occurrence, which are asynchronously and randomly distributed over the continuous time space, are another typical input type for RNNs @cite_11 @cite_52 (despite the 'time series' wording in the title). One key difference from the first scenario is that the timestamp, or the time duration between events (together with other features), is taken as input to the RNNs. By doing so, (long-range) event dependency can be effectively encoded.
{ "cite_N": [ "@cite_52", "@cite_11" ], "mid": [ "2517259736", "2509830164" ], "abstract": [ "Accuracy and interpretation are two goals of any successful predictive models. Most existing works have to suffer the tradeoff between the two by either picking complex black box models such as recurrent neural networks (RNN) or relying on less accurate traditional models with better interpretation such as logistic regression. To address this dilemma, we present REverse Time AttentIoN model (RETAIN) for analyzing Electronic Health Records (EHR) data that achieves high accuracy while remaining clinically interpretable. RETAIN is a two-level neural attention model that can find influential past visits and significant clinical variables within those visits (e.g,. key diagnoses). RETAIN mimics physician practice by attending the EHR data in a reverse time order so that more recent clinical visits will likely get higher attention. Experiments on a large real EHR dataset of 14 million visits from 263K patients over 8 years confirmed the comparable predictive accuracy and computational scalability to the state-of-the-art methods such as RNN. Finally, we demonstrate the clinical interpretation with concrete examples from RETAIN.", "Large volumes of event data are becoming increasingly available in a wide variety of applications, such as healthcare analytics, smart cities and social network analysis. The precise time interval or the exact distance between two events carries a great deal of information about the dynamics of the underlying systems. These characteristics make such data fundamentally different from independently and identically distributed data and time-series data where time and space are treated as indexes rather than random variables. Marked temporal point processes are the mathematical framework for modeling event data with covariates. 
However, typical point process models often make strong assumptions about the generative processes of the event data, which may or may not reflect the reality, and the specifically fixed parametric assumptions also have restricted the expressive power of the respective processes. Can we obtain a more expressive model of marked temporal point processes? How can we learn such a model from massive data? In this paper, we propose the Recurrent Marked Temporal Point Process (RMTPP) to simultaneously model the event timings and the markers. The key idea of our approach is to view the intensity function of a temporal point process as a nonlinear function of the history, and use a recurrent neural network to automatically learn a representation of influences from the event history. We develop an efficient stochastic gradient algorithm for learning the model parameters which can readily scale up to millions of events. Using both synthetic and real world datasets, we show that, in the case where the true models have parametric specifications, RMTPP can learn the dynamics of such models without the need to know the actual parametric forms; and in the case where the true models are unknown, RMTPP can also learn the dynamics and achieve better predictive performance than other parametric alternatives based on particular prior assumptions." ] }
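The key point in the paragraph above, that the inter-event duration itself becomes an input feature, can be sketched as a small encoding helper (a hypothetical hand-written encoding; RMTPP's actual event embedding is learned):

```python
def encode_events(timestamps, marks, n_types):
    """Encode an asynchronous event stream for an RNN: one-hot event type
    plus the elapsed time since the previous event as an extra feature."""
    inputs, prev_t = [], timestamps[0]
    for t, m in zip(timestamps, marks):
        one_hot = [1.0 if k == m else 0.0 for k in range(n_types)]
        inputs.append(one_hot + [t - prev_t])  # the duration carries the dynamics
        prev_t = t
    return inputs
```

Feeding the duration explicitly is what distinguishes this input type from the evenly sampled case, where time is merely an index.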
1703.08524
2603454828
A variety of real-world processes (over networks) produce sequences of data whose complex temporal dynamics need to be studied. More specifically, the event timestamps can carry important information about the underlying network dynamics, which otherwise is not available from time series evenly sampled from continuous signals. Moreover, in most complex processes, event sequences and evenly-sampled time series data can interact with each other, which renders joint modeling of those two sources of data necessary. To tackle the above problems, in this paper, we utilize the rich framework of (temporal) point processes to model event data and timely update its intensity function by the synergic twin Recurrent Neural Networks (RNNs). In the proposed architecture, the intensity function is synergistically modulated by one RNN with asynchronous events as input and another RNN with time series as input. Furthermore, to enhance the interpretability of the model, the attention mechanism for the neural point process is introduced. The whole model with event type and timestamp prediction output layers can be trained end-to-end and allows a black-box treatment for modeling the intensity. We substantiate the superiority of our model on synthetic data and three real-world benchmark datasets.
Prediction accuracy and model interpretability are two goals of many successful predictive methods. Existing works often have to trade one for the other, either picking complex black-box models such as deep neural networks, or relying on more interpretable traditional models such as logistic regression, which are often less accurate than state-of-the-art deep neural network models. Despite the promising gain in accuracy, RNNs are relatively difficult to interpret. There have been several attempts to interpret RNNs @cite_52 @cite_10 @cite_53 . However, they either compute the attention score with the same function regardless of the affected point's dimension @cite_52 , or only consider the hidden state of the decoder for sequence prediction @cite_10 @cite_53 . For a multi-dimensional point process, past events should influence the intensity function differently for each dimension. As a result, we explicitly assign a different attention function to each dimension, each modeled by its respective intensity function, leading to an infectivity-matrix-based attention mechanism which will be detailed later in this paper.
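The per-dimension attention idea above can be sketched as a softmax over event-history scores, computed separately for each target dimension so that the same history is weighted differently per dimension. This is an illustrative sketch only: the scoring functions, names, and embedding representation are our assumptions, not taken from the paper.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def per_dimension_context(history, score_fns):
    """One attention distribution per target dimension.

    history:   list of past-event embeddings (lists of floats)
    score_fns: score_fns[d] maps an embedding to a scalar score for
               target dimension d, so each dimension attends to the
               same history differently (the infectivity-style idea).
    Returns one attended context vector per dimension.
    """
    contexts = []
    for score in score_fns:
        w = softmax([score(h) for h in history])
        # weighted sum of history embeddings under this dimension's weights
        ctx = [sum(wi * h[k] for wi, h in zip(w, history))
               for k in range(len(history[0]))]
        contexts.append(ctx)
    return contexts
```

Each context vector could then feed the corresponding dimension's intensity function, which is what makes the attention scores interpretable as per-dimension influences.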
{ "cite_N": [ "@cite_53", "@cite_10", "@cite_52" ], "mid": [ "1923211482", "2950178297", "2517259736" ], "abstract": [ "Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding in the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. In this paper we focus on the case where the input also has a rich structure and the input and output structures are somehow related. We describe systems that learn to attend to different places in the input, for each element of the output, for a variety of tasks: machine translation, image caption generation, video clip description, and speech recognition . All these systems are based on a shared set of building blocks: gated recurrent neural networks and convolutional neural networks , along with trained attention mechanisms. We report on experimental results with these systems, showing impressively good performance and the advantage of the attention mechanism.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "Accuracy and interpretation are two goals of any successful predictive models. 
Most existing works have to suffer the tradeoff between the two by either picking complex black box models such as recurrent neural networks (RNN) or relying on less accurate traditional models with better interpretation such as logistic regression. To address this dilemma, we present REverse Time AttentIoN model (RETAIN) for analyzing Electronic Health Records (EHR) data that achieves high accuracy while remaining clinically interpretable. RETAIN is a two-level neural attention model that can find influential past visits and significant clinical variables within those visits (e.g,. key diagnoses). RETAIN mimics physician practice by attending the EHR data in a reverse time order so that more recent clinical visits will likely get higher attention. Experiments on a large real EHR dataset of 14 million visits from 263K patients over 8 years confirmed the comparable predictive accuracy and computational scalability to the state-of-the-art methods such as RNN. Finally, we demonstrate the clinical interpretation with concrete examples from RETAIN." ] }
1703.08524
2603454828
A variety of real-world processes (over networks) produce sequences of data whose complex temporal dynamics need to be studied. More specifically, the event timestamps can carry important information about the underlying network dynamics, which otherwise is not available from time series evenly sampled from continuous signals. Moreover, in most complex processes, event sequences and evenly-sampled time series data can interact with each other, which renders joint modeling of those two sources of data necessary. To tackle the above problems, in this paper, we utilize the rich framework of (temporal) point processes to model event data and timely update its intensity function by the synergic twin Recurrent Neural Networks (RNNs). In the proposed architecture, the intensity function is synergistically modulated by one RNN with asynchronous events as input and another RNN with time series as input. Furthermore, to enhance the interpretability of the model, the attention mechanism for the neural point process is introduced. The whole model with event type and timestamp prediction output layers can be trained end-to-end and allows a black-box treatment for modeling the intensity. We substantiate the superiority of our model on synthetic data and three real-world benchmark datasets.
2) @cite_16 @cite_27 : the model captures the 'rich-get-richer' mechanism characterized by a compact intensity function, which has recently been used for popularity prediction @cite_27 .
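As a concrete reading of the 'rich-get-richer' mechanism, a reinforced Poisson process lets the event rate grow with the number of events observed so far. The sketch below is minimal and the parameter names (`fitness`, `m`, `relax`) are illustrative, loosely following the popularity-prediction formulation cited above.

```python
def reinforced_poisson_intensity(t, n_past, fitness, m, relax):
    """Rich-get-richer intensity: the more events so far, the higher
    the rate of new ones.

    n_past:  number of events observed before time t (the reinforcement)
    fitness: per-item attractiveness
    m:       initial (prior) event count, so brand-new items have rate > 0
    relax:   time-dependent relaxation/aging function
    """
    return fitness * relax(t) * (m + n_past)
```

With a constant relaxation function the rate is simply proportional to the accumulated event count, which is the compactness the paragraph alludes to: the whole history enters only through `n_past`.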
{ "cite_N": [ "@cite_27", "@cite_16" ], "mid": [ "1508765177", "2145037371" ], "abstract": [ "An ability to predict the popularity dynamics of individual items within a complex evolving system has important implications in an array of areas. Here we propose a generative probabilistic framework using a reinforced Poisson process to explicitly model the process through which individual items gain their popularity. This model distinguishes itself from existing models via its capability of modeling the arrival process of popularity and its remarkable power at predicting the popularity of individual items. It possesses the flexibility of applying Bayesian treatment to further improve the predictive power using a conjugate prior. Extensive experiments on a longitudinal citation dataset demonstrate that this model consistently outperforms existing popularity prediction methods.", "The models surveyed include generalized Polya urns, reinforced random walks, interacting urn models, and continuous reinforced processes. Emphasis is on methods and results, with sketches provided of some proofs. Applications are discussed in statistics, biology, economics and a number of other areas." ] }
1703.08524
2603454828
A variety of real-world processes (over networks) produce sequences of data whose complex temporal dynamics need to be studied. More specifically, the event timestamps can carry important information about the underlying network dynamics, which otherwise is not available from time series evenly sampled from continuous signals. Moreover, in most complex processes, event sequences and evenly-sampled time series data can interact with each other, which renders joint modeling of those two sources of data necessary. To tackle the above problems, in this paper, we utilize the rich framework of (temporal) point processes to model event data and timely update its intensity function by the synergic twin Recurrent Neural Networks (RNNs). In the proposed architecture, the intensity function is synergistically modulated by one RNN with asynchronous events as input and another RNN with time series as input. Furthermore, to enhance the interpretability of the model, the attention mechanism for the neural point process is introduced. The whole model with event type and timestamp prediction output layers can be trained end-to-end and allows a black-box treatment for modeling the intensity. We substantiate the superiority of our model on synthetic data and three real-world benchmark datasets.
3) @cite_50 : Recently, the Hawkes process has received wide attention in modeling network cascades @cite_38 @cite_49 , community structure @cite_47 , viral diffusion and activity shaping @cite_25 , criminology @cite_18 , optimization and intervention in social networks @cite_31 , recommendation systems @cite_3 , and verification of crowd-generated data @cite_37 . As an illustrative example used extensively in this paper, we write out its intensity function: where @math is the infectivity matrix, indicating the strength of directional influence from dimension @math to dimension @math . It explicitly uses a triggering term to model the excitation effect of past events, where the parameter @math denotes the decay bandwidth. The model was originally motivated by the analysis of earthquakes and their aftershocks @cite_8 .
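The triggering term just described can be sketched directly: each past event in dimension i adds an exponentially decaying contribution, scaled by the infectivity entry for (i, d), to dimension d's intensity. This is a minimal sketch assuming the exponential kernel; the variable names (`mu`, `A`, `w`) are ours.

```python
import math

def hawkes_intensity(d, t, events, mu, A, w):
    """Multivariate Hawkes intensity for dimension d at time t.

    events: list of (t_i, d_i) pairs, the timestamped event history
    mu:     base rates; mu[d] is the exogenous intensity of dimension d
    A:      infectivity matrix; A[i][j] is the influence of dim i on dim j
    w:      decay bandwidth of the exponential triggering kernel
    """
    lam = mu[d]
    for t_i, d_i in events:
        if t_i < t:
            # each past event excites dimension d, decaying exponentially
            lam += A[d_i][d] * math.exp(-w * (t - t_i))
    return lam
```

Note how the infectivity matrix makes the cross-dimension influence explicit, which is exactly the structure the attention mechanism in this paper is meant to mirror.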
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_31", "@cite_37", "@cite_8", "@cite_3", "@cite_49", "@cite_50", "@cite_47", "@cite_25" ], "mid": [ "2101645017", "", "2435064600", "2545641146", "2064758233", "2949332622", "1724705265", "2069849731", "2400109989", "" ], "abstract": [ "How will the behaviors of individuals in a social network be influenced by their neighbors, the authorities and the communities in a quantitative way? Such critical and valuable knowledge is unfortunately not readily accessible and we tend to only observe its manifestation in the form of recurrent and time-stamped events occurring at the individuals involved in the social network. It is an important yet challenging problem to infer the underlying network of social inference based on the temporal patterns of those historical events that we can observe. In this paper, we propose a convex optimization approach to discover the hidden network of social influence by modeling the recurrent events at different individuals as multidimensional Hawkes processes, emphasizing the mutual-excitation nature of the dynamics of event occurrence. Furthermore, our estimation procedure, using nuclear and l1 norm regularization simultaneously on the parameters, is able to take into account the prior knowledge of the presence of neighbor interaction, authority influence, and community coordination in the social network. To efficiently solve the resulting optimization problem, we also design an algorithm ADM4 which combines techniques of alternating direction method of multipliers and majorization minimization. We experimented with both synthetic and real world data sets, and showed that the proposed method can discover the hidden network more accurately and produce a better predictive model than several baselines.", "", "We consider the problem of how to optimize multi-stage campaigning over social networks. 
The dynamic programming framework is employed to balance the high present reward and large penalty on low future outcome in the presence of extensive uncertainties. In particular, we establish theoretical foundations of optimal campaigning over social networks where the user activities are modeled as a multivariate Hawkes process, and we derive a time dependent linear relation between the intensity of exogenous events and several commonly used objective functions of campaigning. We further develop a convex dynamic programming framework for determining the optimal intervention policy that prescribes the required level of external drive at each stage for the desired campaigning result. Experiments on both synthetic data and the real-world MemeTracker dataset show that our algorithm can steer the user activities for optimal campaigning much more accurately than baselines.", "Online knowledge repositories typically rely on their users or dedicated editors to evaluate the reliability of their contents. These explicit feedback mechanisms can be viewed as noisy measurements of both information reliability and information source trustworthiness. Can we leverage these noisy measurements, often biased, to distill a robust, unbiased and interpretable measure of both notions? In this paper, we argue that the large volume of digital traces left by the users within knowledge repositories also reflect information reliability and source trustworthiness. In particular, we propose a temporal point process modeling framework which links the temporal behavior of the users to information reliability and source trustworthiness. Furthermore, we develop an efficient convex optimization procedure to learn the parameters of the model from historical traces of the evaluations provided by these users. 
Experiments on real-world data gathered from Wikipedia and Stack Overflow show that our modeling framework accurately predicts evaluation events, provides an interpretable measure of information reliability and source trustworthiness, and yields interesting insights about real-world events.", "Abstract This article discusses several classes of stochastic models for the origin times and magnitudes of earthquakes. The models are compared for a Japanese data set for the years 1885–1980 using likelihood methods. For the best model, a change of time scale is made to investigate the deviation of the data from the model. Conventional graphical methods associated with stationary Poisson processes can be used with the transformed time scale. For point processes, effective use of such residual analysis makes it possible to find features of the data set that are not captured in the model. Based on such analyses, the utility of seismic quiescence for the prediction of a major earthquake is investigated.", "Poisson factorization is a probabilistic model of users and items for recommendation systems, where the so-called implicit consumer data is modeled by a factorized Poisson distribution. There are many variants of Poisson factorization methods who show state-of-the-art performance on real-world recommendation tasks. However, most of them do not explicitly take into account the temporal behavior and the recurrent activities of users which is essential to recommend the right item to the right user at the right time. In this paper, we introduce Recurrent Poisson Factorization (RPF) framework that generalizes the classical PF methods by utilizing a Poisson process for modeling the implicit feedback. RPF treats time as a natural constituent of the model and brings to the table a rich family of time-sensitive factorization models. 
To elaborate, we instantiate several variants of RPF who are capable of handling dynamic user preferences and item specification (DRPF), modeling the social-aspect of product adoption (SRPF), and capturing the consumption heterogeneity among users and items (HRPF). We also develop a variational algorithm for approximate posterior inference that scales up to massive data sets. Furthermore, we demonstrate RPF's superior performance over many state-of-the-art methods on synthetic dataset, and large scale real-world datasets on music streaming logs, and user-item interactions in M-Commerce platforms.", "Information diffusion in online social networks is affected by the underlying network topology, but it also has the power to change it. Online users are constantly creating new links when exposed to new information sources, and in turn these links are alternating the way information spreads. However, these two highly intertwined stochastic processes, information diffusion and network evolution, have been predominantly studied separately, ignoring their co-evolutionary dynamics. We propose a temporal point process model, COEVOLVE, for such joint dynamics, allowing the intensity of one process to be modulated by that of the other. This model allows us to efficiently simulate interleaved diffusion and network events, and generate traces obeying common diffusion and network patterns observed in real-world networks. Furthermore, we also develop a convex optimization framework to learn the parameters of the model from historical diffusion and network evolution traces. We experimented with both synthetic data and data gathered from Twitter, and show that our model provides a good fit to the data as well as more accurate predictions than alternatives.", "SUMMARY In recent years methods of data analysis for point processes have received some attention, for example, by Cox & Lewis (1966) and Lewis (1964). 
In particular Bartlett (1963a,b) has introduced methods of analysis based on the point spectrum. Theoretical models are relatively sparse. In this paper the theoretical properties of a class of processes with particular reference to the point spectrum or corresponding covariance density functions are discussed. A particular result is a self-exciting process with the same second-order properties as a certain doubly stochastic process. These are not distinguishable by methods of data analysis based on these properties.", "The real social network and associated communities are often hidden under the declared friend or group lists in social networks. We usually observe the manifestation of these hidden networks and communities in the form of recurrent and time-stamped individuals’ activities in the social network. Inferring the underlying network and finding coherent communities are therefore two key challenges in social networks analysis. In this paper, we address the following question: Could we simultaneously detect community structure and network infectivity among individuals from their activities? Based on the fact that the two characteristics intertwine and that knowing one will help better revealing the other, we propose a multidimensional Hawkes process that can address them simultaneously. To this end, we parametrize the network infectivity in terms of individuals’ participation in communities and the popularity of each individual. We show that this modeling approach has many benefits, both conceptually and experimentally. We utilize Bayesian variational inference to design NetCodec, an efficient inference algorithm which is verified with both synthetic and real world data sets. The experiments show that NetCodec can discover the underlying network infectivity and community structure more accurately than baseline method.", "" ] }
1703.08524
2603454828
A variety of real-world processes (over networks) produce sequences of data whose complex temporal dynamics need to be studied. More specifically, the event timestamps can carry important information about the underlying network dynamics, which otherwise is not available from time series evenly sampled from continuous signals. Moreover, in most complex processes, event sequences and evenly-sampled time series data can interact with each other, which renders joint modeling of those two sources of data necessary. To tackle the above problems, in this paper, we utilize the rich framework of (temporal) point processes to model event data and timely update its intensity function by the synergic twin Recurrent Neural Networks (RNNs). In the proposed architecture, the intensity function is synergistically modulated by one RNN with asynchronous events as input and another RNN with time series as input. Furthermore, to enhance the interpretability of the model, the attention mechanism for the neural point process is introduced. The whole model with event type and timestamp prediction output layers can be trained end-to-end and allows a black-box treatment for modeling the intensity. We substantiate the superiority of our model on synthetic data and three real-world benchmark datasets.
4) @cite_19 : it can be regarded as a generalization of the Hawkes process that adds a self-inhibiting term to account for the inhibiting effects of past events.
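A rough sketch of such a generalization: exciting events raise the intensity while inhibiting events (e.g., the inspections in the reactive point process cited above) lower it, both with exponentially decaying effect. The parameter names, the shared decay bandwidth, and the clipping at zero are our assumptions, not the cited model's exact form.

```python
import math

def self_correcting_intensity(t, events, inhibitors, mu, a, b, w):
    """Intensity with both excitation and inhibition.

    events:     timestamps of exciting events (raise the intensity)
    inhibitors: timestamps of inhibiting events (lower the intensity)
    mu:         base rate; a, b: excitation / inhibition magnitudes
    w:          shared exponential decay bandwidth
    """
    lam = mu
    for t_i in events:
        if t_i < t:
            lam += a * math.exp(-w * (t - t_i))   # self-excitation
    for s_j in inhibitors:
        if s_j < t:
            lam -= b * math.exp(-w * (t - s_j))   # self-inhibition
    return max(lam, 0.0)  # an intensity must stay non-negative
```

Setting `b = 0` recovers a one-dimensional Hawkes process, which is why the paragraph describes this family as a generalization.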
{ "cite_N": [ "@cite_19" ], "mid": [ "2090320383" ], "abstract": [ "Massachusetts Institute of Technology and the University of Washington Reactive point processes (RPPs) are a new statistical model designed for predicting discrete events in time, based on past history. RPPs were developed to handle an important problem within the domain of electrical grid reliability: short term prediction of electrical grid failures (“manhole events”), including outages, fires, explosions, and smoking manholes, which can cause threats to public safety and reliability of electrical service in cities. RPPs incorporate self-exciting, self-regulating, and saturating components. The self-excitement occurs as a result of a past event, which causes a temporary rise in vulnerability to future events. The self-regulation occurs as a result of an external inspection which temporarily lowers vulnerability to future events. RPPs can saturate when too many events or inspections occur close together, which ensures that the probability of an event stays within a realistic range. Two of the operational challenges for power companies are i) making continuous-time failure predictions, and ii) cost benefit analysis for decision making and proactive maintenance. RPPs are naturally suited for handling both of these challenges. We use the model to predict power-grid failures in Manhattan over a short term horizon, and use to provide a cost benefit analysis of different proactive maintenance programs." ] }
1703.08428
2591734415
Although we may complain about meetings, they are an essential part of an information worker's work life. Consequently, busy people spend a significant amount of time scheduling meetings. We present Calendar.help, a system that provides fast, efficient scheduling through structured workflows. Users interact with the system via email, delegating their scheduling needs to the system as if it were a human personal assistant. Common scheduling scenarios are broken down using well-defined workflows and completed as a series of microtasks that are automated when possible and executed by a human otherwise. Unusual scenarios fall back to a trained human assistant executing an unstructured macrotask. We describe the iterative approach we used to develop Calendar.help, and share the lessons learned from scheduling thousands of meetings during a year of real-world deployments. Our findings provide insight into how complex information tasks can be broken down into repeatable components that can be executed efficiently to improve productivity.
Scheduling meetings is a difficult task that requires communication, coordination, and negotiation among multiple parties @cite_19 , where each person may have conflicting availabilities, and each may use different, non-interoperable tools @cite_26 . The negotiation is itself complex, as meeting times and locations are resources often governed by intricate constraints and external dependencies @cite_0 . Furthermore, the parties may be geographically dispersed, introducing additional constraints due to time zone differences and the need for communication technologies for remote meetings @cite_33 . This negotiation often needs to occur asynchronously, sometimes requiring several days for the parties to reach consensus @cite_9 @cite_30 . Once a meeting is scheduled, it needs continuous maintenance, as new events often prompt meeting updates and re-schedules.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_33", "@cite_9", "@cite_0", "@cite_19" ], "mid": [ "", "2036256774", "2103787992", "2034775209", "1486300940", "2111971407" ], "abstract": [ "", "This paper reports from a field study of a hospital ward and discusses how people achieve coordination through the use of a wide range of interrelated non-digital artifacts, like whiteboards, work schedules, examination sheets, care records, post-it notes etc. These artifacts have multiple roles and functions which in combination facilitate location awareness, continuous coordination, cooperative planning and status overview. We described how actors achieve coordination by using different aspects of these artifacts: their material qualities, the structure they provide as templates and the signs inscribed upon them that are only meaningful to knowledgeable actors. We finally discuss the implication for the design of CSCW tools from the study.", "We conducted interviews with sixteen members of teams that worked across global time zone differences. Despite time zone differences of about eight hours, collaborators still found time to synchronously meet. The interviews identified the diverse strategies teams used to find time windows to interact, which often included times outside of the normal workday and connecting from home to participate. Recent trends in increased work connectivity from home and blurred boundaries between work and home enabled more scheduling flexibility. While email use was understandably prevalent, there was also general interest in video, although obstacles remain for widespread usage. We propose several design implications for supporting this growing population of workers that need to span global time zone differences.", "Office automation is used by groups of people with complex communication needs to help them reach business goals such as scheduling, tracking, reviewing, and delegating. 
Effective individual and group decisions are heavily dependent on communication protocols and social conventions. Because these conventions are so ingrained, they are sometimes not readily available to conscious inspection during the design of communication systems. Even more problematic, system designers may not have first hand knowledge of the conventions and protocol for the range of environments in which their systems will be used. Nevertheless, office systems must work in tandem with these conventions. Wang Laboratories has a continuing program of research directed at identifying the psychological and social factors that come into play during the adoption and use of computer communication systems and the implications of these factors for the design of those systems. Highlights of a three year program of research are presented covering implications for voice mail, electronic mail, and electronic calendars.", "Automating routine organizational tasks, such as meeting scheduling, requires a careful balance between the individual (respecting his or her privacy and personal preferences) and the organization (making efficient use of time and other resources). We argue that meeting scheduling is an inherently distributed process, and that negotiating over meetings can be viewed as a distributed search process. Keeping the process tractable requires introducing heuristics to guide distributed schedulers' decisions about what information to exchange and whether or not to propose the same tentative time for several meetings. While we have intuitions about how such heuristics could affect scheduling performance and efficiency, verifying these intuitions requires a more formal model of the meeting schedule problem and process. We present our preliminary work toward this goal, as well as experimental results that validate some of the predictions of our formal model. 
We also investigate scheduling in overconstrained situations, namely, scheduling of high priority meetings at short notice, which requires cancellation and rescheduling of previously scheduled meetings. Our model provides a springboard into deeper investigations of important issues in distributed artificial intelligence as well, and we outline our ongoing work in this direction.", "Many systems, applications, and features that support cooperative work share two characteristics: A significant investment has been made in their development, and their successes have consistently fallen far short of expectations. Examination of several application areas reveals a common dynamic: 1) A factor contributing to the application’s failure is the disparity between those who will benefit from an application and those who must do additional work to support it. 2) A factor contributing to the decision-making failure that leads to ill-fated development efforts is the unique lack of management intuition for CSCW applications. 3) A factor contributing to the failure to learn from experience is the extreme difficulty of evaluating these applications. These three problem areas escape adequate notice due to two natural but ultimately misleading analogies: the analogy between multi-user application programs and multi-user computer systems, and the analogy between multi-user applications and single-user applications. These analogies influence the way we think about cooperative work applications and designers and decision-makers fail to recognize their limits. Several CSCW application areas are examined in some detail. Introduction. An illustrative example: automatic meeting" ] }
1703.08428
2591734415
Although we may complain about meetings, they are an essential part of an information worker's work life. Consequently, busy people spend a significant amount of time scheduling meetings. We present Calendar.help, a system that provides fast, efficient scheduling through structured workflows. Users interact with the system via email, delegating their scheduling needs to the system as if it were a human personal assistant. Common scheduling scenarios are broken down using well-defined workflows and completed as a series of microtasks that are automated when possible and executed by a human otherwise. Unusual scenarios fall back to a trained human assistant executing an unstructured macrotask. We describe the iterative approach we used to develop Calendar.help, and share the lessons learned from scheduling thousands of meetings during a year of real-world deployments. Our findings provide insight into how complex information tasks can be broken down into repeatable components that can be executed efficiently to improve productivity.
Microtask workflows are increasingly being used to accomplish complex, multi-step tasks @cite_6 , such as taxonomy creation @cite_36 , itinerary planning @cite_21 , writing @cite_25 , and even real-time conversations @cite_3 . Workflows have even been used to assemble flash teams of expert workers with different specialties @cite_14 . Prior work has demonstrated several advantages of breaking a monolithic task into microtasks, including making tasks easier for workers to complete @cite_34 , producing higher-quality outcomes @cite_37 , reducing coordination and collaboration overheads @cite_25 , and facilitating partial automation of a larger task @cite_1 . Emerging services such as Facebook M @cite_31 , X.ai, and Clara Labs @cite_22 @cite_8 use a combination of automation and human labor while providing users with a seamless experience. However, very little has been shared about how these systems actually work and are used.
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_22", "@cite_8", "@cite_36", "@cite_21", "@cite_1", "@cite_3", "@cite_6", "@cite_31", "@cite_34", "@cite_25" ], "mid": [ "2124793952", "2028953510", "", "", "2120396827", "2166145477", "2109021302", "2163986367", "2146286563", "", "2561099668", "2404427926" ], "abstract": [ "A large, seemingly overwhelming task can sometimes be transformed into a set of smaller, more manageable microtasks that can each be accomplished independently. For example, it may be hard to subjectively rank a large set of photographs, but easy to sort them in spare moments by making many pairwise comparisons. In crowdsourcing systems, microtasking enables unskilled workers with limited commitment to work together to complete tasks they would not be able to do individually. We explore the costs and benefits of decomposing macrotasks into microtasks for three task categories: arithmetic, sorting, and transcription. We find that breaking these tasks into microtasks results in longer overall task completion times, but higher quality outcomes and a better experience that may be more resilient to interruptions. These results suggest that microtasks can help people complete high quality work in interruption-driven environments.", "We introduce flash teams, a framework for dynamically assembling and managing paid experts from the crowd. Flash teams advance a vision of expert crowd work that accomplishes complex, interdependent goals such as engineering and design. These teams consist of sequences of linked modular tasks and handoffs that can be computationally managed. Interactive systems reason about and manipulate these teams' structures: for example, flash teams can be recombined to form larger organizations and authored automatically in response to a user's request. Flash teams can also hire more people elastically in reaction to task needs, and pipeline intermediate output to accelerate completion times. 
To enable flash teams, we present Foundry, an end-user authoring platform and runtime manager. Foundry allows users to author modular tasks, then manages teams through handoffs of intermediate work. We demonstrate that Foundry and flash teams enable crowdsourcing of a broad class of goals including design prototyping, course development, and film animation, in half the work time of traditional self-managed teams.", "", "", "Taxonomies are a useful and ubiquitous way of organizing information. However, creating organizational hierarchies is difficult because the process requires a global understanding of the objects to be categorized. Usually one is created by an individual or a small group of people working together for hours or even days. Unfortunately, this centralized approach does not work well for the large, quickly changing datasets found on the web. Cascade is an automated workflow that allows crowd workers to spend as little as 20 seconds each while collectively making a taxonomy. We evaluate Cascade and show that on three datasets its quality is 80-90% of that of experts. Cascade has a competitive cost to expert information architects, despite taking six times more human labor. Fortunately, this labor can be parallelized such that Cascade will run in as fast as four minutes instead of hours or days.", "An important class of tasks that are underexplored in current human computation systems are complex tasks with global constraints. One example of such a task is itinerary planning, where solutions consist of a sequence of activities that meet requirements specified by the requester. In this paper, we focus on the crowdsourcing of such plans as a case study of constraint-based human computation tasks and introduce a collaborative planning system called Mobi that illustrates a novel crowdware paradigm. Mobi presents a single interface that enables crowd participants to view the current solution context and make appropriate contributions based on current needs.
We conduct experiments that explain how Mobi enables a crowd to effectively and collaboratively resolve global constraints, and discuss how the design principles behind Mobi can more generally facilitate a crowd to tackle problems involving global constraints.", "We show how machine learning and inference can be harnessed to leverage the complementary strengths of humans and computational agents to solve crowdsourcing tasks. We construct a set of Bayesian predictive models from data and describe how the models operate within an overall crowd-sourcing architecture that combines the efforts of people and machine vision on the task of classifying celestial bodies defined within a citizens' science project named Galaxy Zoo. We show how learned probabilistic models can be used to fuse human and machine contributions and to predict the behaviors of workers. We employ multiple inferences in concert to guide decisions on hiring and routing workers to tasks so as to maximize the efficiency of large-scale crowdsourcing processes based on expected utility.", "Despite decades of research attempting to establish conversational interaction between humans and computers, the capabilities of automated conversational systems are still limited. In this paper, we introduce Chorus, a crowd-powered conversational assistant. When using Chorus, end users converse continuously with what appears to be a single conversational partner. Behind the scenes, Chorus leverages multiple crowd workers to propose and vote on responses. A shared memory space helps the dynamic crowd workforce maintain consistency, and a game-theoretic incentive mechanism helps to balance their efforts between proposing and voting. Studies with 12 end users and 100 crowd workers demonstrate that Chorus can provide accurate, topical responses, answering nearly 93% of user queries appropriately, and staying on-topic in over 95% of responses.
We also observed that Chorus has advantages over pairing an end user with a single crowd worker and end users completing their own tasks in terms of speed, quality, and breadth of assistance. Chorus demonstrates a new future in which conversational assistants are made usable in the real world by combining human and machine intelligence, and may enable a useful new way of interacting with the crowds powering other systems.", "Paid crowd work offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale. But it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework. Can we foresee a future crowd workplace in which we would want our children to participate? This paper frames the major challenges that stand in the way of this goal. Drawing on theory from organizational behavior and distributed computing, as well as direct feedback from workers, we outline a framework that will enable crowd work that is complex, collaborative, and sustainable. The framework lays out research challenges in twelve major areas: workflow, task assignment, hierarchy, real-time response, synchronous collaboration, quality control, crowds guiding AIs, AIs guiding crowds, platforms, job design, reputation, and motivation.", "", "What happens when we algorithmically break complex productivity tasks down into microtasks? At Microsoft Research, the author and her team are accelerating a shift toward microproductivity to make it easy for people to get big things done one small step at a time.", "This paper presents the MicroWriter, a system that decomposes the task of writing into three types of microtasks to produce a single report: 1) generating ideas, 2) labeling ideas to organize them, and 3) writing paragraphs given a few related ideas. 
Because each microtask can be completed individually with limited awareness of what has been already done and what others are doing, this decomposition can change the experience of collaborative writing. Prior work has used microtasking to support collaborative writing with unaffiliated crowd workers. To instead study its impact on collaboration among writers with context and investment in the writing project, we asked six groups of co-workers (or 19 people in total) to use the MicroWriter in a synchronous, collocated setting to write a report about a shared work goal. Our observations suggest ways that recent advances in microtasking and crowd work can be used to support collaborative writing within preexisting groups." ] }
1703.08428
2591734415
Although we may complain about meetings, they are an essential part of an information worker's work life. Consequently, busy people spend a significant amount of time scheduling meetings. We present Calendar.help, a system that provides fast, efficient scheduling through structured workflows. Users interact with the system via email, delegating their scheduling needs to the system as if it were a human personal assistant. Common scheduling scenarios are broken down using well-defined workflows and completed as a series of microtasks that are automated when possible and executed by a human otherwise. Unusual scenarios fall back to a trained human assistant executing an unstructured macrotask. We describe the iterative approach we used to develop Calendar.help, and share the lessons learned from scheduling thousands of meetings during a year of real-world deployments. Our findings provide insight into how complex information tasks can be broken down into repeatable components that can be executed efficiently to improve productivity.
Bardram and Bossen argue that it is difficult for a single information system to support the 'web of coordinative artifacts' that people employ to schedule their meetings @cite_26 . Our goal with Calendar.help is not to obviate this network of interdependent scheduling tools and processes, but rather to introduce a flexible and adaptive virtual assistant that can navigate it on a person's behalf using a conversational approach over email. We accomplish this with a novel architecture that seamlessly combines automation, structured microtask workflows, and unstructured macrotasks.
{ "cite_N": [ "@cite_26" ], "mid": [ "2036256774" ], "abstract": [ "This paper reports from a field study of a hospital ward and discusses how people achieve coordination through the use of a wide range of interrelated non-digital artifacts, like whiteboards, work schedules, examination sheets, care records, post-it notes etc. These artifacts have multiple roles and functions which in combination facilitate location awareness, continuous coordination, cooperative planning and status overview. We described how actors achieve coordination by using different aspects of these artifacts: their material qualities, the structure they provide as templates and the signs inscribed upon them that are only meaningful to knowledgeable actors. We finally discuss the implication for the design of CSCW tools from the study." ] }
1703.08590
2603442972
Attributed graphs model real networks by enriching their nodes with attributes accounting for properties. Several techniques have been proposed for partitioning these graphs into clusters that are homogeneous with respect to both semantic attributes and to the structure of the graph. However, time and space complexities of state of the art algorithms limit their scalability to medium-sized graphs. We propose SToC (for Semantic-Topological Clustering), a fast and scalable algorithm for partitioning large attributed graphs. The approach is robust, being compatible both with categorical and with quantitative attributes, and it is tailorable, allowing the user to weight the semantic and topological components. Further, the approach does not require the user to guess in advance the number of clusters. SToC relies on well known approximation techniques such as bottom-k sketches, traditional graph-theoretic concepts, and a new perspective on the composition of heterogeneous distance measures. Experimental results demonstrate its ability to efficiently compute high-quality partitions of large scale attributed graphs.
Overall, state-of-the-art approaches to partitioning attributed graphs are affected by several limitations, the first of which is efficiency. Although the algorithm in @cite_36 does not provide exact bounds, our analysis assessed an @math time and space complexity, which restricts its usability to networks with thousands of nodes. The algorithm in @cite_21 aims at overcoming these performance issues, and does run faster in practice. However, as we show in , its time and space performance relies heavily on assuming a small number of clusters. Second, similarity between elements is usually defined with exact matches on categorical attributes, so that similarity among quantitative attributes is not preserved. Further, data structures are not maintainable, so that after a change in the input graph they have to be fully recomputed. Finally, most of the approaches require as input the number of clusters to be generated @cite_12 @cite_10 @cite_36 @cite_21 . In many applications it is unclear how to choose this value or how to evaluate the correctness of the choice, so the user is often forced to repeatedly launch the algorithm with tentative values.
{ "cite_N": [ "@cite_36", "@cite_21", "@cite_10", "@cite_12" ], "mid": [ "2135827982", "2044400205", "2132914434", "2161160262" ], "abstract": [ "In recent years, many networks have become available for analysis, including social networks, sensor networks, biological networks, etc. Graph clustering has shown its effectiveness in analyzing and visualizing large networks. The goal of graph clustering is to partition vertices in a large graph into clusters based on various criteria such as vertex connectivity or neighborhood similarity. Many existing graph clustering methods mainly focus on the topological structures, but largely ignore the vertex properties which are often heterogeneous. Recently, a new graph clustering algorithm, SA-Cluster, has been proposed which combines structural and attribute similarities through a unified distance measure. SA-Cluster performs matrix multiplication to calculate the random walk distances between graph vertices. As the edge weights are iteratively adjusted to balance the importance between structural and attribute similarities, matrix multiplication is repeated in each iteration of the clustering process to recalculate the random walk distances which are affected by the edge weight update. In order to improve the efficiency and scalability of SA-Cluster, in this paper, we propose an efficient algorithm Inc-Cluster to incrementally update the random walk distances given the edge weight increments. Complexity analysis is provided to estimate how much runtime cost Inc-Cluster can save. Experimental results demonstrate that Inc-Cluster achieves significant speedup over SA-Cluster on large graphs, while achieving exactly the same clustering quality in terms of intra-cluster structural cohesiveness and attribute value homogeneity.", "Graph clustering, also known as community detection, is a long-standing problem in data mining. 
In recent years, with the proliferation of rich attribute information available for objects in real-world graphs, how to leverage not only structural but also attribute information for clustering attributed graphs becomes a new challenge. Most existing works took a distance-based approach. They proposed various distance measures to fuse structural and attribute information and then applied standard techniques for graph clustering based on these distance measures. In this article, we take an alternative view and propose a novel Bayesian framework for attributed graph clustering. Our framework provides a general and principled solution to modeling both the structural and the attribute aspects of a graph. It avoids the artificial design of a distance measure in existing methods and, furthermore, can seamlessly handle graphs with different types of edges and vertex attributes. We develop an efficient variational method for graph clustering under this framework and derive two concrete algorithms for clustering unweighted and weighted attributed graphs. Experimental results on large real-world datasets show that our algorithms significantly outperform the state-of-the-art distance-based method, in terms of both effectiveness and efficiency.", "In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. On the first glance spectral clustering appears slightly mysterious, and it is not obvious to see why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. 
Advantages and disadvantages of the different spectral clustering algorithms are discussed.", "In k-means clustering, we are given a set of n data points in d-dimensional space R^d and an integer k and the problem is to determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's (1982) algorithm. We present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation." ] }
1703.08581
2950613790
We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.
Early work on speech translation (ST) @cite_21 -- translating audio in one language into text in another -- used lattices from an ASR system as inputs to translation models @cite_29 @cite_20 , giving the translation model access to the speech recognition uncertainty. Alternative approaches explicitly integrated acoustic and translation models using a stochastic finite-state transducer which can decode the translated text directly using Viterbi search @cite_24 @cite_13 .
{ "cite_N": [ "@cite_29", "@cite_21", "@cite_24", "@cite_13", "@cite_20" ], "mid": [ "2113106066", "2136530135", "2139647714", "", "1537859740" ], "abstract": [ "In speech translation, we are faced with the problem of how to couple the speech recognition process and the translation process. Starting from the Bayes decision rule for speech translation, we analyze how the interaction between the recognition process and the translation process can be modelled. In the light of this decision rule, we discuss the already existing approaches to speech translation. None of the existing approaches seems to have addressed this direct interaction. We suggest two new methods, the local averaging approximation and the monotone alignments.", "Spoken language translation (SLT) is of great relevance in our increasingly globalized world, both from a social and economic point of view. It is one of the major challenges in automatic speech recognition (ASR) and machine translation (MT), driving an intense research activity in these areas. Speech translation is useful to assist person-to-person communication in limited domains like tourism and traveling and to translate foreign parliamentary speeches and broadcast news. Speech translation is based on a suitable combination of two independent technologies, namely ASR and MT of written language. Thus, the important question is how to pass on the ASR ambiguities to the MT process. A unifying framework for this ASR-MT interface is provided by applying the Bayes decision rule to the speech translation tasks as whole rather than to each task individually. Depending on the MT approaches used, such as finite-state transducers or phrase-based modeling, various types of ASR-MT interfaces have been studied, ranging from N-best lists through word lattices to confusion networks. We have discussed experimental results on various tasks, ranging from limited to unrestricted domains. 
Despite the significant advances and the large number of experimental studies, it is still an open question what type of interface provides a suitable compromise between translation accuracy and computational cost.", "A fully integrated approach to speech input language translation in limited domain applications is presented. The mapping from the input to the output language is modeled in terms of a finite state translation model which is learned from examples of input output sentences of the task considered. This model is tightly integrated with standard acoustic phonetic models of the input language and the resulting global model directly supplies, through Viterbi search, an optimal output language sentence for each input language utterance. Several extensions to this framework, recently developed to cope with the increasing difficulty of translation tasks, are reviewed. Finally, results for a task in the framework of hotel front desk communication, with a vocabulary of about 700 words, are reported.", "", "This paper focuses on the interface between speech recognition and machine translation in a speech translation system. Based on a thorough theoretical framework, we exploit word lattices of automatic speech recognition hypotheses as input to our translation system which is based on weighted finite-state transducers. We show that acoustic recognition scores of the recognized words in the lattices positively and significantly affect the translation quality. In experiments, we have found consistent improvements on three different corpora compared with translations of single best recognized results. In addition we build and evaluate a fully integrated speech translation model." ] }
1703.08581
2950613790
We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.
In this paper we compare our integrated model to results obtained from cascaded models on a Spanish to English speech translation task @cite_26 @cite_2 @cite_34 . These approaches also use ASR lattices as MT inputs. Post et al. @cite_26 used a GMM-HMM ASR system. Kumar et al. @cite_2 later showed that using a better ASR model improved overall ST results. Subsequently, @cite_34 showed that modeling features at the boundary of the ASR and the MT systems can further improve performance. We carry this notion much further by defining an end-to-end model for the entire task.
{ "cite_N": [ "@cite_26", "@cite_34", "@cite_2" ], "mid": [ "", "2251313925", "1970987322" ], "abstract": [ "", "Speech translation is conventionally carried out by cascading an automatic speech recognition (ASR) and a statistical machine translation (SMT) system. The hypotheses chosen for translation are based on the ASR system’s acoustic and language model scores, and typically optimized for word error rate, ignoring the intended downstream use: automatic translation. In this paper, we present a coarse-to-fine model that uses features from the ASR and SMT systems to optimize this coupling. We demonstrate that several standard features utilized by ASR and SMT systems can be used in such a model at the speech-translation interface, and we provide empirical results on the Fisher Spanish-English speech translation corpus.", "We report insights from translating Spanish conversational telephone speech into English text by cascading an automatic speech recognition (ASR) system with a statistical machine translation (SMT) system. The key new insight is that the informal register of conversational speech is a greater challenge for ASR than for SMT: the BLEU score for translating the reference transcript is 64, but drops to 32 for translating automatic transcripts, whose word error rate (WER) is 40%. Several strategies are examined to mitigate the impact of ASR errors on the SMT output: (i) providing the ASR lattice, instead of the 1-best output, as input to the SMT system, (ii) training the SMT system on Spanish ASR output paired with English text, instead of Spanish reference transcripts, and (iii) improving the core ASR system. Each leads to consistent and complementary improvements in the SMT output. Compared to translating the 1-best output of an ASR system with 40% WER using an SMT system trained on Spanish reference transcripts, translating the output lattice of a better ASR system with 35% WER using an SMT system trained on ASR output improves BLEU from 32 to 38."
] }
1703.08581
2950613790
We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.
Other recent work on speech translation does not use ASR. Instead, @cite_3 used an unsupervised model to cluster repeated audio patterns, which are used to train a bag-of-words translation model. In @cite_5 , seq2seq models were used to align speech with translated text, but not to directly predict the translations. Our work is most similar to @cite_15 , which uses a LAS-like model for ST on data synthesized using a text-to-speech system. In contrast, we train on a much larger corpus composed of real speech.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_3" ], "mid": [ "2466918907", "2949328740", "2952465655" ], "abstract": [ "", "This paper proposes a first attempt to build an end-to-end speech-to-text translation system, which does not use source language transcription during learning or decoding. We propose a model for direct speech-to-text translation, which gives promising results on a small French-English synthetic corpus. Relaxing the need for source language transcription would drastically change the data collection methodology in speech translation, especially in under-resourced scenarios. For instance, in the former project DARPA TRANSTAC (speech translation from spoken Arabic dialects), a large effort was devoted to the collection of speech transcripts (and a prerequisite to obtain transcripts was often a detailed transcription guide for languages with little standardized spelling). Now, if end-to-end approaches for speech-to-text translation are successful, one might consider collecting data by asking bilingual speakers to directly utter speech in the source language from target language text utterances. Such an approach has the advantage to be applicable to any unwritten (source) language.", "We explore the problem of translating speech to text in low-resource scenarios where neither automatic speech recognition (ASR) nor machine translation (MT) are available, but we have training data in the form of audio paired with text translations. We present the first system for this problem applied to a realistic multi-speaker dataset, the CALLHOME Spanish-English speech translation corpus. Our approach uses unsupervised term discovery (UTD) to cluster repeated patterns in the audio, creating a pseudotext, which we pair with translations to create a parallel text and train a simple bag-of-words MT model. 
We identify the challenges faced by the system, finding that the difficulty of cross-speaker UTD results in low recall, but that our system is still able to correctly translate some content words in test data." ] }
1703.08359
2950509861
Most existing person re-identification algorithms either extract robust visual features or learn discriminative metrics for person images. However, the underlying manifold on which those images reside is rarely investigated. This raises the problem that the learned metric is not smooth with respect to the local geometry structure of the data manifold. In this paper, we study person re-identification with manifold-based affinity learning, which has not received enough attention in this area. An unconventional manifold-preserving algorithm is proposed, which can 1) make the best use of supervision from training data, whose label information is given as pairwise constraints; 2) scale up to large repositories with low on-line time complexity; and 3) be plugged into most existing algorithms, serving as a generic postprocessing procedure to further boost the identification accuracies. Extensive experimental results on five popular person re-identification benchmarks consistently demonstrate the effectiveness of our method. In particular, on the largest CUHK03 and Market-1501, our method outperforms the state-of-the-art alternatives by a large margin with high efficiency, which is more appropriate for practical applications.
The manifold structure has been observed by several works. Motivated by the fact that pedestrian data are distributed on a highly curved manifold, a sampling strategy for training neural networks called Moderate Positive Mining (MPM) is proposed in @cite_39 . However, since the data distribution is hard to define, MPM does not aim at estimating the geodesic distances along the manifold. From this point of view, SSM explicitly learns the geodesic distances between instances, which can be directly used for re-identification.
{ "cite_N": [ "@cite_39" ], "mid": [ "2519373641" ], "abstract": [ "Person re-identification is challenging due to the large variations of pose, illumination, occlusion and camera view. Owing to these variations, the pedestrian data is distributed as highly-curved manifolds in the feature space, despite the current convolutional neural networks (CNN)’s capability of feature extraction. However, the distribution is unknown, so it is difficult to use the geodesic distance when comparing two samples. In practice, the current deep embedding methods use the Euclidean distance for the training and test. On the other hand, the manifold learning methods suggest to use the Euclidean distance in the local range, combining with the graphical relationship between samples, for approximating the geodesic distance. From this point of view, selecting suitable positive (i.e. intra-class) training samples within a local range is critical for training the CNN embedding, especially when the data has large intra-class variations. In this paper, we propose a novel moderate positive sample mining method to train robust CNN for person re-identification, dealing with the problem of large variation. In addition, we improve the learning by a metric weight constraint, so that the learned metric has a better generalization ability. Experiments show that these two strategies are effective in learning robust deep metrics for person re-identification, and accordingly our deep model significantly outperforms the state-of-the-art methods on several benchmarks of person re-identification. Therefore, the study presented in this paper may be useful in inspiring new designs of deep models for person re-identification." ] }
1703.08359
2950509861
Most existing person re-identification algorithms either extract robust visual features or learn discriminative metrics for person images. However, the underlying manifold which those images reside on is rarely investigated. That raises a problem that the learned metric is not smooth with respect to the local geometry structure of the data manifold. In this paper, we study person re-identification with manifold-based affinity learning, which did not receive enough attention from this area. An unconventional manifold-preserving algorithm is proposed, which can 1) make the best use of supervision from training data, whose label information is given as pairwise constraints; 2) scale up to large repositories with low on-line time complexity; and 3) be plunged into most existing algorithms, serving as a generic postprocessing procedure to further boost the identification accuracies. Extensive experimental results on five popular person re-identification benchmarks consistently demonstrate the effectiveness of our method. Especially, on the largest CUHK03 and Market-1501, our method outperforms the state-of-the-art alternatives by a large margin with high efficiency, which is more appropriate for practical applications.
At first glance, affinity learning in our work appears to be the same as similarity learning ( , PolyMap @cite_57 ). Unlike similarity learning on polynomial feature maps @cite_19 , which connects to the Mahalanobis distance metric and bilinear similarity, affinity learning in SSM does not rely on the definition of a metric (a non-metric can also be used). Therefore, they are inherently different. Finally, it is acknowledged that metric learning methods ( , KISSME @cite_6 , XQDA @cite_63 ) are also relevant, but they take effect prior to SSM in a person re-identification system, as Fig. shows.
{ "cite_N": [ "@cite_57", "@cite_19", "@cite_63", "@cite_6" ], "mid": [ "1927348918", "2475284720", "1949591461", "2068042582" ], "abstract": [ "In this paper, we address the person re-identification problem, discovering the correct matches for a probe person image from a set of gallery person images. We follow the learning-to-rank methodology and learn a similarity function to maximize the difference between the similarity scores of matched and unmatched images for a same person. We introduce at least three contributions to person re-identification. First, we present an explicit polynomial kernel feature map, which is capable of characterizing the similarity information of all pairs of patches between two images, called soft-patch-matching, instead of greedily keeping only the best matched patch, and thus more robust. Second, we introduce a mixture of linear similarity functions that is able to discover different soft-patch-matching patterns. Last, we introduce a negative semi-definite regularization over a subset of the weights in the similarity function, which is motivated by the connection between explicit polynomial kernel feature map and the Mahalanobis distance, as well as the sparsity constraint over the parameters to avoid over-fitting. Experimental results over three public benchmarks demonstrate the superiority of our approach.", "Pose variation remains one of the major factors that adversely affect the accuracy of person re-identification. Such variation is not arbitrary as body parts (e.g. head, torso, legs) have relative stable spatial distribution. Breaking down the variability of global appearance regarding the spatial distribution potentially benefits the person matching. We therefore learn a novel similarity function, which consists of multiple sub-similarity measurements with each taking in charge of a subregion. 
In particular, we take advantage of the recently proposed polynomial feature map to describe the matching within each subregion, and inject all the feature maps into a unified framework. The framework not only outputs similarity measurements for different regions, but also makes a better consistency among them. Our framework can collaborate local similarities as well as global similarity to exploit their complementary strength. It is flexible to incorporate multiple visual cues to further elevate the performance. In experiments, we analyze the effectiveness of the major components. The results on four datasets show significant and consistent improvements over the state-of-the-art methods.", "Person re-identification is an important technique towards automatic search of a person's presence in a surveillance video. Two fundamental problems are critical for person re-identification, feature representation and metric learning. An effective feature representation should be robust to illumination and viewpoint changes, and a discriminant metric should be learned to match various person images. In this paper, we propose an effective feature representation called Local Maximal Occurrence (LOMO), and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA). The LOMO feature analyzes the horizontal occurrence of local features, and maximizes the occurrence to make a stable representation against viewpoint changes. Besides, to handle illumination variations, we apply the Retinex transform and a scale invariant texture operator. To learn a discriminant metric, we propose to learn a discriminant low dimensional subspace by cross-view quadratic discriminant analysis, and simultaneously, a QDA metric is learned on the derived subspace. We also present a practical computation method for XQDA, as well as its regularization. 
Experiments on four challenging person re-identification databases, VIPeR, QMUL GRID, CUHK Campus, and CUHK03, show that the proposed method improves the state-of-the-art rank-1 identification rates by 2.2%, 4.88%, 28.91%, and 31.55% on the four databases, respectively.", "In this paper, we raise important issues on scalability and the required degree of supervision of existing Mahalanobis metric learning methods. Often rather tedious optimization procedures are applied that become computationally intractable on a large scale. Further, if one considers the constantly growing amount of data it is often infeasible to specify fully supervised labels for all data points. Instead, it is easier to specify labels in form of equivalence constraints. We introduce a simple though effective strategy to learn a distance metric from equivalence constraints, based on a statistical inference perspective. In contrast to existing methods we do not rely on complex optimization problems requiring computationally expensive iterations. Hence, our method is orders of magnitudes faster than comparable methods. Results on a variety of challenging benchmarks with rather diverse nature demonstrate the power of our method. These include faces in unconstrained environments, matching before unseen object instances and person re-identification across spatially disjoint cameras. In the latter two benchmarks we clearly outperform the state-of-the-art." ] }
1703.08338
2604818337
This work deviates from easy-to-define class boundaries for object interactions. For the task of object interaction recognition, often captured using an egocentric view, we show that semantic ambiguities in verbs and recognising sub-interactions along with concurrent interactions result in legitimate class overlaps (Figure 1). We thus aim to model the mapping between observations and interaction classes, as well as class overlaps, towards a probabilistic multi-label classifier that emulates human annotators. Given a video segment containing an object interaction, we model the probability for a verb, out of a list of possible verbs, to be used to annotate that interaction. The probability is learnt from crowdsourced annotations, and is tested on two public datasets, comprising 1405 video sequences for which we provide annotations on 90 verbs. We outperform conventional single-label classification by 11% and 6% on the two datasets respectively, and show that learning from annotation probabilities outperforms majority voting and enables discovery of co-occurring labels.
Action Recognition has largely focussed on a single-label classification approach. Hand-crafted features dominated most seminal action recognition works, ranging from those that used spatio-temporal interest points @cite_5 @cite_28 @cite_38 @cite_4 with a bag-of-words representation to trajectory-based methods @cite_17 @cite_24 encoded using Fisher Vectors @cite_20 . Features were typically classified using one-vs-all SVMs. Within the egocentric domain, other features such as gaze @cite_18 , hand @cite_32 @cite_45 or object-specific features @cite_15 @cite_16 @cite_31 @cite_48 @cite_12 were also incorporated.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_31", "@cite_4", "@cite_28", "@cite_48", "@cite_32", "@cite_24", "@cite_45", "@cite_5", "@cite_15", "@cite_16", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "", "2212494831", "", "", "", "2033639255", "2293543285", "", "", "2108333036", "1967686239", "", "", "1606858007", "2105101328" ], "abstract": [ "", "We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze.", "", "", "", "Identifying handled objects, i.e. objects being manipulated by a user, is essential for recognizing the person's activities. An egocentric camera as worn on the body enjoys many advantages such as having a natural first-person view and not needing to instrument the environment. It is also a challenging setting, where background clutter is known to be a major source of problems and is difficult to handle with the camera constantly and arbitrarily moving. 
In this work we develop a bottom-up motion-based approach to robustly segment out foreground objects in egocentric video and show that it greatly improves object recognition accuracy. Our key insight is that egocentric video of object manipulation is a special domain and many domain-specific cues can readily help. We compute dense optical flow and fit it into multiple affine layers. We then use a max-margin classifier to combine motion with empirical knowledge of object location and background movement as well as temporal cues of support region and color appearance. We evaluate our segmentation algorithm on the large Intel Egocentric Object Recognition dataset with 42 objects and 100K frames. We show that, when combined with temporal integration, figure-ground segmentation improves the accuracy of a SIFT-based recognition system from 33 to 60 , and that of a latent-HOG system from 64 to 86 .", "Wearable computing technologies are advancing rapidly and enabling users to easily record daily activities for applications such as life-logging or health monitoring. Recognizing hand and object interactions in these videos will help broaden application domains, but recognizing such interactions automatically remains a difficult task. Activity recognition from the first-person point-of-view is difficult because the video includes constant motion, cluttered backgrounds, and sudden changes of scenery. Recognizing hand-related activities is particularly challenging due to the many temporal and spatial variations induced by hand interactions. We present a novel approach to recognize hand-object interactions by extracting both local motion features representing the subtle movements of the hands and global hand shape features to capture grasp types. 
We validate our approach on multiple egocentric action datasets and show that state-of-the-art performance can be achieved by considering both local motion and global appearance information.", "", "", "In this paper we introduce a 3-dimensional (3D) SIFT descriptor for video or 3D imagery such as MRI data. We also show how this new descriptor is able to better represent the 3D nature of video data in the application of action recognition. This paper will show how 3D SIFT is able to outperform previously used description methods in an elegant and efficient manner. We use a bag of words approach to represent videos, and present a method to discover relationships between spatio-temporal words in order to better describe the video data.", "In this paper we present a model of action based on the change in the state of the environment. Many actions involve similar dynamics and hand-object relationships, but differ in their purpose and meaning. The key to differentiating these actions is the ability to identify how they change the state of objects and materials in the environment. We propose a weakly supervised method for learning the object and material states that are necessary for recognizing daily actions. Once these state detectors are learned, we can apply them to input videos and pool their outputs to detect actions. We further demonstrate that our method can be used to segment discrete actions from a continuous video of an activity. Our results outperform state-of-the-art action recognition and activity segmentation results.", "", "", "The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. 
In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9 to 58.3 . Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets.", "Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art." ] }
1703.08617
2950973664
Modeling the long-term facial aging process is extremely challenging due to the presence of large and non-linear variations during the face development stages. In order to efficiently address the problem, this work first decomposes the aging process into multiple short-term stages. Then, a novel generative probabilistic model, named Temporal Non-Volume Preserving (TNVP) transformation, is presented to model the facial aging process at each stage. Unlike Generative Adversarial Networks (GANs), which requires an empirical balance threshold, and Restricted Boltzmann Machines (RBM), an intractable model, our proposed TNVP approach guarantees a tractable density function, exact inference and evaluation for embedding the feature transformations between faces in consecutive stages. Our model shows its advantages not only in capturing the non-linear age related variance in each stage but also producing a smooth synthesis in age progression across faces. Our approach can model any face in the wild provided with only four basic landmark points. Moreover, the structure can be transformed into a deep convolutional network while keeping the advantages of probabilistic models with tractable log-likelihood density estimation. Our method is evaluated in both terms of synthesizing age-progressed faces and cross-age face verification and consistently shows the state-of-the-art results in various face aging databases, i.e. FG-NET, MORPH, AginG Faces in the Wild (AGFW), and Cross-Age Celebrity Dataset (CACD). A large-scale face verification on Megaface challenge 1 is also performed to further show the advantages of our proposed approach.
Prototype-based approaches use age prototypes to synthesize new face images. The average faces of people in the same age group are used as the prototypes @cite_14 . An input image can be transformed into an age-progressed face by adding the differences between the prototypes of two age groups @cite_12 . Recently, Kemelmacher-Shlizerman @cite_1 proposed to construct sharper average prototype faces from a large-scale set of images, combined with subspace alignment and illumination normalization.
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_12" ], "mid": [ "2136074653", "", "2085481337" ], "abstract": [ "A technique for defining facial prototypes is described which supports transformations along quantifiable dimensions in \"face space\". Examples illustrate the use of shape and color information to perform predictive gender and age transformations. The processes we describe begin with the creation of a facial prototype. Generally, a prototype can be defined as a representation containing the consistent attributes across a class of objects. Once we obtain a class prototype, we can take an exemplar that has some information missing and augment it with the prototypical information. In effect, this \"adds in\" the average values for the missing information. We use this notion to transform gray-scale images into full color by including the color information from a relevant prototype. It is also possible to deduce the difference between two groups within a class. Separate prototypes can be formed for each group. These can be used subsequently to define a transformation that will map instances from one group onto the domain of the other. This paper details the procedure we use to transform facial images and shows how it can be used to alter perceived facial attributes. >", "", "This study investigated visual cues to age by using facial composites which blend shape and colour information from multiple faces. Baseline measurements showed that perceived age of adult male faces is on average an accurate index of their chronological age over the age range 20-60 years. Composite images were made from multiple images of different faces by averaging face shape and then blending red, green and blue intensity (RGB colour) across comparable pixels. The perceived age of these composite or blended images depended on the age bracket of the component faces. 
Blended faces were, however, rated younger than their component faces, a trend that became more marked with increased component age. The techniques used provide an empirical definition of facial changes with age that are biologically consistent across a sample population. The perceived age of a blend of old faces was increased by exaggerating the RGB colour differences of each pixel relative to a blend of young faces. This effect on perceived age was not attributable to enhanced contrast or colour saturation. Age-related visual cues defined from the differences between blends of young and old faces were applied to individual faces. These transformations increased perceived age." ] }
1703.08617
2950973664
Modeling the long-term facial aging process is extremely challenging due to the presence of large and non-linear variations during the face development stages. In order to efficiently address the problem, this work first decomposes the aging process into multiple short-term stages. Then, a novel generative probabilistic model, named Temporal Non-Volume Preserving (TNVP) transformation, is presented to model the facial aging process at each stage. Unlike Generative Adversarial Networks (GANs), which requires an empirical balance threshold, and Restricted Boltzmann Machines (RBM), an intractable model, our proposed TNVP approach guarantees a tractable density function, exact inference and evaluation for embedding the feature transformations between faces in consecutive stages. Our model shows its advantages not only in capturing the non-linear age related variance in each stage but also producing a smooth synthesis in age progression across faces. Our approach can model any face in the wild provided with only four basic landmark points. Moreover, the structure can be transformed into a deep convolutional network while keeping the advantages of probabilistic models with tractable log-likelihood density estimation. Our method is evaluated in both terms of synthesizing age-progressed faces and cross-age face verification and consistently shows the state-of-the-art results in various face aging databases, i.e. FG-NET, MORPH, AginG Faces in the Wild (AGFW), and Cross-Age Celebrity Dataset (CACD). A large-scale face verification on Megaface challenge 1 is also performed to further show the advantages of our proposed approach.
Reconstruction-based approaches reconstruct the aging face from a combination of aging bases in each group. @cite_6 proposed to build aging coupled dictionaries (CDL) to represent personalized aging patterns while preserving personalized facial features. @cite_10 proposed to model person-specific and age-specific factors separately via sparse representation hidden factor analysis (HFA).
{ "cite_N": [ "@cite_10", "@cite_6" ], "mid": [ "2227430255", "2950690021" ], "abstract": [ "Face aging simulation has received rising investigations nowadays, whereas it still remains a challenge to generate convincing and natural age-progressed face images. In this paper, we present a novel approach to such an issue using hidden factor analysis joint sparse representation. In contrast to the majority of tasks in the literature that integrally handle the facial texture, the proposed aging approach separately models the person-specific facial properties that tend to be stable in a relatively long period and the age-specific clues that gradually change over time. It then transforms the age component to a target age group via sparse reconstruction, yielding aging effects, which is finally combined with the identity component to achieve the aged face. Experiments are carried out on three face aging databases, and the results achieved clearly demonstrate the effectiveness and robustness of the proposed method in rendering a face with aging effects. In addition, a series of evaluations prove its validity with respect to identity preservation and aging effect generation.", "In this paper, we aim to automatically render aging faces in a personalized way. Basically, a set of age-group specific dictionaries are learned, where the dictionary bases corresponding to the same index yet from different dictionaries form a particular aging process pattern cross different age groups, and a linear combination of these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each subject may have extra personalized facial characteristics, e.g. mole, which are invariant in the aging process. 
Second, it is challenging or even impossible to collect faces of all age groups for a particular subject, yet much easier and more practical to get face pairs from neighboring age groups. Thus a personality-aware coupled reconstruction loss is utilized to learn the dictionaries based on face pairs from neighboring age groups. Extensive experiments well demonstrate the advantages of our proposed solution over other state-of-the-arts in term of personalized aging progression, as well as the performance gain for cross-age face verification by synthesizing aging faces." ] }
1703.08617
2950973664
Modeling the long-term facial aging process is extremely challenging due to the presence of large and non-linear variations during the face development stages. In order to efficiently address the problem, this work first decomposes the aging process into multiple short-term stages. Then, a novel generative probabilistic model, named Temporal Non-Volume Preserving (TNVP) transformation, is presented to model the facial aging process at each stage. Unlike Generative Adversarial Networks (GANs), which requires an empirical balance threshold, and Restricted Boltzmann Machines (RBM), an intractable model, our proposed TNVP approach guarantees a tractable density function, exact inference and evaluation for embedding the feature transformations between faces in consecutive stages. Our model shows its advantages not only in capturing the non-linear age related variance in each stage but also producing a smooth synthesis in age progression across faces. Our approach can model any face in the wild provided with only four basic landmark points. Moreover, the structure can be transformed into a deep convolutional network while keeping the advantages of probabilistic models with tractable log-likelihood density estimation. Our method is evaluated in both terms of synthesizing age-progressed faces and cross-age face verification and consistently shows the state-of-the-art results in various face aging databases, i.e. FG-NET, MORPH, AginG Faces in the Wild (AGFW), and Cross-Age Celebrity Dataset (CACD). A large-scale face verification on Megaface challenge 1 is also performed to further show the advantages of our proposed approach.
Recently, deep learning-based approaches are being developed to exploit the power of deep learning methods. @cite_22 employed Temporal Restricted Boltzmann Machines (TRBM) to model the non-linear aging process with geometry constraints, and spatial DBMs to model a sequence of reference faces and wrinkles of adult faces. Similarly, @cite_9 modeled aging sequences using a recurrent neural network with a two-layer gated recurrent unit (GRU). Conditional Generative Adversarial Networks (cGANs) have also been applied to synthesize aged images in @cite_4 .
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_22" ], "mid": [ "2473439532", "2951961735", "2951955952" ], "abstract": [ "Modeling the aging process of human face is important for cross-age face verification and recognition. In this paper, we introduce a recurrent face aging (RFA) framework based on a recurrent neural network which can identify the ages of people from 0 to 80. Due to the lack of labeled face data of the same person captured in a long range of ages, traditional face aging models usually split the ages into discrete groups and learn a one-step face feature transformation for each pair of adjacent age groups. However, those methods neglect the in-between evolving states between the adjacent age groups and the synthesized faces often suffer from severe ghosting artifacts. Since human face aging is a smooth progression, it is more appropriate to age the face by going through smooth transition states. In this way, the ghosting artifacts can be effectively eliminated and the intermediate aged faces between two discrete age groups can also be obtained. Towards this target, we employ a twolayer gated recurrent unit as the basic recurrent module whose bottom layer encodes a young face to a latent representation and the top layer decodes the representation to a corresponding older face. The experimental results demonstrate our proposed RFA provides better aging faces over other state-of-the-art age progression methods.", "It has been recently shown that Generative Adversarial Networks (GANs) can produce synthetic images of exceptional visual fidelity. In this work, we propose the GAN-based method for automatic face aging. Contrary to previous works employing GANs for altering of facial attributes, we make a particular emphasize on preserving the original person's identity in the aged version of his her face. To this end, we introduce a novel approach for \"Identity-Preserving\" optimization of GAN's latent vectors. 
The objective evaluation of the resulting aged and rejuvenated face images by the state-of-the-art face recognition and age estimation solutions demonstrate the high potential of the proposed method.", "Modeling the face aging process is a challenging task due to large and non-linear variations present in different stages of face development. This paper presents a deep model approach for face age progression that can efficiently capture the non-linear aging process and automatically synthesize a series of age-progressed faces in various age ranges. In this approach, we first decompose the long-term age progress into a sequence of short-term changes and model it as a face sequence. The Temporal Deep Restricted Boltzmann Machines based age progression model together with the prototype faces are then constructed to learn the aging transformation between faces in the sequence. In addition, to enhance the wrinkles of faces in the later age ranges, the wrinkle models are further constructed using Restricted Boltzmann Machines to capture their variations in different facial regions. The geometry constraints are also taken into account in the last step for more consistent age-progressed results. The proposed approach is evaluated using various face aging databases, i.e. FG-NET, Cross-Age Celebrity Dataset (CACD) and MORPH, and our collected large-scale aging database named AginG Faces in the Wild (AGFW). In addition, when ground-truth age is not available for input image, our proposed system is able to automatically estimate the age of the input face before aging process is employed." ] }
1703.08448
2950157285
We investigate a principle way to progressively mine discriminative object regions using classification networks to address the weakly-supervised semantic segmentation problems. Classification networks are only responsive to small and sparse discriminative regions from the object of interest, which deviates from the requirement of the segmentation task that needs to localize dense, interior and integral regions for pixel-wise inference. To mitigate this gap, we propose a new adversarial erasing approach for localizing and expanding object regions progressively. Starting with a single small object region, our proposed approach drives the classification network to sequentially discover new and complement object regions by erasing the current mined regions in an adversarial manner. These localized regions eventually constitute a dense and complete object region for learning semantic segmentation. To further enhance the quality of the discovered regions by adversarial erasing, an online prohibitive segmentation learning approach is developed to collaborate with adversarial erasing by providing auxiliary segmentation supervision modulated by the more reliable classification scores. Despite its apparent simplicity, the proposed approach achieves 55.0 and 55.7 mean Intersection-over-Union (mIoU) scores on PASCAL VOC 2012 val and test sets, which are the new state-of-the-arts.
To reduce the burden of pixel-level annotation, various weakly-supervised methods have been proposed for learning to perform semantic segmentation with coarser annotations. For example, Papandreou et al. @cite_30 and Dai et al. @cite_23 proposed to estimate segmentation using annotated bounding boxes. More recently, Lin et al. @cite_29 employed scribbles as supervision for semantic segmentation. In @cite_16 , the required supervision is further relaxed to instance points. All these annotations can be considered much simpler than pixel-level annotation.
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_16", "@cite_23" ], "mid": [ "1529410181", "2337429362", "", "1495267108" ], "abstract": [ "Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at this https URL", "Large-scale data is of crucial importance for learning semantic segmentation models, but annotating per-pixel masks is a tedious and inefficient procedure. We note that for the topic of interactive image segmentation, scribbles are very widely used in academic research and commercial software, and are recognized as one of the most userfriendly ways of interacting. In this paper, we propose to use scribbles to annotate images, and develop an algorithm to train convolutional networks for semantic segmentation supervised by scribbles. Our algorithm is based on a graphical model that jointly propagates information from scribbles to unmarked pixels and learns network parameters. We present competitive object semantic segmentation results on the PASCAL VOC dataset by using scribbles as annotations. 
Scribbles are also favored for annotating stuff (e.g., water, sky, grass) that has no well-defined shape, and our method shows excellent results on the PASCALCONTEXT dataset thanks to extra inexpensive scribble annotations. Our scribble annotations on PASCAL VOC are available at http: research.microsoft.com en-us um people jifdai downloads scribble_sup.", "", "Recent leading approaches to semantic segmentation rely on deep convolutional networks trained with human-annotated, pixel-level segmentation masks. Such pixel-accurate supervision demands expensive labeling effort and limits the performance of deep networks that usually benefit from more training data. In this paper, we propose a method that achieves competitive accuracy but only requires easily obtained bounding box annotations. The basic idea is to iterate between automatically generating region proposals and training convolutional networks. These two steps gradually recover segmentation masks for improving the networks, and vise versa. Our method, called \"BoxSup\", produces competitive results (e.g., 62.0 mAP for validation) supervised by boxes only, on par with strong baselines (e.g., 63.8 mAP) fully supervised by masks under the same setting. By leveraging a large amount of bounding boxes, BoxSup further yields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT [26]." ] }
1703.08448
2950157285
We investigate a principle way to progressively mine discriminative object regions using classification networks to address the weakly-supervised semantic segmentation problems. Classification networks are only responsive to small and sparse discriminative regions from the object of interest, which deviates from the requirement of the segmentation task that needs to localize dense, interior and integral regions for pixel-wise inference. To mitigate this gap, we propose a new adversarial erasing approach for localizing and expanding object regions progressively. Starting with a single small object region, our proposed approach drives the classification network to sequentially discover new and complement object regions by erasing the current mined regions in an adversarial manner. These localized regions eventually constitute a dense and complete object region for learning semantic segmentation. To further enhance the quality of the discovered regions by adversarial erasing, an online prohibitive segmentation learning approach is developed to collaborate with adversarial erasing by providing auxiliary segmentation supervision modulated by the more reliable classification scores. Despite its apparent simplicity, the proposed approach achieves 55.0 and 55.7 mean Intersection-over-Union (mIoU) scores on PASCAL VOC 2012 val and test sets, which are the new state-of-the-arts.
Beyond mining foreground object regions, finding background localization cues is also crucial for training the segmentation network. Motivated by @cite_21 @cite_13 , we use the saliency detection technique of @cite_14 to produce the saliency maps of training images. Based on the generated saliency maps, the regions whose pixels have low saliency values are selected as background. Suppose @math denotes the selected background regions of @math . We can obtain the segmentation masks @math , where @math . We ignore three kinds of pixels when producing @math : 1) erased foreground regions of different categories that are in conflict; 2) low-saliency pixels that lie within the object regions identified by AE; 3) pixels that are not assigned semantic labels. One example of the segmentation mask generation process is demonstrated in Figure 3 (a); the "black" and "purple" regions refer to the background and the object, respectively.
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_13" ], "mid": [ "2161185676", "2133515615", "2951358285" ], "abstract": [ "Salient object detection has been attracting a lot of interest, and recently various heuristic computational models have been designed. In this paper, we regard saliency map computation as a regression problem. Our method, which is based on multi-level image segmentation, uses the supervised learning approach to map the regional feature vector to a saliency score, and finally fuses the saliency scores across multiple levels, yielding the saliency map. The contributions lie in two-fold. One is that we show our approach, which integrates the regional contrast, regional property and regional background ness descriptors together to form the master saliency map, is able to produce superior saliency maps to existing algorithms most of which combine saliency maps heuristically computed from different types of features. The other is that we introduce a new regional feature vector, background ness, to characterize the background, which can be regarded as a counterpart of the objectness descriptor [2]. The performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-arts.", "Recently, significant improvement has been made on semantic object segmentation due to the development of deep convolutional neural networks (DCNNs). Training such a DCNN usually relies on a large number of images with pixel-level segmentation masks, and annotating these images is very costly in terms of both finance and human effort. In this paper, we propose a simple to complex (STC) framework in which only image-level annotations are utilized to learn DCNNs for semantic segmentation. Specifically, we first train an initial segmentation network called Initial-DCNN with the saliency maps of simple images (i.e., those with a single category of major object(s) and clean background). 
These saliency maps can be automatically obtained by existing bottom-up salient object detection techniques, where no supervision information is needed. Then, a better network called Enhanced-DCNN is learned with supervision from the predicted segmentation masks of simple images based on the Initial-DCNN as well as the image-level annotations. Finally, more pixel-level segmentation masks of complex images (two or more categories of objects with cluttered background), which are inferred by using Enhanced-DCNN and image-level annotations, are utilized as the supervision information to learn the Powerful-DCNN for semantic segmentation. Our method utilizes 40K simple images from Flickr.com and 10K complex images from PASCAL VOC for step-wisely boosting the segmentation network. Extensive experimental results on PASCAL VOC 2012 segmentation benchmark well demonstrate the superiority of the proposed STC framework compared with other state-of-the-arts.", "We introduce a new loss function for the weakly-supervised training of semantic image segmentation models based on three guiding principles: to seed with weak localization cues, to expand objects based on the information about which classes can occur in an image, and to constrain the segmentations to coincide with object boundaries. We show experimentally that training a deep convolutional neural network using the proposed loss function leads to substantially better segmentations than previous state-of-the-art methods on the challenging PASCAL VOC 2012 dataset. We furthermore give insight into the working mechanism of our method by a detailed experimental study that illustrates how the segmentation quality is affected by each term of the proposed loss function as well as their combinations." ] }
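The mask-generation rule summarized in the related-work passage of this record (low-saliency pixels become background, AE-mined regions keep their class, and conflicting or unlabeled pixels are ignored) can be sketched as follows. This is a minimal illustration, not the paper's code; the threshold, the ignore label, and all names are assumptions:

```python
import numpy as np

IGNORE = 255  # conventional "ignore" label for segmentation losses (assumed)

def make_segmentation_mask(saliency, mined, bg_thresh=0.1):
    """Combine a saliency map with AE-mined object regions into a training mask.

    saliency : (H, W) float array in [0, 1] from an off-the-shelf detector.
    mined    : (H, W) int array; class id where AE mined a region, -1 elsewhere
               (conflicting multi-class erasures should already be set to -1).
    """
    mask = np.full(saliency.shape, IGNORE, dtype=np.int32)
    mask[saliency < bg_thresh] = 0                # low saliency -> background
    fg = mined >= 0
    mask[fg] = mined[fg]                          # mined regions keep their class
    mask[fg & (saliency < bg_thresh)] = IGNORE    # low-saliency pixels inside objects
    return mask                                   # everything else stays IGNORE
```

The three "ignored" cases of the text all end up with the `IGNORE` label, so they contribute nothing to the auxiliary segmentation loss.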
1703.08050
2949336803
By stacking layers of convolution and nonlinearity, convolutional networks (ConvNets) effectively learn from low-level to high-level features and discriminative representations. Since the end goal of large-scale recognition is to delineate complex boundaries of thousands of classes, adequate exploration of feature distributions is important for realizing full potentials of ConvNets. However, state-of-the-art works concentrate only on deeper or wider architecture design, while rarely exploring feature statistics higher than first-order. We take a step towards addressing this problem. Our method consists in covariance pooling, instead of the most commonly used first-order pooling, of high-level convolutional features. The main challenges involved are robust covariance estimation given a small sample of large-dimensional features and usage of the manifold structure of covariance matrices. To address these challenges, we present a Matrix Power Normalized Covariance (MPN-COV) method. We develop forward and backward propagation formulas regarding the nonlinear matrix functions such that MPN-COV can be trained end-to-end. In addition, we analyze both qualitatively and quantitatively its advantage over the well-known Log-Euclidean metric. On the ImageNet 2012 validation set, by combining MPN-COV we achieve over 4 , 3 and 2.5 gains for AlexNet, VGG-M and VGG-16, respectively; integration of MPN-COV into 50-layer ResNet outperforms ResNet-101 and is comparable to ResNet-152. The source code will be available on the project page: this http URL
In image classification, the second-order pooling known as O @math P was proposed in @cite_11 . O @math P computes non-central second-order moments, which are subject to the matrix logarithm, for representing free-form regions. In the context of classical image classification, @cite_15 propose second- and third-order pooling of hand-crafted features or their coding vectors. To counteract correlated burstiness due to non-i.i.d. data, they apply power normalization of eigenvalues (ePN) to autocorrelation matrices or to the core tensors @cite_33 of the autocorrelation tensors. In @cite_6 , the Higher-order Kernel (HoK) descriptor is proposed for action recognition in videos. HoK pools higher-order tensors of probability scores from pretrained ConvNets over video frames, which are subject to ePN and then fed to SVM classifiers. Our main differences from @cite_15 @cite_6 are that (1) we develop an end-to-end MPN-COV method in a deep ConvNet architecture, and verify that statistics higher than first-order are helpful for large-scale recognition; and (2) we provide statistical, geometric and computational interpretations, explaining the mechanism underlying matrix power normalization.
{ "cite_N": [ "@cite_15", "@cite_33", "@cite_6", "@cite_11" ], "mid": [ "2324076434", "2013912476", "2950626649", "78159342" ], "abstract": [ "In object recognition, the Bag-of-Words model assumes: i) extraction of local descriptors from images, ii) embedding the descriptors by a coder to a given visual vocabulary space which results in mid-level features, iii) extracting statistics from mid-level features with a pooling operator that aggregates occurrences of visual words in images into signatures, which we refer to as First-order Occurrence Pooling. This paper investigates higher-order pooling that aggregates over co-occurrences of visual words. We derive Bag-of-Words with Higher-order Occurrence Pooling based on linearisation of Minor Polynomial Kernel, and extend this model to work with various pooling operators. This approach is then effectively used for fusion of various descriptor types. Moreover, we introduce Higher-order Occurrence Pooling performed directly on local image descriptors as well as a novel pooling operator that reduces the correlation in the image signatures. Finally, First-, Second-, and Third-order Occurrence Pooling are evaluated given various coders and pooling operators on several widely used benchmarks. The proposed methods are compared to other approaches such as Fisher Vector Encoding and demonstrate improved results.", "We discuss a multilinear generalization of the singular value decomposition. There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition; uniqueness, link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed. 
We investigate how tensor symmetries affect the decomposition and propose a multilinear generalization of the symmetric eigenvalue decomposition for pair-wise symmetric tensors.", "Most successful deep learning algorithms for action recognition extend models designed for image-based tasks such as object recognition to video. Such extensions are typically trained for actions on single video frames or very short clips, and then their predictions from sliding-windows over the video sequence are pooled for recognizing the action at the sequence level. Usually this pooling step uses the first-order statistics of frame-level action predictions. In this paper, we explore the advantages of using higher-order correlations; specifically, we introduce Higher-order Kernel (HOK) descriptors generated from the late fusion of CNN classifier scores from all the frames in a sequence. To generate these descriptors, we use the idea of kernel linearization. Specifically, a similarity kernel matrix, which captures the temporal evolution of deep classifier scores, is first linearized into kernel feature maps. The HOK descriptors are then generated from the higher-order co-occurrences of these feature maps, and are then used as input to a video-level classifier. We provide experiments on two fine-grained action recognition datasets and show that our scheme leads to state-of-the-art results.", "Feature extraction, coding and pooling, are important components on many contemporary object recognition paradigms. In this paper we explore novel pooling techniques that encode the second-order statistics of local descriptors inside a region. To achieve this effect, we introduce multiplicative second-order analogues of average and max-pooling that together with appropriate non-linearities lead to state-of-the-art performance on free-form region recognition, without any type of feature coding. 
Instead of coding, we found that enriching local descriptors with additional image information leads to large performance gains, especially in conjunction with the proposed pooling methodology. We show that second-order pooling over free-form regions produces results superior to those of the winning systems in the Pascal VOC 2011 semantic segmentation challenge, with models that are 20,000 times faster." ] }
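The MPN-COV operation discussed in this record (covariance pooling of high-level convolutional features followed by a matrix power normalization, typically with exponent 0.5) reduces to an eigendecomposition with power-normalized eigenvalues. A minimal NumPy sketch, where the small diagonal regularizer `eps` is an assumption standing in for robust covariance estimation:

```python
import numpy as np

def mpn_cov(X, alpha=0.5, eps=1e-5):
    """Matrix power normalized covariance pooling.

    X : (n, d) array of n spatial positions with d feature channels.
    Returns cov(X)^alpha computed via eigendecomposition; alpha = 0.5
    gives the matrix square root used by MPN-COV.
    """
    Xc = X - X.mean(axis=0, keepdims=True)
    cov = Xc.T @ Xc / (X.shape[0] - 1)
    cov += eps * np.eye(cov.shape[0])          # regularize the small-sample estimate
    w, V = np.linalg.eigh(cov)                 # symmetric eigendecomposition
    w = np.clip(w, 0.0, None) ** alpha         # power-normalize the eigenvalues
    return (V * w) @ V.T                       # recompose: V diag(w^alpha) V^T
```

By construction the result is symmetric, and squaring the `alpha = 0.5` output recovers the regularized covariance matrix, which is a quick sanity check on any implementation.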
1703.08198
2604861954
Codd's relational model describes just one possible world. To better cope with incomplete information, extended database models allow several possible worlds. Vague tables are one such convenient extended model where attributes accept sets of possible values (e.g., the manager is either Jill or Bob). However, conceptual database design in such cases remains an open problem. In particular, there is no canonical definition of functional dependencies (FDs) over possible worlds (e.g., each employee has just one manager). We identify several desirable properties that the semantics of such FDs should meet including Armstrong's axioms, the independence from irrelevant attributes, seamless satisfaction and implied by strong satisfaction. We show that we can define FDs such that they have all our desirable properties over vague tables. However, we also show that no notion of FD can satisfy all our desirable properties over a more general model (disjunctive tables). Our work formalizes a trade-off between having a general model and having well-behaved FDs.
There is a large body of work on FDs for extended models. We can distinguish three main approaches: work that deals with incomplete data by using nulls or sets of possible values (including disjunctive databases) @cite_16 @cite_23 ; work that adds information other than values (such as possibilistic or probabilistic databases) @cite_4 @cite_6 @cite_29 @cite_30 @cite_19 ; and work that does not deal with simple values (fuzzy databases, where values can be fuzzy functions) @cite_1 @cite_14 @cite_3 @cite_15 @cite_7 .
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_4", "@cite_7", "@cite_29", "@cite_1", "@cite_6", "@cite_3", "@cite_19", "@cite_23", "@cite_15", "@cite_16" ], "mid": [ "1585093348", "2132016107", "2006650729", "", "", "1990061404", "1605533566", "", "1986099962", "2535582266", "1990979778", "2045566234" ], "abstract": [ "The problem of maintaining consistency via functional dependencies (FDs) has been studied and analyzed extensively within traditional database settings. There have also been many probabilistic data models proposed in the past decades. However, the problem of maintaining consistency in probabilistic relations via FDs is still unclear. In this paper, we clarify the concept of FDs in probabilistic relations and present an efficient chase algorithm LPChase(r,F) for maintaining consistency of a probabilistic relation r with respect to an FD set F. LPChase(r,F) adopts a novel approach that uses Linear Programming (LP) method to modify the probability of data values in r. There are many benefits of our approach. First, LPChase(r,F) guarantees that the output result is always the minimal change to r. Second, assuming that the expected size of an active domain consisting data values with non-zero probability is fixed, we demonstrate the interesting result that the LP solving time in LPChase(r,F) decreases as the probabilistic data domains grow, and becomes negligible for large domain size. On the other hand, the I O time and modeling time become stable even when the domain size increases.", "A new approach, to measure normalization completeness for conceptual model, is introduced using quantitative fuzzy functionality in this paper. We measure the normalization completeness of the conceptual model in two steps. In the first step, different normalization techniques are analyzed up to Boyce Codd Normal Form (BCNF) to find the current normal form of the relation. In the second step, fuzzy membership values are used to scale the normal form between 0 and 1. 
Case studies to explain schema transformation rules and measurements. Normalization completeness is measured by considering completeness attributes, preventing attributes of the functional dependencies and total number of attributes such as if the functional dependency is non-preventing then the attributes of that functional dependency are completeness attributes. The attributes of functional dependency which prevent to go to the next normal form are called preventing attributes.", "This paper is situated in the area of fuzzy databases, i.e., databases containing imprecise information. More precisely, it deals with the notion of a functional dependency when data involved in this type of property are possibly imprecise. Contrary to previous works, the idea is not to fuzzify the concept of a functional dependency. In the view suggested here, we rather consider regular functional dependencies and we study the impact of the presence of such FDs on the insertion and handling of imprecise data.", "", "", "Abstract The paper contains an analysis of integrity constraints for n-ary relations in fuzzy databases. Apart from dependencies between all attributes there may be also dependencies describing relationships of fewer attributes. However, there is no complete arbitrariness. Relationships comprising (n-1) attributes must not infringe integrity constraints of the n-ary relation. The analysis is carried out using the theory of functional dependencies. In this paper, we assume that attribute values are represented by means of interval-valued possibility distributions. The notion of fuzzy functional dependency has been appropriately extended according to the representation of fuzzy data. 
The paper formulates the rules to which fuzzy functional dependencies between (n-1) attributes of the n-ary relation must be subordinated.", "In this paper, we introduce a definition of the concept of a functional dependency (FD) in the context of databases containing ill-known attributes values represented by possibility distributions. Contrary to previous proposals, this definition is based on the possible worlds model and consists in viewing the satisfaction of an FD by a relation as an uncertain event whose possibility and necessity can be quantified. We give the principle of a method for incrementally computing the related possibility and necessity degrees and tackle the issue of tuple refinement in the presence of an FD.", "", "Vague information is common in many database applications due to intensive data dissemination arising from different pervasive computing sources, such as the high volume data obtained from sensor networks and mobile communications. In this paper, we utilize functional dependencies (FDs) and inclusion dependencies (INDs), which are the most fundamental integrity constraints that arise in practice in relational databases, to maintain the consistency of a vague database. First, we tackle the problem, given a vague relation r and a set of FDs F, of how to obtain the ''best'' approximation of r with respect to F when taking into account the median membership (m) and the imprecision membership (i) thresholds. Using these two thresholds of a vague set, we define the notion of mi-overlap between vague sets and a merge operation on r. Second, we consider, given a vague database d and a set of INDs N, how to obtain the minimal possible change in value-precision for d. Satisfaction of an FD in r is defined in terms of values being mi-overlapping while satisfaction of an IND in d is defined in terms of value-precision. 
We show that Lien's and Atzeni's axiom system is sound and complete for FDs being satisfied in vague relations and that 's axiom system is sound and complete for INDs being satisfied in vague databases. Finally, we study the chase procedure VChase(d,[email protected]?N) as a means to maintain consistency of d with respect to F and N. Our main result is that the output of the procedure is the most object-precise approximation of r with respect to F and the minimum value-precision change of d with respect to N. The complexity of VChase(r,F) is polynomial time in the sizes of r and F whereas the complexity of VChase(d,[email protected]?N) is exponential.", "We investigate the impact of uncertainty on relational data -base schema design. Uncertainty is modeled qualitatively by assigning to tuples a degree of possibility with which they occur, and assigning to functional dependencies a degree of certainty which says to which tuples they apply. A design theory is developed for possibilistic functional dependencies, including efficient axiomatic and algorithmic characterizations of their implication problem. Naturally, the possibility degrees of tuples result in a scale of different degrees of data redundancy. Scaled versions of the classical syntactic Boyce-Codd and Third Normal Forms are established and semantically justified in terms of avoiding data redundancy of different degrees. Classical decomposition and synthesis techniques are scaled as well. Therefore, possibilistic functional dependencies do not just enable designers to control the levels of data integrity and losslessness targeted but also to balance the classical trade-off between query and update efficiency. Extensive experiments confirm the efficiency of our framework and provide original insight into relational schema design.", "Handling missing data is widely studied to make proper replacement and reduce uncertainty of data. Several approaches have been proposed for providing the most possible results. 
However, few studies provide solutions to the problem of missing data in extended possibility-based fuzzy relational (EPFR) databases. This type of problem in the context of EPFR databases is difficult to resolve because of the complexity of the data involved. In this paper, we propose an approach of filling missing data and query processing of the databases. To obtain the rational predict of the missing data, we adopt a concept and measurement of proximate equality of tuples to define data operation and fuzzy functional dependency (FFD). We provide a method to predict the missing data and replace the data based on our proposal. The results of the missing value process preserve those FFDs that hold in the original database instance.", "Abstract Incomplete relations are relations which contain null values, whose meaning is “value is at present unknown”. Such relations give rise to two types of functional dependency (FD). The first type, called the strong FD (SFD), is satisfied in an incomplete relation if for all possible worlds of this relation the FD is satisfied in the standard way. The second type, called the weak FD (WFD), is satisfied in an incomplete relation if there exists a possible world of this relation in which the FD is satisfied in the standard way. We exhibit a sound and complete axiom system for both strong and weak FDs, which takes into account the interaction between SFDs and WFDs. An interesting feature of the combined axiom system is that it is not k -ary for any natural number k ⩾ 0. We show that the combined implication problem for SFDs and WFDs can be solved in time polynomial in the size of the input set of FDs. Finally, we show that Armstrong relations exist for SFDs and WFDs." ] }
1703.08198
2604861954
Codd's relational model describes just one possible world. To better cope with incomplete information, extended database models allow several possible worlds. Vague tables are one such convenient extended model where attributes accept sets of possible values (e.g., the manager is either Jill or Bob). However, conceptual database design in such cases remains an open problem. In particular, there is no canonical definition of functional dependencies (FDs) over possible worlds (e.g., each employee has just one manager). We identify several desirable properties that the semantics of such FDs should meet including Armstrong's axioms, the independence from irrelevant attributes, seamless satisfaction and implied by strong satisfaction. We show that we can define FDs such that they have all our desirable properties over vague tables. However, we also show that no notion of FD can satisfy all our desirable properties over a more general model (disjunctive tables). Our work formalizes a trade-off between having a general model and having well-behaved FDs.
In the first approach, it is common to consider the database as denoting a set of possible worlds. This line of research has applied the tools of modal logic to the study of data dependencies @cite_12 .
{ "cite_N": [ "@cite_12" ], "mid": [ "1970372413" ], "abstract": [ "We study functional and multivalued dependencies over SQL tables with NOT NULL constraints. Under a no-information interpretation of null values we develop tools for reasoning. We further show that in the absence of NOT NULL constraints the associated implication problem is equivalent to that in propositional fragments of Priest's paraconsistent Logic of Paradox. Subsequently, we extend the equivalence to Boolean dependencies and to the presence of NOT NULL constraints using Schaerf and Cadoli's S-3 logics where S corresponds to the set of attributes declared NOT NULL. The findings also apply to Codd's interpretation \"value at present unknown\" utilizing a weak possible world semantics. Our results establish NOT NULL constraints as an effective mechanism to balance the expressiveness and tractability of consequence relations, and to control the degree by which the existing classical theory of data dependencies can be soundly approximated in practice." ] }
1703.08198
2604861954
Codd's relational model describes just one possible world. To better cope with incomplete information, extended database models allow several possible worlds. Vague tables are one such convenient extended model where attributes accept sets of possible values (e.g., the manager is either Jill or Bob). However, conceptual database design in such cases remains an open problem. In particular, there is no canonical definition of functional dependencies (FDs) over possible worlds (e.g., each employee has just one manager). We identify several desirable properties that the semantics of such FDs should meet including Armstrong's axioms, the independence from irrelevant attributes, seamless satisfaction and implied by strong satisfaction. We show that we can define FDs such that they have all our desirable properties over vague tables. However, we also show that no notion of FD can satisfy all our desirable properties over a more general model (disjunctive tables). Our work formalizes a trade-off between having a general model and having well-behaved FDs.
The second approach, based on possibilistic and probabilistic databases, can be considered an extension of the possible worlds framework @cite_4 @cite_6 @cite_29 . However, Link and Prade @cite_23 assign possibilities to tuples, not to values. Based on these possibilistic tuples, possible worlds are generated as a nested chain: the smallest world contains only the fully possible tuples; the largest world contains all tuples. Certainty degrees are attached to standard FDs, based on the possibility degree of the smallest world in which they are violated.
{ "cite_N": [ "@cite_23", "@cite_29", "@cite_4", "@cite_6" ], "mid": [ "2535582266", "", "2006650729", "1605533566" ], "abstract": [ "We investigate the impact of uncertainty on relational data -base schema design. Uncertainty is modeled qualitatively by assigning to tuples a degree of possibility with which they occur, and assigning to functional dependencies a degree of certainty which says to which tuples they apply. A design theory is developed for possibilistic functional dependencies, including efficient axiomatic and algorithmic characterizations of their implication problem. Naturally, the possibility degrees of tuples result in a scale of different degrees of data redundancy. Scaled versions of the classical syntactic Boyce-Codd and Third Normal Forms are established and semantically justified in terms of avoiding data redundancy of different degrees. Classical decomposition and synthesis techniques are scaled as well. Therefore, possibilistic functional dependencies do not just enable designers to control the levels of data integrity and losslessness targeted but also to balance the classical trade-off between query and update efficiency. Extensive experiments confirm the efficiency of our framework and provide original insight into relational schema design.", "", "This paper is situated in the area of fuzzy databases, i.e., databases containing imprecise information. More precisely, it deals with the notion of a functional dependency when data involved in this type of property are possibly imprecise. Contrary to previous works, the idea is not to fuzzify the concept of a functional dependency. 
In the view suggested here, we rather consider regular functional dependencies and we study the impact of the presence of such FDs on the insertion and handling of imprecise data.", "In this paper, we introduce a definition of the concept of a functional dependency (FD) in the context of databases containing ill-known attributes values represented by possibility distributions. Contrary to previous proposals, this definition is based on the possible worlds model and consists in viewing the satisfaction of an FD by a relation as an uncertain event whose possibility and necessity can be quantified. We give the principle of a method for incrementally computing the related possibility and necessity degrees and tackle the issue of tuple refinement in the presence of an FD." ] }
1703.08198
2604861954
Codd's relational model describes just one possible world. To better cope with incomplete information, extended database models allow several possible worlds. Vague tables are one such convenient extended model where attributes accept sets of possible values (e.g., the manager is either Jill or Bob). However, conceptual database design in such cases remains an open problem. In particular, there is no canonical definition of functional dependencies (FDs) over possible worlds (e.g., each employee has just one manager). We identify several desirable properties that the semantics of such FDs should meet including Armstrong's axioms, the independence from irrelevant attributes, seamless satisfaction and implied by strong satisfaction. We show that we can define FDs such that they have all our desirable properties over vague tables. However, we also show that no notion of FD can satisfy all our desirable properties over a more general model (disjunctive tables). Our work formalizes a trade-off between having a general model and having well-behaved FDs.
Work on fuzzy databases is of a different nature, in that database values are not considered atomic entities, but are fuzzy (membership) functions over some base set @cite_10 . For instance, given a domain Age , the fuzzy values young and infant can be seen as functions giving a degree of membership to each value in Age . These functions can even be modified, e.g., the function very young can be considered a modification (a translation of sorts) of young . Thus, when considering an FD @math and two tuples @math in some table, we usually need to decide whether @math (for some @math ). In the case of fuzzy databases (when @math and @math can be fuzzy functions), we need to rely on fuzzy logic to determine their degree of similarity---in other words, we have no crisp, binary equality relation @cite_24 . The exact concept of FD depends on the underlying fuzzy logic being adopted @cite_1 .
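As a rough illustration of the graded notion of FD satisfaction, consider the sketch below; the resemblance measure (inspired by the EQUAL measure mentioned above, but with a made-up linear shape) and the choice of Gödel implication are assumptions of this sketch, not the definitions used in the cited works:

```python
def resemblance(a, b, scale=10.0):
    """Graded similarity of two numeric domain values (1 = identical).
    The linear shape and the scale are hypothetical choices."""
    return max(0.0, 1.0 - abs(a - b) / scale)

def goedel_implication(p, q):
    """One of several fuzzy implications in use: 1 if p <= q, else q."""
    return 1.0 if p <= q else q

def ffd_degree(rel, lhs, rhs):
    """Degree to which the fuzzy FD lhs -> rhs holds: for every tuple pair,
    similarity on lhs must imply at least as much similarity on rhs."""
    deg = 1.0
    for i, t in enumerate(rel):
        for u in rel[i + 1:]:
            eq_lhs = min(resemblance(t[a], u[a]) for a in lhs)
            eq_rhs = min(resemblance(t[b], u[b]) for b in rhs)
            deg = min(deg, goedel_implication(eq_lhs, eq_rhs))
    return deg

# two near-identical ages with very different risk scores pull the degree down
r = [{'age': 25, 'risk': 3}, {'age': 26, 'risk': 3}, {'age': 25, 'risk': 9}]
print(ffd_degree(r, ['age'], ['risk']))   # ~0.4
```

Swapping in a different implication operator (Łukasiewicz, Goguen, ...) changes the resulting degree, which is exactly the dependence on the underlying fuzzy logic noted above.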
{ "cite_N": [ "@cite_24", "@cite_1", "@cite_10" ], "mid": [ "2017978889", "1990061404", "" ], "abstract": [ "This paper deals with the application of fuzzy logic in a relational database environment with the objective of capturing more meaning of the data. It is shown that with suitable interpretations for the fuzzy membership functions, a fuzzy relational data model can be used to represent ambiguities in data values as well as impreciseness in the association among them. Relational operators for fuzzy relations have been studied, and applicability of fuzzy logic in capturing integrity constraints has been investigated. By introducing a fuzzy resemblance measure EQUAL for comparing domain values, the definition of classical functional dependency has been generalized to fuzzy functional dependency (ffd). The implication problem of ffds has been examined and a set of sound and complete inference axioms has been proposed. Next, the problem of lossless join decomposition of fuzzy relations for a given set of fuzzy functional dependencies is investigated. It is proved that with a suitable restriction on EQUAL, the design theory of a classical relational database with functional dependencies can be extended to fuzzy relations satisfying fuzzy functional dependencies.", "Abstract The paper contains an analysis of integrity constraints for -ary relations in fuzzy databases. Apart from dependencies between all attributes there may be also dependencies describing relationships of fewer attributes. However, there is no complete arbitrariness. Relationships comprising ( 1) attributes must not infringe integrity constraints of the -ary relation. The analysis is carried out using the theory of functional dependencies. In this paper, we assume that attribute values are represented by means of interval-valued possibility distributions. The notion of fuzzy functional dependency has been appropriately extended according to the representation of fuzzy data. 
The paper formulates the rules to which fuzzy functional dependencies between ( 1) attributes of the -ary relation must be subordinated.", "" ] }
1703.07869
2952532887
Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user's head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering.
evaluated the effects of display size on UPR and found, in simulation, that a tablet-sized display allows for significantly faster performance of a selection task than a handheld display, and that UPR outperformed DPR for a selection task @cite_26 . They also prototyped a UPR system with geometric reconstruction, using a Kinect to reconstruct the physical surroundings and a Wiimote for head tracking. In follow-up works, they proposed to replace the active depth sensor with a gradient-domain image-based rendering method combined with semi-dense stereo matching @cite_21 @cite_19 . Other authors also employ depth sensors for scene reconstruction in UPR @cite_10 .
{ "cite_N": [ "@cite_19", "@cite_26", "@cite_10", "@cite_21" ], "mid": [ "", "2025200503", "2013520250", "2342944698" ], "abstract": [ "", "In this paper we present a user study evaluating the benefits of geometrically correct user-perspective rendering using an Augmented Reality (AR) magic lens. In simulation we compared a user-perspective magic lens against the common device-perspective magic lens on both phone-sized and tablet-sized displays. Our results indicate that a tablet-sized display allows for significantly faster performance of a selection task and that a user-perspective lens has benefits over a device-perspective lens for a selection task. Based on these promising results, we created a proof-of-concept prototype, engineered with current off-the-shelf devices and software. To our knowledge, this is the first geometrically correct user-perspective magic lens.", "In this paper, we propose an interaction system which displays see-through images on the mobile display and that allows a user to interact with virtual objects overlaid on the see-through image using the user's hand. In this system, the camera which tracks the user's viewpoint is attached to the front of the mobile display and the depth camera which captures color and depth images of the user's hand and the background scene is attached to the back of the mobile display. Natural interaction with virtual objects using the user's hand is realized by displaying images so that the appearance of a space through the mobile display is consistent with that of the real space from the user's viewpoint. We implemented two applications to the system and showed the usefulness of this system in various AR applications.", "We present a new approach to rendering a geometrically-correct user-perspective view for a magic lens interface, based on leveraging the gradients in the real world scene. 
Our approach couples a recent gradient-domain image-based rendering method with a novel semi-dense stereo matching algorithm. Our stereo algorithm borrows ideas from PatchMatch, and adapts them to semi-dense stereo. This approach is implemented in a prototype device build from off-the-shelf hardware, with no active depth sensing. Despite the limited depth data, we achieve high-quality rendering for the user-perspective magic lens." ] }
1703.08144
2726158836
This paper presents a statistical method for use in music transcription that can estimate score times of note onsets and offsets from polyphonic MIDI performance signals. Because performed note durations can deviate largely from score-indicated values, previous methods had the problem of not being able to accurately estimate offset score times (or note values) and, thus, could only output incomplete musical scores. Based on observations that the pitch context and onset score times are influential on the configuration of note values, we construct a context-tree model that provides prior distributions of note values using these features and combine it with a performance model in the framework of Markov random fields. Evaluation results show that our method reduces the average error rate by around 40 percent compared to existing simple methods. We also confirmed that, in our model, the score model plays a more important role than the performance model, and it automatically captures the voice structure by unsupervised learning.
There have been many studies on converting MIDI performance signals into a form of musical score. Older studies @cite_11 @cite_34 used rule-based methods and networks in attempts to model the process of human perception of musical rhythm. Since around 2000, various statistical models have been proposed to combine the statistical nature of note sequences in musical scores with that of temporal fluctuations in music performance. The most popular approach is to use hidden Markov models (HMMs) @cite_2 @cite_27 @cite_8 @cite_29 @cite_10 . The score is described either as a Markov process on beat positions (metrical Markov model) @cite_2 @cite_8 @cite_29 or as a Markov model of notes (note Markov model) @cite_27 , and the performance model is often constructed as a state-space model with latent variables describing locally defined tempos. Recently, a merged-output HMM incorporating the multiple-voice structure has been proposed @cite_10 . Temperley @cite_26 proposed a score model similar to the metrical Markov model in which the hierarchical metrical structure is explicitly described. There are also studies that investigated probabilistic context-free grammar models @cite_30 .
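To make the metrical Markov model idea concrete, here is a toy Viterbi decoder over beat positions; the 4-position grid, the known constant tempo, the downbeat-favouring transition weights, and the Gaussian model of inter-onset intervals (IOIs) are all assumptions of this sketch, whereas real systems also infer tempo as a latent state:

```python
import math

GRID = 4          # beat-grid positions per bar (toy setting)
TEMPO = 0.5       # seconds per grid unit, assumed known and constant here

def log_trans(s, s_next):
    """Metrical Markov model: prior over the next onset's beat position.
    Downbeats (position 0) are favoured slightly (hypothetical weights)."""
    return math.log(0.4 if s_next == 0 else 0.2)

def log_obs(ioi, s, s_next):
    """Observed IOI ~ Gaussian around the notated duration it implies."""
    d = (s_next - s - 1) % GRID + 1          # score duration in grid units (1..GRID)
    return -((ioi - d * TEMPO) ** 2) / (2 * 0.05 ** 2)

def decode(iois):
    """Viterbi over beat positions given a sequence of performed IOIs."""
    v = [log_trans(0, s) for s in range(GRID)]   # favour starting on the downbeat
    back = []
    for ioi in iois:
        nv, bp = [], []
        for s2 in range(GRID):
            best = max(range(GRID),
                       key=lambda s: v[s] + log_trans(s, s2) + log_obs(ioi, s, s2))
            nv.append(v[best] + log_trans(best, s2) + log_obs(ioi, best, s2))
            bp.append(best)
        v, back = nv, back + [bp]
    path = [max(range(GRID), key=lambda s: v[s])]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return path[::-1]

# slightly rubato performance of quarter, quarter, half notes (grid durations 1, 1, 2)
print(decode([0.52, 0.48, 1.03]))   # [0, 1, 2, 0]: onsets snap to the grid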
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_8", "@cite_29", "@cite_27", "@cite_2", "@cite_34", "@cite_10", "@cite_11" ], "mid": [ "2063535335", "2150319510", "2400676457", "2123272248", "2169784142", "2024514957", "2120053230", "2583498619", "2149507868" ], "abstract": [ "This paper proposes a Bayesian approach for automatic music transcription of polyphonic MIDI signals based on generative modeling of onset occurrences of musical notes. Automatic music transcription involves two subproblems that are interdependent of each other: rhythm recognition and tempo estimation. When we listen to music, we are able to recognize its rhythm and tempo (or beat location) fairly easily even though there is ambiguity in determining the individual note values and tempo. This may be made possible through our empirical knowledge about rhythm patterns and tempo variations that possibly occur in music. To automate the process of recognizing the rhythm and tempo of music, we propose modeling the generative process of a MIDI signal of polyphonic music by combining the sub-process by which a musically natural tempo curve is generated and the sub-process by which a set of note onset positions is generated based on a 2-dimensional rhythm tree structure representation of music, and develop a parameter inference algorithm for the proposed model. We show some of the transcription results obtained with the present method.", "Abstract This article presents a probabilistic model of polyphonic music analysis. Taking a note pattern as input, the model combines three aspects of symbolic music analysis—metrical analysis, harmonic analysis, and stream segregation—into a single process, allowing it to capture the complex interactions between these structures. The model also yields an estimate of the probability of the note pattern itself; this has implications for the modelling of music transcription. 
I begin by describing the generative process that is assumed and the analytical process that is used to infer metrical, harmonic, and stream structures from a note pattern. I then present some tests of the model on metrical analysis and harmonic analysis, and discuss ongoing work to integrate the model into a transcription system.", "", "We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization) as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulation results suggest better results with sequential methods. The methods can be applied in both online and batch scenarios such as tempo tracking and transcription and are thus potentially useful in a number of music applications such as adaptive automatic accompaniment, score typesetting and music information retrieval.", "This paper describes a Hidden Markov Model (HMM)-based method of automatic transcription of MIDI (Musical Instrument Digital Interface) signals of performed music. The problem is formulated as recognition of a given sequence of fluctuating note durations to find the most likely intended note sequence utilizing the modern continuous speech recognition technique. 
Combining a stochastic model of deviating note durations and a stochastic grammar representing possible sequences of notes, the maximum likelihood estimate of the note sequence is searched in terms of Viterbi algorithm. The same principle is successfully applied to a joint problem of bar line allocation, time measure recognition, and tempo estimation. Finally, durations of consecutive spl eta n notes are combined to form a \"rhythm vector\" representing tempo-free relative durations of the notes and treated in the same framework. Significant improvements compared with conventional \"quantization\" techniques are shown.", "A method is presented for the rhythmic parsing problem: Given a sequence of observed musical note onset times, we simultaneously estimate the corresponding notated rhythm and tempo process. A graphical model is developed that represents the evolution of tempo and rhythm and relates these hidden quantities to an observable performance. The rhythm variables are discrete and the tempo and observation variables are continuous. We show how to compute the globally most likely configuration of the tempo and rhythm variables given an observation of note onset times. Experiments are presented on both MIDI data and a data set derived from an audio signal. A generalization to computing MAP estimates for arbitrary conditional Gaussian distributions is outlined.", "Musical time can be considered to be the product of two time scales: the discrete time intervals of a metrical structure and the continuous time scales of tempo changes and expressive timing (Clarke 1987a). In musical notation both kinds are present, although the notation of continuous time is less developed than that of metric time (often just a word like \"rubato\" or \"accelerando\" is notated in the score). In the experimental literature, different ways in which a musician can add continuous timing changes to the metrical score have been identified. 
There are systematic changes in certain rhythmic forms: for example, shortening triplets (Vos and Handel 1987) and timing differences occurring in voice leading with ensemble playing (Rasch 1979). Deliberate departures from metricality, such as rubato, seem to be used to emphasize musical struc- ture, as exemplified in the phrase-final lengthening principle formalized by Todd (1985). In addition to these effects, which are collectively called expressive timing, there are nonvoluntary effects, such as random timing errors caused by the limits in the accuracy of the motor system (Shaffer 1981) and errors in mental time-keeping processes (Vorberg and Hambuch 1978). These effects are generally rather small - in the order of 10-100 msec. To make sense of most musical styles, it is necessary to separate the discrete and continuous components of musical time. We will call this process of separation quantization, although the term is generally used to reflect only the extraction of a metrical score from a musical performance.", "In a recent conference paper, we have reported a rhythm transcription method based on a merged-output hidden Markov model HMM that explicitly describes the multiple-voice structure of polyphonic music. This model solves a major problem of conventional methods that could not properly describe the nature of multiple voices as in polyrhythmic scores or in the phenomenon of loose synchrony between voices. In this paper, we present a complete description of the proposed model and develop an inference technique, which is valid for any merged-output HMMs, for which output probabilities depend on past events. We also examine the influence of the architecture and parameters of the method in terms of accuracies of rhythm transcription and voice separation and perform comparative evaluations with six other algorithms. 
Using MIDI recordings of classical piano pieces, we found that the proposed model outperformed other methods by more than 12 points in the accuracy for polyrhythmic performances and performed almost as good as the best one for non-polyrhythmic performances. This reveals the state-of-the-art methods of rhythm transcription for the first time in the literature. Publicly available source codes are also provided for future comparisons.", "From the Publisher: Can humans compute? This is the question to which H. Christopher Longuet-Higgins, one of the founding figures of cognitive science, has devoted his research over the past twenty years. His and his field's intellectual odyssey from the fringe to the center of the scientific world's attention is recounted with wit and grace in this wide-ranging collection of previously published and original essays. The volume begins in the late 1960s, when the author had moved from theoretical chemistry to what was then known as theoretical biology. It traces his search for new concepts with which to establish a science of the mind, and it includes Longuet-Higgins's famous comment on the 1971 Lighthill Report in which he introduced the term \"cognitive science\" and sketched the possible components of the field. The essays are divided into five parts. The first, Generalities, explores the basic philosophical questions at the root of the new science. The essays on Music show the importance of the musical sense as a testing ground for understanding cognitive processes in general. The author's forays into Language describe some of the major early achievements in the now very active field of computational linguistics. The studies of Vision are all directed to the problem - crucial for the development of machine-vision systems - of inferring the structure of a scene from two views. 
The author suggests that the chapters on Memory \"be treated indulgently as the first attempt of a physical scientist to climb out of the mindless world of atoms and molecules into the real world of subjective experience.\" H. Christopher Longuet-Higgins is Royal Society ResearchProfessor at the University of Sussex. Mental Processes inaugurates the series Explorations in Cognitive Science, edited by Margaret Boden and co-sponsored by The MIT Press and The British Psychological Society. A Bradford Book." ] }
1703.08144
2726158836
This paper presents a statistical method for use in music transcription that can estimate score times of note onsets and offsets from polyphonic MIDI performance signals. Because performed note durations can deviate largely from score-indicated values, previous methods had the problem of not being able to accurately estimate offset score times (or note values) and, thus, could only output incomplete musical scores. Based on observations that the pitch context and onset score times are influential on the configuration of note values, we construct a context-tree model that provides prior distributions of note values using these features and combine it with a performance model in the framework of Markov random fields. Evaluation results show that our method reduces the average error rate by around 40 percent compared to existing simple methods. We also confirmed that, in our model, the score model plays a more important role than the performance model, and it automatically captures the voice structure by unsupervised learning.
A recent study @cite_10 reported results of a systematic evaluation of (onset) rhythm transcription methods. Two data sets, polyrhythmic and non-polyrhythmic, were used, and it was shown that HMM-based methods generally performed better than others and that the merged-output HMM was most effective for polyrhythmic data. In addition to its accuracy in recognising onset beat positions, the metrical HMM has the advantage of being able to estimate metrical structure, i.e., the metre (duple or triple) and bar (downbeat) positions, and to avoid grammatically incorrect score representations that appeared in other HMMs.
{ "cite_N": [ "@cite_10" ], "mid": [ "2583498619" ], "abstract": [ "In a recent conference paper, we have reported a rhythm transcription method based on a merged-output hidden Markov model HMM that explicitly describes the multiple-voice structure of polyphonic music. This model solves a major problem of conventional methods that could not properly describe the nature of multiple voices as in polyrhythmic scores or in the phenomenon of loose synchrony between voices. In this paper, we present a complete description of the proposed model and develop an inference technique, which is valid for any merged-output HMMs, for which output probabilities depend on past events. We also examine the influence of the architecture and parameters of the method in terms of accuracies of rhythm transcription and voice separation and perform comparative evaluations with six other algorithms. Using MIDI recordings of classical piano pieces, we found that the proposed model outperformed other methods by more than 12 points in the accuracy for polyrhythmic performances and performed almost as good as the best one for non-polyrhythmic performances. This reveals the state-of-the-art methods of rhythm transcription for the first time in the literature. Publicly available source codes are also provided for future comparisons." ] }
1703.08144
2726158836
This paper presents a statistical method for use in music transcription that can estimate score times of note onsets and offsets from polyphonic MIDI performance signals. Because performed note durations can deviate largely from score-indicated values, previous methods had the problem of not being able to accurately estimate offset score times (or note values) and, thus, could only output incomplete musical scores. Based on observations that the pitch context and onset score times are influential on the configuration of note values, we construct a context-tree model that provides prior distributions of note values using these features and combine it with a performance model in the framework of Markov random fields. Evaluation results show that our method reduces the average error rate by around 40 percent compared to existing simple methods. We also confirmed that, in our model, the score model plays a more important role than the performance model, and it automatically captures the voice structure by unsupervised learning.
As mentioned above, there have been only a few studies that discussed the recognition of note values in addition to onset score times. @cite_27 applied a similar method of estimating onset score times to estimating the note values of monophonic performances and reported that the recognition accuracy dropped from 97.3%. Temperley's Melisma Analyzer @cite_26 , based on a statistical model, outputs estimated onset and offset beat positions together with voice information for polyphonic music. There, offset score times are chosen from one of the following tactus beats according to some probabilities, or chosen as the onset position of the next note of the same voice. The recognition accuracy of note values has not been reported.
{ "cite_N": [ "@cite_27", "@cite_26" ], "mid": [ "2169784142", "2150319510" ], "abstract": [ "This paper describes a Hidden Markov Model (HMM)-based method of automatic transcription of MIDI (Musical Instrument Digital Interface) signals of performed music. The problem is formulated as recognition of a given sequence of fluctuating note durations to find the most likely intended note sequence utilizing the modern continuous speech recognition technique. Combining a stochastic model of deviating note durations and a stochastic grammar representing possible sequences of notes, the maximum likelihood estimate of the note sequence is searched in terms of Viterbi algorithm. The same principle is successfully applied to a joint problem of bar line allocation, time measure recognition, and tempo estimation. Finally, durations of consecutive spl eta n notes are combined to form a \"rhythm vector\" representing tempo-free relative durations of the notes and treated in the same framework. Significant improvements compared with conventional \"quantization\" techniques are shown.", "Abstract This article presents a probabilistic model of polyphonic music analysis. Taking a note pattern as input, the model combines three aspects of symbolic music analysis—metrical analysis, harmonic analysis, and stream segregation—into a single process, allowing it to capture the complex interactions between these structures. The model also yields an estimate of the probability of the note pattern itself; this has implications for the modelling of music transcription. I begin by describing the generative process that is assumed and the analytical process that is used to infer metrical, harmonic, and stream structures from a note pattern. I then present some tests of the model on metrical analysis and harmonic analysis, and discuss ongoing work to integrate the model into a transcription system." ] }
1703.08002
2603646652
Despite the remarkable progress recently made in distant speech recognition, state-of-the-art technology still suffers from a lack of robustness, especially when adverse acoustic conditions characterized by non-stationary noises and reverberation are met. A prominent limitation of current systems lies in the lack of matching and communication between the various technologies involved in the distant speech recognition process. The speech enhancement and speech recognition modules are, for instance, often trained independently. Moreover, the speech enhancement normally helps the speech recognizer, but the output of the latter is not commonly used, in turn, to improve the speech enhancement. To address both concerns, we propose a novel architecture based on a network of deep neural networks, where all the components are jointly trained and better cooperate with each other thanks to a full communication scheme between them. Experiments, conducted using different datasets, tasks and acoustic conditions, revealed that the proposed framework can overtake other competitive solutions, including recent joint training approaches.
More recently, the joint training methods outlined in Sec. have gained considerable attention. This work can be considered an evolution of such approaches, in which we employ a more advanced architecture based on full communication between the DNNs. Similarly to this work, an iterative pipeline based on feeding the speech recognition output into a speech enhancement DNN has recently been proposed in @cite_12 @cite_22 . The main difference with our approach is that the latter circumvents the chicken-and-egg problem by simply feeding the speech enhancement DNN with the speech recognition alignments generated at the previous iteration, while our solution addresses this issue by unrolling the interaction over the different levels discussed previously.
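The contrast can be sketched with stand-in "networks" (fixed random linear maps, no training): the unrolled forward pass below shows how the enhancement module re-reads the recognizer's posteriors at every interaction level within a single pass, instead of waiting for a previous iteration's alignments. The dimensions, the number of levels, and the zero-initialised posterior are toy assumptions:

```python
import math, random

random.seed(0)
F, C, L = 8, 3, 3   # feature dim, senone classes, unrolling levels (toy sizes)

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def softmax(z):
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

# stand-ins for the two jointly trained DNNs (fixed random linear maps here)
W_se = rand_matrix(F, F + C)   # speech-enhancement "network"
W_sr = rand_matrix(C, F)       # speech-recognition "network"

def forward(noisy, levels=L):
    """Unroll the SE<->SR interaction: at each level the enhancement module
    re-reads the noisy features together with the recognizer's current
    posteriors, which sidesteps the chicken-and-egg start-up problem."""
    post = [0.0] * C                      # level 0: no recognition output yet
    for _ in range(levels):
        enhanced = [math.tanh(x) for x in matvec(W_se, noisy + post)]
        post = softmax(matvec(W_sr, enhanced))
    return enhanced, post

enh, post = forward([random.gauss(0, 1) for _ in range(F)])
print(len(post), round(sum(post), 6))   # 3 1.0
```

In a trainable version, gradients would flow back through all the unrolled levels, so both modules are optimised jointly rather than in a fixed pipeline.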
{ "cite_N": [ "@cite_22", "@cite_12" ], "mid": [ "1482149378", "2245630067" ], "abstract": [ "Separation of speech embedded in non-stationary interference is a challenging problem that has recently seen dramatic improvements using deep network-based methods. Previous work has shown that estimating a masking function to be applied to the noisy spectrum is a viable approach that can be improved by using a signal-approximation based objective function. Better modeling of dynamics through deep recurrent networks has also been shown to improve performance. Here we pursue both of these directions. We develop a phase-sensitive objective function based on the signal-to-noise ratio (SNR) of the reconstructed signal, and show that in experiments it yields uniformly better results in terms of signal-to-distortion ratio (SDR). We also investigate improvements to the modeling of dynamics, using bidirectional recurrent networks, as well as by incorporating speech recognition outputs in the form of alignment vectors concatenated with the spectral input features. Both methods yield further improvements, pointing to tighter integration of recognition with separation as a promising future direction.", "Long Short-Term Memory (LSTM) recurrent neural network has proven effective in modeling speech and has achieved outstanding performance in both speech enhancement (SE) and automatic speech recognition (ASR). To further improve the performance of noise-robust speech recognition, a combination of speech enhancement and recognition was shown to be promising in earlier work. This paper aims to explore options for consistent integration of SE and ASR using LSTM networks. Since SE and ASR have different objective criteria, it is not clear what kind of integration would finally lead to the best word error rate for noise-robust ASR tasks. 
In this work, several integration architectures are proposed and tested, including: (1) a pipeline architecture of LSTM-based SE and ASR with sequence training, (2) an alternating estimation architecture, and (3) a multi-task hybrid LSTM network architecture. The proposed models were evaluated on the 2nd CHiME speech separation and recognition challenge task, and show significant improvements relative to prior results." ] }
1703.08139
2601694715
In the communication problem @math (universal relation) [KRW95], Alice and Bob respectively receive @math and @math in @math with the promise that @math . The last player to receive a message must output an index @math such that @math . We prove that the randomized one-way communication complexity of this problem in the public coin model is exactly @math bits for failure probability @math . Our lower bound holds even if promised @math . As a corollary, we obtain optimal lower bounds for @math -sampling in strict turnstile streams for @math , as well as for the problem of finding duplicates in a stream. Our lower bounds do not need to use large weights, and hold even if it is promised that @math at all points in the stream. Our lower bound demonstrates that any algorithm @math solving sampling problems in turnstile streams in low memory can be used to encode subsets of @math of certain sizes into a number of bits below the information-theoretic minimum. Our encoder makes adaptive queries to @math throughout its execution, but does so carefully so as not to violate correctness. This is accomplished by injecting random noise into the encoder's interactions with @math , which is loosely motivated by techniques in differential privacy. Our correctness analysis involves understanding the ability of @math to correctly answer adaptive queries which have positive but bounded mutual information with @math 's internal randomness, and may be of independent interest in the newly emerging area of adaptive data analysis with a theoretical computer science lens.
The question of whether @math -sampling is possible in low memory in turnstile streams was first asked in @cite_0 @cite_6 . The work @cite_6 applied @math -sampling as a subroutine for approximating the cost of the Euclidean minimum spanning tree of a subset @math of a discrete geometric space subject to insertions and deletions. The algorithm given there used @math bits of space to achieve failure probability @math (though it is likely that the space could be improved to @math with a worse failure probability, by replacing a subroutine used there with a more recent @math -estimation algorithm of @cite_20 ). As mentioned, the currently best known upper bound solves @math -sampling @math using @math bits @cite_29 , which Theorem shows is tight.
{ "cite_N": [ "@cite_0", "@cite_29", "@cite_20", "@cite_6" ], "mid": [ "2131726153", "", "2103126020", "2143606444" ], "abstract": [ "Emerging data stream management systems approach the challenge of massive data distributions which arrive at high speeds while there is only small storage by summarizing and mining the distributions using samples or sketches. However, data distributions can be \"viewed\" in different ways. A data stream of integer values can be viewed either as the forward distribution f (x), ie., the number of occurrences of x in the stream, or as its inverse, f-1 (i), which is the number of items that appear i times. While both such \"views\" are equivalent in stored data systems, over data streams that entail approximations, they may be significantly different. In other words, samples and sketches developed for the forward distribution may be ineffective for summarizing or mining the inverse distribution. Yet, many applications such as IP traffic monitoring naturally rely on mining inverse distributions.We formalize the problems of managing and mining inverse distributions and show provable differences between summarizing the forward distribution vs the inverse distribution. 
We present methods for summarizing and mining inverse distributions of data streams: they rely on a novel technique to maintain a dynamic sample over the stream with provable guarantees which can be used for variety of summarization tasks (building quantiles or equidepth histograms) and mining (anomaly detection: finding heavy hitters, and measuring the number of rare items), all with provable guarantees on quality of approximations and time space used by our streaming methods.We also complement our analytical and algorithmic results by presenting an experimental study of the methods over network data streams.", "", "We give the first optimal algorithm for estimating the number of distinct elements in a data stream, closing a long line of theoretical research on this problem begun by Flajolet and Martin in their seminal paper in FOCS 1983. This problem has applications to query optimization, Internet routing, network topology, and data mining. For a stream of indices in 1,...,n , our algorithm computes a (1 ± e)-approximation using an optimal O(1 e-2 + log(n)) bits of space with 2 3 success probability, where 0 We also give an algorithm to estimate the Hamming norm of a stream, a generalization of the number of distinct elements, which is useful in data cleaning, packet tracing, and database auditing. Our algorithm uses nearly optimal space, and has optimal O(1) update and reporting times.", "A dynamic geometric data stream is a sequence of m ADD REMOVE operations of points from a discrete geometric space 1,…, Δ d ?. ADD (p) inserts a point p from 1,…, Δ d into the current point set P, REMOVE(p) deletes p from P. We develop low-storage data structures to (i) maintain e-nets and e-approximations of range spaces of P with small VC-dimension and (ii) maintain a (1 + e)-approximation of the weight of the Euclidean minimum spanning tree of P. 
Our data structure for e-nets uses bits of memory and returns with probability 1 – δ a set of points that is an e-net for an arbitrary fixed finite range space with VC-dimension . Our data structure for e-approximations uses bits of memory and returns with probability 1 – δ a set of points that is an e-approximation for an arbitrary fixed finite range space with VC-dimension . The data structure for the approximation of the weight of a Euclidean minimum spanning tree uses O(log(1 δ)(log Δ e)O(d)) space and is correct with probability at least 1 – δ. Our results are based on a new data structure that maintains a set of elements chosen (almost) uniformly at random from P." ] }
1703.08252
2952908598
Estimating distributions of node characteristics (labels) such as number of connections or citizenship of users in a social network via edge and node sampling is a vital part of the study of complex networks. Due to its low cost, sampling via a random walk (RW) has been proposed as an attractive solution to this task. Most RW methods assume either that the network is undirected or that walkers can traverse edges regardless of their direction. Some RW methods have been designed for directed networks where edges coming into a node are not directly observable. In this work, we propose Directed Unbiased Frontier Sampling (DUFS), a sampling method based on a large number of coordinated walkers, each starting from a node chosen uniformly at random. It is applicable to directed networks with invisible incoming edges because it constructs, in real time, an undirected graph consistent with the walkers' trajectories, and because of its use of random jumps, which prevent walkers from being trapped. DUFS generalizes previous RW methods and is suited to undirected networks and to directed networks regardless of in-edge visibility. We also propose an improved estimator of node label distributions that combines information from the initial walker locations with subsequent RW observations. We evaluate DUFS, compare it to other RW methods, investigate the impact of its parameters on estimation accuracy, and provide practical guidelines for choosing them. In estimating out-degree distributions, DUFS yields significantly better estimates of the head of the distribution than other methods, while matching or exceeding estimation accuracy on the tail. Last, we show that DUFS outperforms uniform node sampling when estimating distributions of node labels of the top 10 largest-degree nodes, even when sampling a node uniformly has the same cost as RW steps.
In the last decade, there has been a growing interest in graph sketching for processing massive networks. A sketch is a compact representation of data. Unlike a sample, a sketch is computed over the entire graph, which is observed as a data stream. For a survey of graph sketching techniques, see @cite_24 .
{ "cite_N": [ "@cite_24" ], "mid": [ "2016289973" ], "abstract": [ "Over the last decade, there has been considerable interest in designing algorithms for processing massive graphs in the data stream model. The original motivation was two-fold: a) in many applications, the dynamic graphs that arise are too large to be stored in the main memory of a single machine and b) considering graph problems yields new insights into the complexity of stream computation. However, the techniques developed in this area are now finding applications in other areas including data structures for dynamic graphs, approximation algorithms, and distributed and parallel computation. We survey the state-of-the-art results; identify general techniques; and highlight some simple algorithms that illustrate basic ideas." ] }
1703.07645
2951518303
In this paper, we approach the problem of segmentation-free query-by-string word spotting for handwritten documents. In other words, we use methods inspired by computer vision and machine learning to search for words in large collections of digitized manuscripts. In particular, we are interested in historical handwritten texts, which are often far more challenging than modern printed documents. This task is important, as it provides people with a way to quickly find what they are looking for in large collections that are tedious and difficult to read manually. To this end, we introduce an end-to-end trainable model based on deep neural networks that we call Ctrl-F-Net. Given a full manuscript page, the model simultaneously generates region proposals and embeds these into a distributed word embedding space, where searches are performed. We evaluate the model on common benchmarks for handwritten word spotting, outperforming the previous state-of-the-art segmentation-free approaches by a large margin, and in some cases even segmentation-based approaches. One interesting real-life application of our approach is to help historians find and count specific words in court records that are related to women's sustenance activities and division of labor. We provide promising preliminary experiments that validate our method on this task.
The second category consists of methods based on connected components @cite_27 @cite_29 . These methods typically binarize the manuscript image, extract connected components, group them in a bottom-up fashion following some heuristics, and finally extract bounding boxes. A similar approach is used in @cite_48 for matching entire documents using distributions of word images. While these methods still produce too many regions, they produce fewer than sliding-window approaches and are less sensitive to shifts in the input.
{ "cite_N": [ "@cite_48", "@cite_27", "@cite_29" ], "mid": [ "2950585977", "2142636459", "" ], "abstract": [ "We address the problem of predicting similarity between a pair of handwritten document images written by different individuals. This has applications related to matching and mining in image collections containing handwritten content. A similarity score is computed by detecting patterns of text re-usages between document images irrespective of the minor variations in word morphology, word ordering, layout and paraphrasing of the content. Our method does not depend on an accurate segmentation of words and lines. We formulate the document matching problem as a structured comparison of the word distributions across two document images. To match two word images, we propose a convolutional neural network (CNN) based feature descriptor. Performance of this representation surpasses the state-of-the-art on handwritten word spotting. Finally, we demonstrate the applicability of our method on a practical problem of matching handwritten assignments.", "In this paper we propose a segmentation-free query by string word spotting method. Both the documents and query strings are encoded using a recently proposed word representation that projects images and strings into a common attribute space based on a Pyramidal Histogram of Characters (PHOC). These attribute models are learned using linear SVMs over the Fisher Vector [8] representation of the images along with the PHOC labels of the corresponding strings. In order to search through the whole page, document regions are indexed per character bi-gram using a similar attribute representation. On top of that, we propose an integral image representation of the document using a simplified version of the attribute model for efficient computation. Finally we introduce a re-ranking step in order to boost retrieval performance. 
We show state-of-the-art results for segmentation-free query by string word spotting in single-writer and multi-writer standard datasets.", "" ] }
1703.07570
2951087142
In this paper, we present a novel approach, called Deep MANTA (Deep Many-Tasks), for many-task vehicle analysis from a given image. A robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation. Its architecture is based on a new coarse-to-fine object proposal that boosts vehicle detection. Moreover, the Deep MANTA network is able to localize vehicle parts even if these parts are not visible. At inference time, the network's outputs are used by a real-time robust pose estimation algorithm for fine orientation estimation and 3D vehicle localization. We show in experiments that our method outperforms monocular state-of-the-art approaches on vehicle detection, orientation and 3D location tasks on the very challenging KITTI benchmark.
To go further than 2D reasoning, several approaches are designed to detect vehicles in 3D space and are able to give a detailed 3D object representation. Some of them fit 3D models @cite_11 @cite_36 @cite_42 @cite_7 or active shape models @cite_40 @cite_6 @cite_24 @cite_16 @cite_22 , or predict 3D voxel patterns @cite_47 , to recover the exact 3D pose and a detailed object representation. These methods generally use an initialization step providing the 2D bounding box and coarse viewpoint information. More recently, 3D object proposals have been generated from monocular images @cite_12 or disparity maps @cite_17 . In these approaches, 3D object proposals are projected into 2D bounding boxes and given to a CNN-based detector which jointly predicts the class of the object proposal and the object's fine orientation (using angle regression). In the proposed approach, vehicle fine orientation is estimated using a robust 2D/3D vehicle part matching: the 2D/3D pose matrix is computed using all vehicle parts (visible or hidden), in contrast to other methods such as @cite_40 @cite_9 @cite_23 @cite_41 which focus on visible parts only. This clearly increases the precision of orientation estimation.
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_36", "@cite_41", "@cite_9", "@cite_42", "@cite_17", "@cite_6", "@cite_24", "@cite_40", "@cite_23", "@cite_47", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2065090505", "", "1895541646", "", "", "", "2184393491", "2145283077", "", "2020575505", "", "1946609740", "", "2468368736", "2071634722" ], "abstract": [ "In this work we seek to move away from the traditional paradigm for 2D object recognition whereby objects are identified in the image as 2D bounding boxes. We focus instead on: i) detecting objects; ii) identifying their 3D poses; iii) characterizing the geometrical and topological properties of the objects in terms of their aspect configurations in 3D. We call such characterization an object's aspect layout (see Fig. 1). We propose a new model for solving these problems in a joint fashion from a single image for object categories. Our model is constructed upon a novel framework based on conditional random fields with maximal margin parameter estimation. Extensive experiments are conducted to evaluate our model's performance in determining object pose and layout from images. We achieve superior viewpoint accuracy results on three public datasets and show extensive quantitative analysis to demonstrate the ability of accurately recovering the aspect layout of objects.", "", "Object class detection has been a synonym for 2D bounding box localization for the longest time, fueled by the success of powerful statistical learning techniques, combined with robust image representations. Only recently, there has been a growing interest in revisiting the promise of computer vision from the early days: to precisely delineate the contents of a visual scene, object by object, in 3D. In this paper, we draw from recent advances in object detection and 2D-3D object lifting in order to design an object class detector that is particularly tailored towards 3D object class detection. 
Our 3D object class detection method consists of several stages gradually enriching the object detection output with object viewpoint, keypoints and 3D shape estimates. Following careful design, in each stage it constantly improves the performance and achieves state-of-the-art performance in simultaneous 2D bounding box and viewpoint estimation on the challenging Pascal3D+ [50] dataset.", "", "", "", "The goal of this paper is to generate high-quality 3D object proposals in the context of autonomous driving. Our method exploits stereo imagery to place proposals in the form of 3D bounding boxes. We formulate the problem as minimizing an energy function encoding object size priors, ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. Combined with convolutional neural net (CNN) scoring, our approach outperforms all existing results on all three KITTI object classes.", "Despite the success of current state-of-the-art object class detectors, severe occlusion remains a major challenge. This is particularly true for more geometrically expressive 3D object class representations. While these representations have attracted renewed interest for precise object pose estimation, the focus has mostly been on rather clean datasets, where occlusion is not an issue. In this paper, we tackle the challenge of modeling occlusion in the context of a 3D geometric object class model that is capable of fine-grained, part-level 3D object reconstruction. Following the intuition that 3D modeling should facilitate occlusion reasoning, we design an explicit representation of likely geometric occlusion patterns. 
Robustness is achieved by pooling image evidence from of a set of fixed part detectors as well as a non-parametric representation of part configurations in the spirit of pose lets. We confirm the potential of our method on cars in a newly collected data set of inner-city street scenes with varying levels of occlusion, and demonstrate superior performance in occlusion estimation and part localization, compared to baselines that are unaware of occlusions.", "", "Geometric 3D reasoning has received renewed attention recently, in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative or coarse-grained quantitative representations. This is linked to the fact that today's object class detectors are tuned towards robust 2D matching rather than accurate 3D pose estimation, encouraged by 2D bounding box-based benchmarks such as Pascal VOC. In this paper, we therefore revisit ideas from the early days of computer vision, namely, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just 2D bounding boxes, including relative 3D positions of object parts. In combination with recent robust techniques for shape description and inference, our approach outperforms state-of-the-art results in 3D pose estimation, while at the same time improving 2D localization. In a series of experiments, we analyze our approach in detail, and demonstrate novel applications enabled by our geometric object class representation, such as fine-grained categorization of cars according to their 3D geometry and ultra-wide baseline matching.", "", "Despite the great progress achieved in recognizing objects as 2D bounding boxes in images, it is still very challenging to detect occluded objects and estimate the 3D properties of multiple objects from a single image. 
In this paper, we propose a novel object representation, 3D Voxel Pattern (3DVP), that jointly encodes the key properties of objects including appearance, 3D shape, viewpoint, occlusion and truncation. We discover 3DVPs in a data-driven way, and train a bank of specialized detectors for a dictionary of 3DVPs. The 3DVP detectors are capable of detecting objects with specific visibility patterns and transferring the meta-data from the 3DVPs to the detected objects, such as 2D segmentation mask, 3D pose as well as occlusion or truncation boundaries. The transferred meta-data allows us to infer the occlusion relationship among objects, which in turn provides improved object recognition results. Experiments are conducted on the KITTI detection benchmark [17] and the outdoor-scene dataset [41]. We improve state-of-the-art results on car detection and pose estimation with notable margins (6 in difficult data of KITTI). We also verify the ability of our method in accurately segmenting objects from the background and localizing them in 3D.", "", "The goal of this paper is to perform 3D object detection from a single monocular image in the domain of autonomous driving. Our method first aims to generate a set of candidate class-specific object proposals, which are then run through a standard CNN pipeline to obtain highquality object detections. The focus of this paper is on proposal generation. In particular, we propose an energy minimization approach that places object candidates in 3D using the fact that objects should be on the ground-plane. We then score each candidate box projected to the image plane via several intuitive potentials encoding semantic segmentation, contextual information, size and location priors and typical object shape. 
Our experimental evaluation demonstrates that our object proposal generation approach significantly outperforms all monocular approaches, and achieves the best detection performance on the challenging KITTI benchmark, among published monocular competitors.", "We address the problem of localizing and estimating the fine-pose of objects in the image with exact 3D models. Our main focus is to unify contributions from the 1970s with recent advances in object detection: use local keypoint detectors to find candidate poses and score global alignment of each candidate pose to the image. Moreover, we also provide a new dataset containing fine-aligned objects with their exactly matched 3D models, and a set of models for widely used objects. We also evaluate our algorithm both on object detection and fine pose estimation, and show that our method outperforms state-of-the art algorithms." ] }
1703.07726
2599105001
In economics and psychology, delay discounting is often used to characterize how individuals choose between a smaller immediate reward and a larger delayed reward. People with a higher delay discounting rate (DDR) often choose smaller but more immediate rewards (a "today person"). In contrast, people with a lower discounting rate often choose larger future rewards (a "tomorrow person"). Since the ability to modulate the desire for immediate gratification in favor of long-term rewards plays an important role in our decision-making, a lower discounting rate often predicts better social, academic and health outcomes. In contrast, a higher discounting rate is often associated with problematic behaviors such as alcohol and drug abuse, pathological gambling and credit card default. Thus, research on understanding and moderating delay discounting has the potential to produce substantial societal benefits.
There is a rich body of research in economics and behavioral science that investigates the relationship between DDR and real-world human behaviors such as drug abuse @cite_26 , obesity @cite_18 , pathological gambling @cite_7 , drinking @cite_19 , smoking @cite_14 , internet addiction @cite_27 and credit card default @cite_24 . Such studies often involve a small number of participants (e.g., a few dozen or a few hundred). Experimental data are often obtained using questionnaires, surveys or interviews. Statistical analyses such as correlation or regression are often employed to study the relationship between different test variables (e.g., DDR and smoking habits).
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_26", "@cite_7", "@cite_24", "@cite_19", "@cite_27" ], "mid": [ "1997907521", "1968161699", "2113324723", "2016708334", "2166741051", "1999782698", "2103472510" ], "abstract": [ "Abstract Delay discounting (DD) is a measure of the degree to which an individual is driven by immediate gratification vs. the prospect of larger, but delayed, rewards. Because of hypothesized parallels between drug addiction and obesity, and reports of increased delay discounting in drug-dependent individuals, we hypothesized that obese individuals would show higher rates of discounting than controls. Obese and healthy-weight age-matched participants of both sexes completed two versions of a DD of money task, allowing us to calculate how subjective value of @math 50,000 declined as delay until hypothetical delivery increased from 2 weeks to 10 years. On both tasks, obese women (N = 29) showed greater delay discounting than control women did (N = 26; P values", "Rationale: Impulsivity is implicated in drug dependence. Recent studies show problems with alcohol and opioid dependence are associated with rapid discounting of the value of delayed outcomes. Furthermore, discounting may be particularly steep for the drug of dependence. Objectives: We determined if these findings could be extended to the behavior of cigarette smokers. In particular, we compared the discounting of hypothetical monetary outcomes by current, never, and ex-smokers of cigarettes. We also examined discounting of delayed hypothetical cigarettes by current smokers. Methods: Current cigarette smokers (n=23), never-smokers (n=22) and ex-smokers (n=21) indicated preference for immediate versus delayed money in a titration procedure that determined indifference points at various delays. The titration procedure was repeated with cigarettes for smokers. 
The degree to which the delayed outcomes were discounted was estimated with two non-linear decay models: an exponential model and a hyperbolic model. Results: Current smokers discounted the value of delayed money more than did the comparison groups. Never- and ex-smokers did not differ in their discounting of money. For current smokers, delayed cigarettes lost subjective value more rapidly than delayed money. The hyperbolic equation provided better fits to the data than did the exponential equation for 74 out of 89 comparisons. Conclusions: Cigarette smoking, like other forms of drug dependence, is characterized by rapid loss of subjective value for delayed outcomes, particularly for the drug of dependence. Never- and ex-smokers could discount similarly because cigarette smoking is associated with a reversible increase in discounting or due to selection bias.", "Behavioral economics examines conditions that influence the consumption of commodities and provides several concepts that may be instrumental in understanding drug dependence. One such concept of significance is that of how delayed reinforcers are discounted by drug dependent individuals. Discounting of delayed reinforcers refers to the observation that the value of a delayed reinforcer is discounted (reduced in value or considered to be worth less) compared to the value of an immediate reinforcer. This paper examines how delay discounting may provide an explanation of both impulsivity and loss of control exhibited by the drug dependent. In so doing, the paper reviews economic models of delay discounting, the empirical literature on the discounting of delayed reinforcers by the drug dependent and the scientific literature on personality assessments of impulsivity among drug-dependent individuals. 
Finally, future directions for the study of discounting are discussed, including the study of loss of control and loss aversion among drug-dependent individuals, the relationship of discounting to both the behavioral economic measure of elasticity as well as to outcomes observed in clinical settings, and the relationship between impulsivity and psychological disorders other than drug dependence.", "Research and clinical expertise indicates that impulsivity is an underlying feature of pathological gambling. This study examined the extent to which impulsive behavior, defined by the rate of discounting delayed monetary rewards, varies with pathological gambling severity, assessed by the South Oaks Gambling Screen (SOGS). Sixty-two pathological gamblers completed a delay discounting task, the SOGS, the Eysenck impulsivity scale, the Addiction Severity Index (ASI), and questions about gambling and substance use at intake to outpatient treatment for pathological gambling. In the delay discounting task, participants chose between a large delayed reward (US @math 1-$999) across a range of delays (6h to 25 years). The rate at which the delayed reward was discounted (k value) was derived for each participant and linear regression was used to identify the variables that predicted k values. Age, gender, years of education, substance abuse treatment history, and cigarette smoking history failed to significantly predict k values. Scores on the Eysenck impulsivity scale and the SOGS both accounted for a significant proportion of the variance in k values. The predictive value of the SOGS was 1.4 times that of the Eysenck scale. These results indicate that of the measures tested, gambling severity was the best single predictor of impulsive behavior in a delay discounting task in this sample of pathological gamblers.", "Laboratory and field studies of time preference find that discount rates are much greater in the short run than in the long run. 
Hyperbolic discount functions capture this property. This paper presents simulations of the savings and asset allocation choices of households with hyperbolic preferences. The behavior of the hyperbolic households is compared to the behavior of exponential households. The hyperbolic households borrow much more frequently in the revolving credit market. The hyperbolic households exhibit greater consumption income comovement and experience a greater drop in consumption around retirement. The hyperbolic simulations match observed consumption and balance sheet data much better than the exponential simulations.", "Aims To investigate whether adolescent heavy drinkers exhibit biased cognitive processing of alcohol-related cues and impulsive decision making. Design A between-subjects design was employed. Setting Classrooms in a single sixth-form college in Merseyside, UK. Participants Ninety adolescent students (mean age 16.83 years), of whom 38 were identified as heavy drinkers and 36 were identified as light drinkers, based on a tertile split of their weekly alcohol consumption. Measurements Participants provided information about alcohol consumption before completing measures of alcohol craving, delay discounting and an ‘alcohol Stroop’ in which they were required to name the colour in which alcohol-related and matched control words were printed. Findings Compared to light drinkers, heavy drinkers showed more pronounced discounting of delayed hypothetical monetary and alcohol rewards, which is indicative of a more short-term focus in decision making in heavy drinkers. Heavy drinkers were also slower to colour-name alcohol-related words, which indicates an attentional bias for alcohol-related cues. In all participants, measures of delay discounting and attentional bias were correlated moderately with each other, and also with the level of alcohol consumption and with alcohol craving. 
Conclusions In adolescents, heavy alcohol use is associated with biased attentional processing of alcohol-related cues and a shorter-term focus in decision making.", "To examine the relation between Internet addiction and delay discounting, we gave 276 college students a survey designed to measure Internet addiction and a paper-based delay-discounting task. In our larger sample, we identified 14 students who met the criteria for Internet addiction; we also identified 14 matched controls who were similar to the Internet-addicted students in terms of gender, age, and grade point average. We then compared the extent to which these groups discounted delayed rewards. We found that Internet addicts discounted delayed rewards faster than non-Internet addicts. These results suggest that Internet addicts may be more impulsive than non-Internet addicts and that Internet addiction may share behavioral characteristics with other types of addiction." ] }
1703.07518
2949709872
Social media expose millions of users every day to information campaigns --- some emerging organically from grassroots activity, others sustained by advertising or other coordinated efforts. These campaigns contribute to the shaping of collective opinions. While most information campaigns are benign, some may be deployed for nefarious purposes. It is therefore important to be able to detect whether a meme is being artificially promoted at the very moment it becomes wildly popular. This problem has important social implications and poses numerous technical challenges. As a first step, here we focus on discriminating between trending memes that are either organic or promoted by means of advertisement. The classification is not trivial: ads cause bursts of attention that can be easily mistaken for those of organic trends. We designed a machine learning framework to classify memes that have been labeled as trending on Twitter. After trending, we can rely on a large volume of activity data. Early detection, occurring immediately at trending time, is a more challenging problem due to the minimal volume of activity data that is available prior to trending. Our supervised learning framework exploits hundreds of time-varying features to capture changing network and diffusion patterns, content and sentiment information, timing signals, and user meta-data. We explore different methods for encoding feature time series. Using millions of tweets containing trending hashtags, we achieve 75% AUC score for early detection, increasing to above 95% after trending. We evaluate the robustness of the algorithms by introducing random temporal shifts on the trend time series. Feature selection analysis reveals that content cues provide consistently useful signals; user features are more informative for early detection, while network and timing features are more helpful once more data is available.
Recent work on social media provides a better understanding of human communication dynamics such as collective attention and information diffusion @cite_68 , the emergence of trends @cite_16 @cite_9 , social influence and political mobilization @cite_80 @cite_18 @cite_58 @cite_66 .
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_58", "@cite_16", "@cite_80", "@cite_68", "@cite_66" ], "mid": [ "2028993035", "", "2165066692", "2127492100", "2033198212", "2072606289", "1971204783" ], "abstract": [ "Social movements rely in large measure on networked communication technologies to organize and disseminate information relating to the movements’ objectives. In this work we seek to understand how the goals and needs of a protest movement are reflected in the geographic patterns of its communication network, and how these patterns differ from those of stable political communication. To this end, we examine an online communication network reconstructed from over 600,000 tweets from a thirty-six week period covering the birth and maturation of the American anticapitalist movement, Occupy Wall Street. We find that, compared to a network of stable domestic political communication, the Occupy Wall Street network exhibits higher levels of locality and a hub and spoke structure, in which the majority of non-local attention is allocated to high-profile locations such as New York, California, and Washington D.C. Moreover, we observe that information flows across state boundaries are more likely to contain framing language and references to the media, while communication among individuals in the same state is more likely to reference protest action and specific places and times. Tying these results to social movement theory, we propose that these features reflect the movement’s efforts to mobilize resources at the local level and to develop narrative frames that reinforce collective purpose at the national level.", "", "We examine the temporal evolution of digital communication activity relating to the American anti-capitalist movement Occupy Wall Street. 
Using a high-volume sample from the microblogging site Twitter, we investigate changes in Occupy participant engagement, interests, and social connectivity over a fifteen month period starting three months prior to the movement's first protest action. The results of this analysis indicate that, on Twitter, the Occupy movement tended to elicit participation from a set of highly interconnected users with pre-existing interests in domestic politics and foreign social movements. These users, while highly vocal in the months immediately following the birth of the movement, appear to have lost interest in Occupy related communication over the remainder of the study period.", "Tracking new topics, ideas, and \"memes\" across the Web has been an issue of considerable interest. Recent work has developed methods for tracking topic shifts over long time scales, as well as abrupt spikes in the appearance of particular named entities. However, these approaches are less well suited to the identification of content that spreads widely and then fades over time scales on the order of days - the time scale at which we perceive news and events. We develop a framework for tracking short, distinctive phrases that travel relatively intact through on-line text; developing scalable algorithms for clustering textual variants of such phrases, we identify a broad class of memes that exhibit wide spread and rich variation on a daily basis. As our principal domain of study, we show how such a meme-tracking approach can provide a coherent representation of the news cycle - the daily rhythms in the news media that have long been the subject of qualitative interpretation but have never been captured accurately enough to permit actual quantitative analysis. We tracked 1.6 million mainstream media sites and blogs over a period of three months with the total of 90 million articles and we find a set of novel and persistent temporal patterns in the news cycle. 
In particular, we observe a typical lag of 2.5 hours between the peaks of attention to a phrase in the news media and in blogs respectively, with divergent behavior around the overall peak and a \"heartbeat\"-like pattern in the handoff between news and blogs. We also develop and analyze a mathematical model for the kinds of temporal variation that the system exhibits.", "Online social networks are everywhere. They must be influencing the way society is developing, but hard evidence is scarce. For instance, the relative effectiveness of online friendships and face-to-face friendships as drivers of social change is not known. In what may be the largest experiment ever conducted with human subjects, James Fowler and colleagues randomly assigned messages to 61 million Facebook users on Election Day in the United States in 2010, and tracked their behaviour both online and offline, using publicly available records. The results show that the messages influenced the political communication, information-seeking and voting behaviour of millions of people. Social messages had more impact than informational messages and 'weak ties' were much less likely than 'strong ties' to spread behaviour via the social network. Thus online mobilization works primarily through strong-tie networks that may exist offline but have an online representation.", "The wide adoption of social media has increased the competition among ideas for our finite attention. We employ a parsimonious agent-based model to study whether such a competition may affect the popularity of different memes, the diversity of information we are exposed to, and the fading of our collective interests for specific topics. Agents share messages on a social network but can only pay attention to a portion of the information they receive. In the emerging dynamics of information diffusion, a few memes go viral while most do not. 
The predictions of our model are consistent with empirical data from Twitter, a popular microblogging platform. Surprisingly, we can explain the massive heterogeneity in the popularity and persistence of memes as deriving from a combination of the competition for our limited attention and the structure of the social network, without the need to assume different intrinsic values among ideas.", "Social media represent powerful tools of mass communication and information diffusion. They played a pivotal role during recent social uprisings and political mobilizations across the world. Here we present a study of the Gezi Park movement in Turkey through the lens of Twitter. We analyze over 2.3 million tweets produced during the 25 days of protest occurred between May and June 2013. We first characterize the spatio-temporal nature of the conversation about the Gezi Park demonstrations, showing that similarity in trends of discussion mirrors geographic cues. We then describe the characteristics of the users involved in this conversation and what roles they played. We study how roles and individual influence evolved during the period of the upheaval. This analysis reveals that the conversation becomes more democratic as events unfold, with a redistribution of influence over time in the user population. We conclude by observing how the online and offline worlds are tightly intertwined, showing that exogenous events, such as political speeches or police actions, affect social media conversations and trigger changes in individual behavior." ] }