| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1511.02674 | 2950683323 | The state-of-the-art in semantic segmentation is currently represented by fully convolutional networks (FCNs). However, FCNs use large receptive fields and many pooling layers, both of which cause blurring and low spatial resolution in the deep layers. As a result FCNs tend to produce segmentations that are poorly localized around object boundaries. Prior work has attempted to address this issue in post-processing steps, for example using a color-based CRF on top of the FCN predictions. However, these approaches require additional parameters and low-level features that are difficult to tune and integrate into the original network architecture. Additionally, most CRFs use color-based pixel affinities, which are not well suited for semantic segmentation and lead to spatially disjoint predictions. To overcome these problems, we introduce a Boundary Neural Field (BNF), which is a global energy model integrating FCN predictions with boundary cues. The boundary information is used to enhance semantic segment coherence and to improve object localization. Specifically, we first show that the convolutional filters of semantic FCNs provide good features for boundary detection. We then employ the predicted boundaries to define pairwise potentials in our energy. Finally, we show that our energy decomposes semantic segmentation into multiple binary problems, which can be relaxed for efficient global optimization. We report extensive experiments demonstrating that minimization of our global boundary-based energy yields results superior to prior globalization methods, both quantitatively as well as qualitatively. | Spectral methods comprise one of the most prominent categories for boundary detection. In a typical spectral framework, one formulates a generalized eigenvalue system to solve a low-level pixel grouping problem. The resulting eigenvectors are then used to predict the boundaries. 
Some of the most notable approaches in this genre are MCG @cite_6, gPb @cite_2, PMI @cite_34, and Normalized Cuts @cite_21. A weakness of spectral approaches is that they tend to be slow, as they perform global inference over the entire image. | {
"cite_N": [
"@cite_34",
"@cite_21",
"@cite_6",
"@cite_2"
],
"mid": [
"105270443",
"2121947440",
"1991367009",
"2110158442"
],
"abstract": [
"Detecting boundaries between semantically meaningful objects in visual scenes is an important component of many vision algorithms. In this paper, we propose a novel method for detecting such boundaries based on a simple underlying principle: pixels belonging to the same object exhibit higher statistical dependencies than pixels belonging to different objects. We show how to derive an affinity measure based on this principle using pointwise mutual information, and we show that this measure is indeed a good predictor of whether or not two pixels reside on the same object. Using this affinity with spectral clustering, we can find object boundaries in the image – achieving state-of-the-art results on the BSDS500 dataset. Our method produces pixel-level accurate boundaries while requiring minimal feature engineering.",
"We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.",
"This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications."
]
} |
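The generalized eigenvalue system behind Normalized Cuts, mentioned in the row above, can be sketched as a minimal dense-matrix bipartition: solve `(D - W) y = λ D y` and threshold the second-smallest eigenvector. This is an illustrative toy, not the multiscale machinery of MCG or gPb:

```python
import numpy as np
from scipy.linalg import eigh

def normalized_cut_bipartition(W):
    """Bipartition a graph with affinity matrix W via the normalized-cut
    relaxation: solve the generalized eigenproblem (D - W) y = lambda * D y
    and threshold the second-smallest ("Fiedler") eigenvector."""
    d = W.sum(axis=1)
    D = np.diag(d)
    L = D - W                      # unnormalized graph Laplacian
    vals, vecs = eigh(L, D)        # eigenvalues returned in ascending order
    fiedler = vecs[:, 1]           # second-smallest eigenvector
    return fiedler >= 0            # boolean cluster assignment

# Toy affinity graph: two dense 3-node cliques joined by one weak edge.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.05           # weak bridge between the cliques
labels = normalized_cut_bipartition(W)
```

The weak bridge makes the normalized cut fall between the two cliques, so the sign pattern of the Fiedler vector splits them, regardless of its overall sign ambiguity.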
1511.02674 | 2950683323 | The state-of-the-art in semantic segmentation is currently represented by fully convolutional networks (FCNs). However, FCNs use large receptive fields and many pooling layers, both of which cause blurring and low spatial resolution in the deep layers. As a result FCNs tend to produce segmentations that are poorly localized around object boundaries. Prior work has attempted to address this issue in post-processing steps, for example using a color-based CRF on top of the FCN predictions. However, these approaches require additional parameters and low-level features that are difficult to tune and integrate into the original network architecture. Additionally, most CRFs use color-based pixel affinities, which are not well suited for semantic segmentation and lead to spatially disjoint predictions. To overcome these problems, we introduce a Boundary Neural Field (BNF), which is a global energy model integrating FCN predictions with boundary cues. The boundary information is used to enhance semantic segment coherence and to improve object localization. Specifically, we first show that the convolutional filters of semantic FCNs provide good features for boundary detection. We then employ the predicted boundaries to define pairwise potentials in our energy. Finally, we show that our energy decomposes semantic segmentation into multiple binary problems, which can be relaxed for efficient global optimization. We report extensive experiments demonstrating that minimization of our global boundary-based energy yields results superior to prior globalization methods, both quantitatively as well as qualitatively. | In comparison to prior deep learning approaches, our method offers several contributions. First, we exploit the inherent relationship between boundary detection and semantic segmentation to predict semantic boundaries. 
Specifically, we show that even though the semantic FCN has not been explicitly trained to predict boundaries, the convolutional filters inside the FCN provide good features for boundary detection. Additionally, unlike DeepEdge @cite_33 and @cite_20, our method does not require a pre-processing step to select candidate contour points, as we predict boundaries on all pixels in the image. We demonstrate that our approach allows us to achieve state-of-the-art boundary detection results according to both F-score and Average Precision metrics. Additionally, due to the semantic nature of our boundaries, we can successfully use them as pairwise potentials for semantic segmentation in order to improve object localization and recover fine structural details, typically lost by pure FCN-based approaches. | {
"cite_N": [
"@cite_33",
"@cite_20"
],
"mid": [
"2949192504",
"1539790486"
],
"abstract": [
"Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then use them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection. We achieve this goal by means of a multi-scale deep network that consists of five convolutional layers and a bifurcated fully-connected sub-network. The section from the input layer to the fifth convolutional layer is fixed and directly lifted from a pre-trained network optimized over a large-scale object classification task. This section of the network is applied to four different scales of the image input. These four parallel and identical streams are then attached to a bifurcated sub-network consisting of two independently-trained branches. One branch learns to predict the contour likelihood (with a classification objective) whereas the other branch is trained to learn the fraction of human labelers agreeing about the contour presence at a given point (with a regression criterion). We show that without any feature engineering our multi-scale deep learning approach achieves state-of-the-art results in contour detection.",
"Most of the current boundary detection systems rely exclusively on low-level features, such as color and texture. However, perception studies suggest that humans employ object-level reasoning when judging if a particular pixel is a boundary. Inspired by this observation, in this work we show how to predict boundaries by exploiting object-level features from a pretrained object-classification network. Our method can be viewed as a \"High-for-Low\" approach where high-level object features inform the low-level boundary detection process. Our model achieves state-of-the-art performance on an established boundary detection benchmark and it is efficient to run. Additionally, we show that due to the semantic nature of our boundaries we can use them to aid a number of high-level vision tasks. We demonstrate that using our boundaries we improve the performance of state-of-the-art methods on the problems of semantic boundary labeling, semantic segmentation and object proposal generation. We can view this process as a \"Low-for-High\" scheme, where low-level boundaries aid high-level vision tasks. Thus, our contributions include a boundary detection system that is accurate, efficient, generalizes well to multiple datasets, and is also shown to improve existing state-of-the-art high-level vision methods on three distinct tasks."
]
} |
1511.02674 | 2950683323 | The state-of-the-art in semantic segmentation is currently represented by fully convolutional networks (FCNs). However, FCNs use large receptive fields and many pooling layers, both of which cause blurring and low spatial resolution in the deep layers. As a result FCNs tend to produce segmentations that are poorly localized around object boundaries. Prior work has attempted to address this issue in post-processing steps, for example using a color-based CRF on top of the FCN predictions. However, these approaches require additional parameters and low-level features that are difficult to tune and integrate into the original network architecture. Additionally, most CRFs use color-based pixel affinities, which are not well suited for semantic segmentation and lead to spatially disjoint predictions. To overcome these problems, we introduce a Boundary Neural Field (BNF), which is a global energy model integrating FCN predictions with boundary cues. The boundary information is used to enhance semantic segment coherence and to improve object localization. Specifically, we first show that the convolutional filters of semantic FCNs provide good features for boundary detection. We then employ the predicted boundaries to define pairwise potentials in our energy. Finally, we show that our energy decomposes semantic segmentation into multiple binary problems, which can be relaxed for efficient global optimization. We report extensive experiments demonstrating that minimization of our global boundary-based energy yields results superior to prior globalization methods, both quantitatively as well as qualitatively. | The primary weakness of the above methods is that they are unable to recover from errors made by the segmentation algorithm. Several recent papers @cite_25 @cite_15 address this issue by proposing to use deep per-pixel CNN features and then classify each pixel as belonging to a certain class. 
While these approaches partially address the incorrect segmentation problem, they perform predictions independently on each pixel. This leads to extremely local predictions, where the relationships between pixels are not exploited in any way, and thus the resulting segmentations may be spatially disjoint. | {
"cite_N": [
"@cite_15",
"@cite_25"
],
"mid": [
"2022508996",
"1948751323"
],
"abstract": [
"Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.",
"Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline."
]
} |
1511.02674 | 2950683323 | The state-of-the-art in semantic segmentation is currently represented by fully convolutional networks (FCNs). However, FCNs use large receptive fields and many pooling layers, both of which cause blurring and low spatial resolution in the deep layers. As a result FCNs tend to produce segmentations that are poorly localized around object boundaries. Prior work has attempted to address this issue in post-processing steps, for example using a color-based CRF on top of the FCN predictions. However, these approaches require additional parameters and low-level features that are difficult to tune and integrate into the original network architecture. Additionally, most CRFs use color-based pixel affinities, which are not well suited for semantic segmentation and lead to spatially disjoint predictions. To overcome these problems, we introduce a Boundary Neural Field (BNF), which is a global energy model integrating FCN predictions with boundary cues. The boundary information is used to enhance semantic segment coherence and to improve object localization. Specifically, we first show that the convolutional filters of semantic FCNs provide good features for boundary detection. We then employ the predicted boundaries to define pairwise potentials in our energy. Finally, we show that our energy decomposes semantic segmentation into multiple binary problems, which can be relaxed for efficient global optimization. We report extensive experiments demonstrating that minimization of our global boundary-based energy yields results superior to prior globalization methods, both quantitatively as well as qualitatively. | The third and final group of semantic segmentation methods can be viewed as end-to-end schemes where segmentation maps are predicted directly from raw pixels without any intermediate steps. One of the earliest examples of such methods is the FCN introduced in @cite_7.
This approach gave rise to a number of subsequent related approaches which have improved various aspects of the original semantic segmentation @cite_29 @cite_5 @cite_31 @cite_32 @cite_27. There have also been attempts at integrating the CRF mechanism into the network architecture @cite_29 @cite_5. Finally, it has been shown that semantic segmentation can also be improved using additional training data in the form of bounding boxes @cite_31. | {
"cite_N": [
"@cite_7",
"@cite_29",
"@cite_32",
"@cite_27",
"@cite_5",
"@cite_31"
],
"mid": [
"1903029394",
"1923697677",
"2949847866",
"",
"",
"2949086864"
],
"abstract": [
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches posing semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, labels associated with an image are identified by classification network, and binary segmentation is subsequently performed for each identified label in segmentation network. The decoupled architecture enables us to learn classification and segmentation networks separately based on the training data with image-level and pixel-wise class labels, respectively. It facilitates to reduce search space for segmentation effectively by exploiting class-specific activation maps obtained from bridging layers. Our algorithm shows outstanding performance compared to other semi-supervised approaches even with much less training images with strong annotations in PASCAL VOC dataset.",
"",
"",
"Recent leading approaches to semantic segmentation rely on deep convolutional networks trained with human-annotated, pixel-level segmentation masks. Such pixel-accurate supervision demands expensive labeling effort and limits the performance of deep networks that usually benefit from more training data. In this paper, we propose a method that achieves competitive accuracy but only requires easily obtained bounding box annotations. The basic idea is to iterate between automatically generating region proposals and training convolutional networks. These two steps gradually recover segmentation masks for improving the networks, and vice versa. Our method, called BoxSup, produces competitive results supervised by boxes only, on par with strong baselines fully supervised by masks under the same setting. By leveraging a large amount of bounding boxes, BoxSup further unleashes the power of deep convolutional networks and yields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT."
]
} |
1511.02872 | 2158766051 | Measuring the naturalness of images is important to generate realistic images or to detect unnatural regions in images. Additionally, a method to measure naturalness can be complementary to Convolutional Neural Network (CNN) based features, which are known to be insensitive to the naturalness of images. However, most probabilistic image models have insufficient capability of modeling the complex and abstract naturalness that we feel because they are built directly on raw image pixels. In this work, we assume that naturalness can be measured by the predictability on high-level features during eye movement. Based on this assumption, we propose a novel method to evaluate the naturalness by building a variant of Recurrent Neural Network Language Models on pre-trained CNN representations. Our method is applied to two tasks, demonstrating that 1) using our method as a regularizer enables us to generate more understandable images from image features than existing approaches, and 2) unnaturalness maps produced by our method achieve state-of-the-art eye fixation prediction performance on two well-studied datasets. | Theis and Bethge @cite_47 proposed a scalable image model using multi-dimensional LSTMs @cite_24, which predict pixel values at given locations from the preceding pixels. Like them, we also use an RNN for prediction. However, to capture more high-level information, we train the RNN on CNN representations, not on raw pixels. | {
"cite_N": [
"@cite_24",
"@cite_47"
],
"mid": [
"2170942820",
"2097039814"
],
"abstract": [
"Offline handwriting recognition—the automatic transcription of images of handwritten text—is a challenging task that combines computer vision with sequence learning. In most systems the two elements are handled separately, with sophisticated preprocessing techniques used to extract the image features and sequential models such as HMMs used to provide the transcriptions. By combining two recent innovations in neural networks—multidimensional recurrent neural networks and connectionist temporal classification—this paper introduces a globally trained offline handwriting recogniser that takes raw pixel data as input. Unlike competing systems, it does not require any alphabet specific preprocessing, and can therefore be used unchanged for any language. Evidence of its generality and power is provided by data from a recent international Arabic recognition competition, where it outperformed all entries (91.4 accuracy compared to 87.2 for the competition winner) despite the fact that neither author understands a word of Arabic.",
"Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multidimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting."
]
} |
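The idea in the row above, scoring each step of a feature sequence by how predictable it is from its predecessors, can be illustrated with a deliberately simplified stand-in: a linear least-squares next-step predictor in place of the RNN, with squared prediction error as the surprisal signal. Everything here (the linear model, the synthetic trajectories) is an illustrative assumption, not the paper's RNNLM on CNN features:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ar_predictor(feats):
    """Least-squares linear map from feature f_t to f_{t+1}: a hypothetical
    stand-in for an RNN next-step predictor."""
    X, Y = feats[:-1], feats[1:]
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return A

def surprisal(feats, A):
    """Squared next-step prediction error per step; a high value marks an
    unpredictable ("unnatural") event in the sequence."""
    pred = feats[:-1] @ A
    return ((feats[1:] - pred) ** 2).sum(axis=1)

# A smooth ("natural") feature trajectory vs. one with an abrupt jump.
t = np.linspace(0, 1, 50)[:, None]
smooth = np.hstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
A = fit_ar_predictor(smooth)

jumpy = smooth.copy()
jumpy[25] += 5.0                   # inject an unpredictable event
s_smooth = surprisal(smooth, A)
s_jumpy = surprisal(jumpy, A)
```

On the smooth trajectory the linear predictor is near-exact, so surprisal stays near zero; the injected jump produces a sharp surprisal peak at the perturbed step, which is the behavior the unnaturalness maps rely on.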
1511.02872 | 2158766051 | Measuring the naturalness of images is important to generate realistic images or to detect unnatural regions in images. Additionally, a method to measure naturalness can be complementary to Convolutional Neural Network (CNN) based features, which are known to be insensitive to the naturalness of images. However, most probabilistic image models have insufficient capability of modeling the complex and abstract naturalness that we feel because they are built directly on raw image pixels. In this work, we assume that naturalness can be measured by the predictability on high-level features during eye movement. Based on this assumption, we propose a novel method to evaluate the naturalness by building a variant of Recurrent Neural Network Language Models on pre-trained CNN representations. Our method is applied to two tasks, demonstrating that 1) using our method as a regularizer enables us to generate more understandable images from image features than existing approaches, and 2) unnaturalness maps produced by our method achieve state-of-the-art eye fixation prediction performance on two well-studied datasets. | Several vision papers explicitly use LM. Wu al @cite_42 and Tirilly al @cite_10 trained LMs on quantized local descriptors or Visual Words . Although their approach is similar to ours, they used LMs for classification, not for measuring naturalness. Ranzato al @cite_3 trained a language model on a small region of videos which predicts the next time frame to learn spatial-temporal video representations. | {
"cite_N": [
"@cite_42",
"@cite_10",
"@cite_3"
],
"mid": [
"",
"2068396506",
"1568514080"
],
"abstract": [
"",
"In this paper, we propose two ways of improving image classification based on bag-of-words representation [25]. Two shortcomings of this representation are the loss of the spatial information of visual words and the presence of noisy visual words due to the coarseness of the vocabulary building process. On the one hand, we propose a new representation of images that goes further in the analogy with textual data: visual sentences, that allows us to \"read\" visual words in a certain order, as in the case of text. We can therefore consider simple spatial relations between words. We also present a new image classification scheme that exploits these relations. It is based on the use of language models, a very popular tool from speech and text analysis communities. On the other hand, we propose new techniques to eliminate useless words, one based on geometric properties of the keypoints, the other on the use of probabilistic Latent Semantic Analysis (pLSA). Experiments show that our techniques can significantly improve image classification, compared to a classical Support Vector Machine-based classification.",
"We propose a strong baseline model for unsupervised feature learning using video data. By learning to predict missing frames or extrapolate future frames from an input video sequence, the model discovers both spatial and temporal correlations which are useful to represent complex deformations and motion patterns. The models we propose are largely borrowed from the language modeling literature, and adapted to the vision domain by quantizing the space of image patches into a large dictionary. We demonstrate the approach on both a filling and a generation task. For the first time, we show that, after training on natural videos, such a model can predict non-trivial motions over short video sequences."
]
} |
1511.02872 | 2158766051 | Measuring the naturalness of images is important to generate realistic images or to detect unnatural regions in images. Additionally, a method to measure naturalness can be complementary to Convolutional Neural Network (CNN) based features, which are known to be insensitive to the naturalness of images. However, most probabilistic image models have insufficient capability of modeling the complex and abstract naturalness that we feel because they are built directly on raw image pixels. In this work, we assume that naturalness can be measured by the predictability on high-level features during eye movement. Based on this assumption, we propose a novel method to evaluate the naturalness by building a variant of Recurrent Neural Network Language Models on pre-trained CNN representations. Our method is applied to two tasks, demonstrating that 1) using our method as a regularizer enables us to generate more understandable images from image features than existing approaches, and 2) unnaturalness maps produced by our method achieve state-of-the-art eye fixation prediction performance on two well-studied datasets. | Mahendran and Vedaldi @cite_17 showed that an image can be reconstructed by gradient descent if the representation is extracted through differentiable functions. They also demonstrated that a natural image prior" is necessary to reconstruct interpretable images. They regularized reconstructed images to eliminate spikes in raw pixels and to be within the natural RGB range. Simonyan al @cite_31 adopted a similar approach and used @math regularization on images. | {
"cite_N": [
"@cite_31",
"@cite_17"
],
"mid": [
"2962851944",
"1915485278"
],
"abstract": [
"This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].",
"Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance."
]
} |
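The gradient-descent inversion described in the related-work column above can be sketched with a toy example. The random linear map `phi` standing in for a pre-trained CNN, and the plain L2 penalty standing in for the natural-image prior, are both assumptions of this sketch, not the cited method itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "feature extractor": phi(x) = A @ x.
# (A linear map stands in for a pre-trained CNN -- an assumption of this sketch.)
n_pix, n_feat = 64, 16
A = rng.normal(size=(n_feat, n_pix))

def phi(x):
    return A @ x

x_true = rng.normal(size=n_pix)   # "image" we pretend not to know
target = phi(x_true)              # its feature code

# Reconstruct x by gradient descent on ||phi(x) - target||^2 + lam * ||x||^2:
# a data term plus a simple regularizer in place of a natural-image prior.
lam, lr = 1e-3, 1e-3
x = np.zeros(n_pix)
for _ in range(2000):
    grad = 2 * A.T @ (phi(x) - target) + 2 * lam * x
    x -= lr * grad

res = float(np.linalg.norm(phi(x) - target))
print(res)  # small residual: the reconstruction matches the target features
```

Because `phi` is differentiable end-to-end, the same loop applies unchanged when `phi` is a deep network and the gradient comes from backpropagation.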
1511.02872 | 2158766051 | Measuring the naturalness of images is important to generate realistic images or to detect unnatural regions in images. Additionally, a method to measure naturalness can be complementary to Convolutional Neural Network (CNN) based features, which are known to be insensitive to the naturalness of images. However, most probabilistic image models have insufficient capability of modeling the complex and abstract naturalness that we feel because they are built directly on raw image pixels. In this work, we assume that naturalness can be measured by the predictability on high-level features during eye movement. Based on this assumption, we propose a novel method to evaluate the naturalness by building a variant of Recurrent Neural Network Language Models on pre-trained CNN representations. Our method is applied to two tasks, demonstrating that 1) using our method as a regularizer enables us to generate more understandable images from image features than existing approaches, and 2) unnaturalness maps produced by our method achieve state-of-the-art eye fixation prediction performance on two well-studied datasets. | Our reconstruction method is based on the work by Mahendran and Vedaldi @cite_17 . Instead of using a hand-crafted natural image prior, we use RNNLM trained on natural images as a regularizer. | {
"cite_N": [
"@cite_17"
],
"mid": [
"1915485278"
],
"abstract": [
"Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance."
]
} |
1511.02872 | 2158766051 | Measuring the naturalness of images is important to generate realistic images or to detect unnatural regions in images. Additionally, a method to measure naturalness can be complementary to Convolutional Neural Network (CNN) based features, which are known to be insensitive to the naturalness of images. However, most probabilistic image models have insufficient capability of modeling the complex and abstract naturalness that we feel because they are built directly on raw image pixels. In this work, we assume that naturalness can be measured by the predictability on high-level features during eye movement. Based on this assumption, we propose a novel method to evaluate the naturalness by building a variant of Recurrent Neural Network Language Models on pre-trained CNN representations. Our method is applied to two tasks, demonstrating that 1) using our method as a regularizer enables us to generate more understandable images from image features than existing approaches, and 2) unnaturalness maps produced by our method achieve state-of-the-art eye fixation prediction performance on two well-studied datasets. | Modeling visual attention is fundamentally important for efficiently processing massive real-world data. In particular, the task of predicting human eye fixation points has been examined extensively @cite_50 . | {
"cite_N": [
"@cite_50"
],
"mid": [
"2164084182"
],
"abstract": [
"Modeling visual attention-particularly stimulus-driven, saliency-based attention-has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future."
]
} |
1511.02872 | 2158766051 | Measuring the naturalness of images is important to generate realistic images or to detect unnatural regions in images. Additionally, a method to measure naturalness can be complementary to Convolutional Neural Network (CNN) based features, which are known to be insensitive to the naturalness of images. However, most probabilistic image models have insufficient capability of modeling the complex and abstract naturalness that we feel because they are built directly on raw image pixels. In this work, we assume that naturalness can be measured by the predictability on high-level features during eye movement. Based on this assumption, we propose a novel method to evaluate the naturalness by building a variant of Recurrent Neural Network Language Models on pre-trained CNN representations. Our method is applied to two tasks, demonstrating that 1) using our method as a regularizer enables us to generate more understandable images from image features than existing approaches, and 2) unnaturalness maps produced by our method achieve state-of-the-art eye fixation prediction performance on two well-studied datasets. | Bruce and Tsotsos @cite_46 demonstrated that eye fixation points can be predicted using Shannon's "self-information". This information-theoretic view has been adopted for many research efforts @cite_50 . Our method also uses a kind of self-information. | {
"cite_N": [
"@cite_46",
"@cite_50"
],
"mid": [
"2139047169",
"2164084182"
],
"abstract": [
"A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency based models. Results on natural images are compared with experimental eye tracking data revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts.",
"Modeling visual attention-particularly stimulus-driven, saliency-based attention-has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future."
]
} |
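The self-information view of saliency in the row above can be sketched with a minimal example: estimate p over local intensities with a histogram, then score each pixel by -log p, so rare values come out salient. The toy image and histogram binning are assumptions of the sketch, not the cited model's neural circuit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "image": mostly mid-grey background with a small bright patch.
img = rng.normal(loc=0.5, scale=0.02, size=(32, 32))
img[10:14, 10:14] = 0.95  # rare intensity => should be salient

# Estimate p(intensity) with a histogram over the whole image, then score
# each pixel by Shannon self-information -log p(intensity at that pixel).
counts, edges = np.histogram(img, bins=32, range=(0.0, 1.0))
p = counts / counts.sum()
bin_idx = np.clip(np.digitize(img, edges) - 1, 0, 31)
saliency = -np.log(p[bin_idx] + 1e-12)

# The bright (rare) patch carries more self-information than the background.
print(saliency[11, 11] > saliency[0, 0])
```

Replacing the raw-intensity histogram with a density over learned local features recovers the spirit of the cited model: saliency as surprise under the image's own statistics.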
1511.02680 | 2103328396 | We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixel-wise class labels with a measure of model uncertainty. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. In addition, we show that modelling uncertainty improves segmentation performance by 2-3 across a number of state of the art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation. We also observe a significant improvement in performance for smaller datasets where modelling uncertainty is more effective. We benchmark Bayesian SegNet on the indoor SUN Scene Understanding and outdoor CamVid driving scenes datasets. | Semantic pixel labelling was initially approached with TextonBoost @cite_22 , TextonForest @cite_1 and Random Forest Based Classifiers @cite_18 . We are now seeing the emergence of deep learning architectures for pixel-wise segmentation, following its success in object recognition for a whole image @cite_0 . Architectures such as SegNet @cite_15 , Fully Convolutional Networks (FCN) @cite_21 and Dilation Network @cite_19 have been proposed, which we refer to as the core segmentation engine. FCN is trained using stochastic gradient descent with a stage-wise training scheme. SegNet was the first architecture proposed that can be trained end-to-end in one step, due to its lower parameterisation. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_21",
"@cite_1",
"@cite_0",
"@cite_19",
"@cite_15"
],
"mid": [
"2060280062",
"2054279472",
"2952632681",
"2100588357",
"",
"2286929393",
"2963881378"
],
"abstract": [
"We propose a new method to quickly and accurately predict human pose---the 3D positions of body joints---from a single depth image, without depending on information from preceding frames. Our approach is strongly rooted in current object recognition strategies. By designing an intermediate representation in terms of body parts, the difficult pose estimation problem is transformed into a simpler per-pixel classification problem, for which efficient machine learning techniques exist. By using computer graphics to synthesize a very large dataset of training image pairs, one can train a classifier that estimates body part labels from test images invariant to pose, body shape, clothing, and other irrelevances. Finally, we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs in under 5ms on the Xbox 360. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state-of-the-art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching.",
"This paper details a new approach for learning a discriminative model of object classes, incorporating texture, layout, and context information efficiently. The learned model is used for automatic visual understanding and semantic segmentation of photographs. Our discriminative model exploits texture-layout filters, novel features based on textons, which jointly model patterns of texture and their spatial layout. Unary classification and feature selection is achieved using shared boosting to give an efficient classifier which can be applied to a large number of classes. Accurate image segmentation is achieved by incorporating the unary classifier in a conditional random field, which (i) captures the spatial interactions between class labels of neighboring pixels, and (ii) improves the segmentation of specific object instances. Efficient training of the model on large datasets is achieved by exploiting both random feature selection and piecewise training methods. High classification and segmentation accuracy is demonstrated on four varied databases: (i) the MSRC 21-class database containing photographs of real objects viewed under general lighting conditions, poses and viewpoints, (ii) the 7-class Corel subset and (iii) the 7-class Sowerby database used in (Proceeding of IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 695---702, June 2004), and (iv) a set of video sequences of television shows. The proposed algorithm gives competitive and visually pleasing results for objects that are highly textured (grass, trees, etc.), highly structured (cars, faces, bicycles, airplanes, etc.), and even articulated (body, cow, etc.).",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"We propose semantic texton forests, efficient and powerful new low-level features. These are ensembles of decision trees that act directly on image pixels, and therefore do not need the expensive computation of filter-bank responses or local descriptors. They are extremely fast to both train and test, especially compared with k-means clustering and nearest-neighbor assignment of feature descriptors. The nodes in the trees provide (i) an implicit hierarchical clustering into semantic textons, and (ii) an explicit local classification estimate. Our second contribution, the bag of semantic textons, combines a histogram of semantic textons over an image region with a region prior category distribution. The bag of semantic textons is computed over the whole image for categorization, and over local rectangular regions for segmentation. Including both histogram and region prior allows our segmentation algorithm to exploit both textural and semantic context. Our third contribution is an image-level prior for segmentation that emphasizes those categories that the automatic categorization believes to be present. We evaluate on two datasets including the very challenging VOC 2007 segmentation dataset. Our results significantly advance the state-of-the-art in segmentation accuracy, and furthermore, our use of efficient decision forests gives at least a five-fold increase in execution speed.",
"",
"State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.",
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. 
We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/."
]
} |
1511.02680 | 2103328396 | We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixel-wise class labels with a measure of model uncertainty. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. In addition, we show that modelling uncertainty improves segmentation performance by 2-3 across a number of state of the art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation. We also observe a significant improvement in performance for smaller datasets where modelling uncertainty is more effective. We benchmark Bayesian SegNet on the indoor SUN Scene Understanding and outdoor CamVid driving scenes datasets. | We have also seen methods which improve on these core segmentation engine architectures by adding post-processing tools. HyperColumn @cite_40 and DeConvNet @cite_10 use region proposals to bootstrap their core segmentation engine. DeepLab @cite_5 post-processes with conditional random fields (CRFs) and CRF-RNN @cite_26 uses recurrent neural networks. These methods improve performance by smoothing the output and ensuring label consistency. However, none of these proposed segmentation methods generate a probabilistic output with a measure of model uncertainty. | {
"cite_N": [
"@cite_5",
"@cite_40",
"@cite_26",
"@cite_10"
],
"mid": [
"1923697677",
"2953040121",
"2124592697",
"2952637581"
],
"abstract": [
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as feature representation. However, the information in this layer may be too coarse to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation[22], where we improve state-of-the-art from 49.7[22] mean AP^r to 60.0, keypoint localization, where we get a 3.3 point boost over[20] and part labeling, where we show a 6.6 point gain over a strong baseline.",
"Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.",
"We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained with no external data through ensemble with the fully convolutional network."
]
} |
1511.02680 | 2103328396 | We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixel-wise class labels with a measure of model uncertainty. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. In addition, we show that modelling uncertainty improves segmentation performance by 2-3 across a number of state of the art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation. We also observe a significant improvement in performance for smaller datasets where modelling uncertainty is more effective. We benchmark Bayesian SegNet on the indoor SUN Scene Understanding and outdoor CamVid driving scenes datasets. | Neural networks which model uncertainty are known as Bayesian neural networks @cite_29 @cite_9 . They offer a probabilistic interpretation of deep learning models by inferring distributions over the networks’ weights. They are often computationally very expensive, increasing the number of model parameters without increasing model capacity significantly. Performing inference in Bayesian neural networks is a difficult task, and approximations to the model posterior are often used, such as variational inference @cite_28 . | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_29"
],
"mid": [
"2108677974",
"2111051539",
"2127538960"
],
"abstract": [
"Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus.",
"A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian \"evidence\" automatically embodies \"Occam's razor,\" penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.",
"(1) The outputs of a typical multi-output classification network do not satisfy the axioms of probability; probabilities should be positive and sum to one. This problem can be solved by treating the trained network as a preprocessor that produces a feature vector that can be further processed, for instance by classical statistical estimation techniques. (2) We present a method for computing the first two moments of the probability distribution indicating the range of outputs that are consistent with the input and the training data. It is particularly useful to combine these two ideas: we implement the ideas of section 1 using Parzen windows, where the shape and relative size of each window is computed using the ideas of section 2. This allows us to make contact between important theoretical ideas (e.g. the ensemble formalism) and practical techniques (e.g. back-prop). Our results also shed new light on and generalize the well-known \"softmax\" scheme."
]
} |
1511.02680 | 2103328396 | We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixel-wise class labels with a measure of model uncertainty. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. In addition, we show that modelling uncertainty improves segmentation performance by 2-3 across a number of state of the art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation. We also observe a significant improvement in performance for smaller datasets where modelling uncertainty is more effective. We benchmark Bayesian SegNet on the indoor SUN Scene Understanding and outdoor CamVid driving scenes datasets. | On the other hand, the already significant parameterization of convolutional network architectures leaves them particularly susceptible to over-fitting without large amounts of training data. A technique known as dropout is commonly used as a regularizer in convolutional neural networks to prevent overfitting and co-adaptation of features @cite_37 . During training with stochastic gradient descent, dropout randomly removes units within a network. By doing this it samples from a number of thinned networks with reduced width. At test time, standard dropout approximates the effect of averaging the predictions of all these thinned networks by using the weights of the unthinned network. This is referred to as weight averaging. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2095705004"
],
"abstract": [
"Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets."
]
} |
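The standard-dropout scheme described in this row's related-work column — drop units during training, rescale activations at test time to approximate averaging the thinned networks — can be sketched as a toy illustration. The shapes and drop rate below are arbitrary assumptions, not anything taken from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_train(x, p_drop=0.5):
    # Randomly zero units during training; each mask samples one "thinned" network.
    mask = rng.random(x.shape) >= p_drop
    return x * mask

def dropout_test(x, p_drop=0.5):
    # Standard dropout at test time: keep all units but scale activations down,
    # approximating the average prediction of the exponentially many thinned nets.
    return x * (1.0 - p_drop)

x = np.ones(10_000)
train_mean = np.mean([dropout_train(x).mean() for _ in range(100)])
test_out = dropout_test(x).mean()
# The scaled test-time activation matches the training-time expectation.
assert abs(train_mean - test_out) < 0.01
```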
1511.02680 | 2103328396 | We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixel-wise class labels with a measure of model uncertainty. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. In addition, we show that modelling uncertainty improves segmentation performance by 2-3 across a number of state of the art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation. We also observe a significant improvement in performance for smaller datasets where modelling uncertainty is more effective. We benchmark Bayesian SegNet on the indoor SUN Scene Understanding and outdoor CamVid driving scenes datasets. | Gal and Ghahramani @cite_11 have cast dropout as approximate Bayesian inference over the network's weights. @cite_14 shows that dropout can be used at test time to impose a Bernoulli distribution over the convolutional net filter's weights, without requiring any additional model parameters. This is achieved by sampling the network with randomly dropped out units at test time. We can consider these as Monte Carlo samples obtained from the posterior distribution over models. This technique has seen success in modelling uncertainty for camera relocalisation @cite_30 . Here we apply it to pixel-wise semantic segmentation. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_11"
],
"mid": [
"2279895976",
"601603264",
"582134693"
],
"abstract": [
"We present a robust and real-time monocular six degree of freedom visual relocalization system. We use a Bayesian convolutional neural network to regress the 6-DOF camera pose from a single RGB image. It is trained in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking under 6ms to compute. It obtains approximately 2m and 6 degrees accuracy for very large scale outdoor scenes and 0.5m and 10 degrees accuracy indoors. Using a Bayesian convolutional neural network implementation we obtain an estimate of the model's relocalization uncertainty and improve state of the art localization accuracy on a large scale outdoor dataset. We leverage the uncertainty measure to estimate metric relocalization error and to detect the presence or absence of the scene in the input image. We show that the model's uncertainty is caused by images being dissimilar to the training dataset in either pose or appearance.",
"Convolutional neural networks (CNNs) work well on large datasets. But labelled data is hard to collect, and in some applications larger amounts of data are not available. The problem then is how to use CNNs with small data -- as CNNs overfit quickly. We present an efficient Bayesian CNN, offering better robustness to over-fitting on small data than traditional approaches. This is by placing a probability distribution over the CNN's kernels. We approximate our model's intractable posterior with Bernoulli variational distributions, requiring no additional model parameters. On the theoretical side, we cast dropout network training as approximate inference in Bayesian neural networks. This allows us to implement our model using existing tools in deep learning with no increase in time complexity, while highlighting a negative result in the field. We show a considerable improvement in classification accuracy compared to standard techniques and improve on published state-of-the-art results for CIFAR-10.",
"Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning."
]
} |
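The Monte Carlo dropout procedure described above — dropout kept active at test time, with repeated stochastic forward passes treated as samples from the posterior over models — might be sketched roughly as follows. The single-layer model, shapes, and sample count are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_forward(x, W, p_drop=0.5):
    # One Monte Carlo sample: dropout stays ACTIVE at test time, which
    # corresponds to sampling weights from an approximate posterior.
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    logits = (x * mask) @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = rng.standard_normal(8)            # one input (8 features)
W = rng.standard_normal((8, 3))       # toy weights, 3 classes
samples = np.stack([stochastic_forward(x, W) for _ in range(200)])
mean_prob = samples.mean(axis=0)      # predictive distribution
uncertainty = samples.var(axis=0)     # per-class spread = model uncertainty
assert np.isclose(mean_prob.sum(), 1.0)
```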
1511.02680 | 2103328396 | We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixel-wise class labels with a measure of model uncertainty. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. In addition, we show that modelling uncertainty improves segmentation performance by 2-3 across a number of state of the art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation. We also observe a significant improvement in performance for smaller datasets where modelling uncertainty is more effective. We benchmark Bayesian SegNet on the indoor SUN Scene Understanding and outdoor CamVid driving scenes datasets. | We note that the probability distribution from Monte Carlo sampling is significantly different to that obtained from a softmax classifier. The softmax function approximates relative probabilities between the class labels, but not an overall measure of the model's uncertainty @cite_11 . Figure illustrates these differences. | {
"cite_N": [
"@cite_11"
],
"mid": [
"582134693"
],
"abstract": [
"Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning."
]
} |
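The point that a softmax conveys relative probabilities between class labels rather than an overall measure of model uncertainty can be illustrated with a toy example (the logits here are made up): scaling the logits leaves their ranking unchanged but drives the softmax toward arbitrary confidence, even though the model has seen no additional evidence.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([1.0, 0.5, 0.2])
p_small = softmax(logits)        # moderate-looking confidence
p_large = softmax(10 * logits)   # same ranking, near-certain output
assert p_small.argmax() == p_large.argmax()
assert p_large.max() > p_small.max()
```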
1511.02136 | 2465015709 | We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on the GPU. Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks. | In this section we describe existing approaches to the problems of semi-supervised learning, graph classification, and edge classification, and discuss their relationship to DCNNs. Other researchers have investigated how CNNs can be extended from grid-structured to more general graph-structured data. @cite_9 propose a spatial method with ties to hierarchical clustering, where the layers of the network are defined via a hierarchical partitioning of the node set. In the same paper, the authors propose a spectral method that extends the notion of convolution to graph spectra. Later, @cite_1 applied these techniques to data where a graph is not immediately present but must be inferred. DCNNs, which fall within the spatial category, are distinct from this work because their parameterization makes them transferable; a DCNN learned on one graph can be applied to another. A related branch of work has focused on extending convolutional neural networks to domains where the structure of the graph itself is of direct interest @cite_15 @cite_10 @cite_5 .
For example, @cite_14 construct a deep convolutional model that learns real-valued fingerprint representation of chemical compounds. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_5",
"@cite_15",
"@cite_10"
],
"mid": [
"2173027866",
"1662382123",
"637153065",
"",
"2116341502",
""
],
"abstract": [
"We introduce a convolutional neural network that operates directly on graphs. These networks allow end-to-end learning of prediction pipelines whose inputs are graphs of arbitrary size and shape. The architecture we present generalizes standard molecular feature extraction methods based on circular fingerprints. We show that these data-driven features are more interpretable, and have better predictive performance on a variety of tasks.",
"Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.",
"Deep Learning's recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.",
"",
"Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function τ(G,n) ∈ ℝ^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.",
""
]
} |
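The diffusion-convolution operation named in this row's abstract — node features filtered through powers of a degree-normalised transition matrix — can be sketched minimally as below. The toy graph, the number of hops H, and the randomly initialised weights W are all assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy graph: adjacency A, node features X (4 nodes, 2 features).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 2))

# Degree-normalised transition matrix and its first H powers.
P = A / A.sum(axis=1, keepdims=True)
H = 3
P_star = np.stack([np.linalg.matrix_power(P, j) for j in range(H)])  # (H, N, N)

# Diffusion-convolutional activations: one (H, F) slab per node, each entry a
# weighted, diffused feature passed through a nonlinearity.
W = rng.standard_normal((H, 2))
Z = np.tanh(W[None, :, :] * np.einsum('hij,jf->ihf', P_star, X))
assert Z.shape == (4, H, 2)
```

The dominant cost is the tensor contraction of the diffusion kernel with the feature matrix, which matches the polynomial-complexity claim in the row's related-work text.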
1511.02136 | 2465015709 | We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on the GPU. Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks. | DCNNs also share strong ties to probabilistic relational models (PRMs), a family of graphical models that are capable of representing distributions over relational data @cite_12 . In contrast to PRMs, DCNNs are deterministic, which allows them to avoid the exponential blowup in learning and inference that hampers PRMs. | {
"cite_N": [
"@cite_12"
],
"mid": [
"1511986666"
],
"abstract": [
"Most tasks require a person or an automated system to reason -- to reach conclusions based on available information. The framework of probabilistic graphical models, presented in this book, provides a general approach for this task. The approach is model-based, allowing interpretable models to be constructed and then manipulated by reasoning algorithms. These models can also be learned automatically from data, allowing the approach to be used in cases where manually constructing a model is difficult or even impossible. Because uncertainty is an inescapable aspect of most real-world applications, the book focuses on probabilistic models, which make the uncertainty explicit and provide models that are more faithful to reality. Probabilistic Graphical Models discusses a variety of models, spanning Bayesian networks, undirected Markov networks, discrete and continuous models, and extensions to deal with dynamical systems and relational data. For each class of models, the text describes the three fundamental cornerstones: representation, inference, and learning, presenting both basic concepts and advanced techniques. Finally, the book considers the use of the proposed framework for causal reasoning and decision making under uncertainty. The main text in each chapter provides the detailed technical development of the key ideas. Most chapters also include boxes with additional material: skill boxes, which describe techniques; case study boxes, which discuss empirical cases related to the approach described in the text, including applications in computer vision, robotics, natural language understanding, and computational biology; and concept boxes, which present significant concepts drawn from the material in the chapter. Instructors (and readers) can group chapters in various combinations, from core topics to more technically advanced material, to suit their particular needs."
]
} |
1511.02136 | 2465015709 | We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on the GPU. Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks. | Our results suggest that DCNNs outperform partially-observed conditional random fields, the state-of-the-art probabilistic relational model for semi-supervised learning. Furthermore, DCNNs offer this performance at considerably lower computational cost. Learning the parameters of both DCNNs and partially-observed CRFs involves numerically minimizing a nonconvex objective -- the backpropagated error in the case of DCNNs and the negative marginal log-likelihood for CRFs. In practice, the marginal log-likelihood of a partially-observed CRF is computed using a contrast-of-partition-functions approach that requires running loopy belief propagation twice; once on the entire graph and once with the observed labels fixed @cite_0 . This algorithm, and thus each step in the numerical optimization, has exponential time complexity @math where @math is the size of the maximal clique in @math @cite_11 . In contrast, the learning subroutine for a DCNN requires only one forward and backward pass for each instance in the training data.
The complexity is dominated by the matrix multiplication between the graph definition matrix @math and the design matrix @math , giving an overall polynomial complexity of @math . | {
"cite_N": [
"@cite_0",
"@cite_11"
],
"mid": [
"2114930007",
"1775188621"
],
"abstract": [
"Conditional Random Fields (CRFs) are an effective tool for a variety of different data segmentation and labeling tasks including visual scene interpretation, which seeks to partition images into their constituent semantic-level regions and assign appropriate class labels to each region. For accurate labeling it is important to capture the global context of the image as well as local information. We introduce a CRF based scene labeling model that incorporates both local features and features aggregated over the whole image or large sections of it. Secondly, traditional CRF learning requires fully labeled datasets which can be costly and troublesome to produce. We introduce a method for learning CRFs from datasets with many unlabeled nodes by marginalizing out the unknown labels so that the log-likelihood of the known ones can be maximized by gradient ascent. Loopy Belief Propagation is used to approximate the marginals needed for the gradient and log-likelihood calculations and the Bethe free-energy approximation to the log-likelihood is monitored to control the step size. Our experimental results show that effective models can be learned from fragmentary labelings and that incorporating top-down aggregate features significantly improves the segmentations. The resulting segmentations are compared to the state-of-the-art on three different image datasets.",
"Conditional Random Fields (CRFs) are widely known to scale poorly, particularly for tasks with large numbers of states or with richly connected graphical structures. This is a consequence of inference having a time complexity which is at best quadratic in the number of states. This paper describes a novel parameterisation of the CRF which ties the majority of clique potentials, while allowing individual potentials for a subset of the labellings. This has two beneficial effects: the parameter space of the model (and thus the propensity to over-fit) is reduced, and the time complexity of training and decoding becomes sub-quadratic. On a standard natural language task, we reduce CRF training time four-fold, with no loss in accuracy. We also show how inference can be performed efficiently in richly connected graphs, in which current methods are intractable."
]
} |
1511.02136 | 2465015709 | We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on the GPU. Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks. | Kernel methods define similarity measures either between nodes (so-called kernels on graphs) @cite_13 or between graphs (graph kernels) and these similarities can serve as a basis for prediction via the kernel trick. Note that 'kernels on graphs', which are concerned with nodes, should not be confused with 'graph kernels', which are concerned with whole graphs. The performance of graph kernels can be improved by decomposing a graph into substructures, treating those substructures as words in a sentence, and fitting a word-embedding model to obtain a vectorization @cite_6 .
"cite_N": [
"@cite_13",
"@cite_6"
],
"mid": [
"2105295920",
"2008857988"
],
"abstract": [
"This paper presents a survey as well as an empirical comparison and evaluation of seven kernels on graphs and two related similarity matrices, that we globally refer to as ''kernels on graphs'' for simplicity. They are the exponential diffusion kernel, the Laplacian exponential diffusion kernel, the von Neumann diffusion kernel, the regularized Laplacian kernel, the commute-time (or resistance-distance) kernel, the random-walk-with-restart similarity matrix, and finally, a kernel first introduced in this paper (the regularized commute-time kernel) and two kernels defined in some of our previous work and further investigated in this paper (the Markov diffusion kernel and the relative-entropy diffusion matrix). The kernel-on-graphs approach is simple and intuitive. It is illustrated by applying the nine kernels to a collaborative-recommendation task, viewed as a link prediction problem, and to a semisupervised classification task, both on several databases. The methods compute proximity measures between nodes that help study the structure of the graph. Our comparisons suggest that the regularized commute-time and the Markov diffusion kernels perform best on the investigated tasks, closely followed by the regularized Laplacian kernel.",
"In this paper, we present Deep Graph Kernels, a unified framework to learn latent representations of sub-structures for graphs, inspired by latest advancements in language modeling and deep learning. Our framework leverages the dependency information between sub-structures by learning their latent representations. We demonstrate instances of our framework on three popular graph kernels, namely Graphlet kernels, Weisfeiler-Lehman subtree kernels, and Shortest-Path graph kernels. Our experiments on several benchmark datasets show that Deep Graph Kernels achieve significant improvements in classification accuracy over state-of-the-art graph kernels."
]
} |
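One concrete instance of a node-level kernel on graphs mentioned in the cited survey is the regularized Laplacian kernel. A toy computation, with the graph and the regularisation weight alpha chosen arbitrarily:

```python
import numpy as np

# Toy path graph on 4 nodes.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                                  # graph Laplacian

# Regularized Laplacian kernel: K = (I + alpha * L)^-1.
alpha = 0.5
K = np.linalg.inv(np.eye(4) + alpha * L)

# A valid kernel on graphs is a symmetric positive-definite node-similarity matrix.
assert np.allclose(K, K.T)
assert np.all(np.linalg.eigvalsh(K) > 0)
```

Entries of K can then be fed to any kernelised predictor via the kernel trick, e.g. for semi-supervised node classification.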
1511.02126 | 2171427884 | Deep ConvNets have shown its good performance in image classification tasks. However it still remains as a problem in deep video representation for action recognition. The problem comes from two aspects: on one hand, current video ConvNets are relatively shallow compared with image ConvNets, which limits its capability of capturing the complex video action information; on the other hand, temporal information of videos is not properly utilized to pool and encode the video sequences. Towards these issues, in this paper, we utilize two state-of-the-art ConvNets, i.e., the very deep spatial net (VGGNet) and the temporal net from Two-Stream ConvNets, for action representation. The convolutional layers and the proposed new layer, called frame-diff layer, are extracted and pooled with two temporal pooling strategy: Trajectory pooling and line pooling. The pooled local descriptors are then encoded with VLAD to form the video representations. In order to verify the effectiveness of the proposed framework, we conduct experiments on UCF101 and HMDB51 datasets. It achieves the accuracy of 93.78 on UCF101 which is the state-of-the-art and the accuracy of 65.62 on HMDB51 which is comparable to the state-of-the-art. | Analogous to image classification, early research on action recognition widely used local descriptors with the BOF model, such as the 3D Histogram of Gradients (HOG3D) @cite_38 and Extended SURF (ESURF) @cite_9 . The difference from images is that these local descriptors are extracted and pooled over sparse spatio-temporal interest points. In @cite_30 , the Harris3D detector is used to detect informative regions and the interest points are described by Histograms of Gradients (HOG) and Histograms of Optical Flow (HOF) @cite_30 . In @cite_29 , SIFT key points and corresponding optical flows of the same scale are detected and extracted. They are then described by HOG and HOF respectively.
Instead of computing local features over spatio-temporal cuboids, the state-of-the-art local feature (i.e., iDT) @cite_8 detects dense point trajectories and then pools local features along the trajectories to form local descriptors with HOG, HOF and Motion Boundary Histograms (MBH). A Fisher vector is then used to aggregate these local descriptors over the whole video into a global super vector. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_8",
"@cite_9",
"@cite_29"
],
"mid": [
"2142194269",
"2024868105",
"2105101328",
"1534763723",
"1513753641"
],
"abstract": [
"The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8 accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.",
"In this work, we present a novel local descriptor for video sequences. The proposed descriptor is based on histograms of oriented 3D spatio-temporal gradients. Our contribution is four-fold. (i) To compute 3D gradients for arbitrary scales, we develop a memory-efficient algorithm based on integral videos. (ii) We propose a generic 3D orientation quantization which is based on regular polyhedrons. (iii) We perform an in-depth evaluation of all descriptor parameters and optimize them for action recognition. (iv) We apply our descriptor to various action datasets (KTH, Weizmann, Hollywood) and show that we outperform the state-of-the-art.",
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"Over the years, several spatio-temporal interest point detectors have been proposed. While some detectors can only extract a sparse set of scale-invariant features, others allow for the detection of a larger amount of features at user-defined scales. This paper presents for the first time spatio-temporal interest points that are at the same time scale-invariant (both spatially and temporally) and densely cover the video content. Moreover, as opposed to earlier work, the features can be computed efficiently. Applying scale-space theory, we show that this can be achieved by using the determinant of the Hessian as the saliency measure. Computations are speeded-up further through the use of approximative box-filter operations on an integral video structure. A quantitative evaluation and experimental results on action recognition show the strengths of the proposed detector in terms of repeatability, accuracy and speed, in comparison with previously proposed detectors.",
"The goal of this paper is to build robust human action recognition for real world surveillance videos. Local spatio-temporal features around interest points provide compact but descriptive representations for video analysis and motion recognition. Current approaches tend to extend spatial descriptions by adding a temporal component for the appearance descriptor, which only implicitly captures motion information. We propose an algorithm called MoSIFT, which detects interest points and encodes not only their local appearance but also explicitly models local motion. The idea is to detect distinctive local features through local appearance and motion. We construct MoSIFT feature descriptors in the spirit of the well-known SIFT descriptors to be robust to small deformations through grid aggregation. We also introduce a bigram model to construct a correlation between local features to capture the more global structure of actions. The method advances the state of the art result on the KTH dataset to an accuracy of 95.8 . We also applied our approach to 100 hours of surveillance data as part of the TRECVID Event Detection task with very promising results on recognizing human actions in the real world surveillance videos."
]
} |
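The HOG-style local descriptors surveyed in this row can be sketched in miniature: an orientation histogram of gradients per frame, pooled over a small spatio-temporal cuboid. This is a toy illustration with assumed patch sizes and bin counts, not the cited implementations:

```python
import numpy as np

rng = np.random.default_rng(3)

def orientation_histogram(patch, n_bins=8):
    # HOG-style descriptor: histogram of gradient orientations over one patch,
    # weighted by gradient magnitude, then L1-normalised.
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-8)

# Pool per-frame descriptors over a spatio-temporal cuboid (here: 5 frames of 16x16).
cuboid = rng.standard_normal((5, 16, 16))
descriptor = np.concatenate([orientation_histogram(f) for f in cuboid])
assert descriptor.shape == (5 * 8,)
```

In the surveyed pipelines such descriptors are computed around detected interest points (or along trajectories) and aggregated with a BOF or Fisher-vector encoding.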
1511.02126 | 2171427884 | Deep ConvNets have shown its good performance in image classification tasks. However it still remains as a problem in deep video representation for action recognition. The problem comes from two aspects: on one hand, current video ConvNets are relatively shallow compared with image ConvNets, which limits its capability of capturing the complex video action information; on the other hand, temporal information of videos is not properly utilized to pool and encode the video sequences. Towards these issues, in this paper, we utilize two state-of-the-art ConvNets, i.e., the very deep spatial net (VGGNet) and the temporal net from Two-Stream ConvNets, for action representation. The convolutional layers and the proposed new layer, called frame-diff layer, are extracted and pooled with two temporal pooling strategy: Trajectory pooling and line pooling. The pooled local descriptors are then encoded with VLAD to form the video representations. In order to verify the effectiveness of the proposed framework, we conduct experiments on UCF101 and HMDB51 datasets. It achieves the accuracy of 93.78 on UCF101 which is the state-of-the-art and the accuracy of 65.62 on HMDB51 which is comparable to the state-of-the-art. | Inspired by the great success in deep image classification, a series of attempts have been made for video action recognition @cite_28 @cite_5 @cite_21 @cite_24 @cite_32 . In @cite_19 , video frames are regarded as still images to extract fully-connected layer features. Average pooling is then applied across frames to obtain video features. @cite_21 used multi-scale pooling on the pooling @math layer to get latent concept descriptors and encoded them by VLAD for event detection. However, temporal motion information is not employed in these methods. In order to learn the motion features, @cite_24 changed the first convolutional layer to extend 2D ConvNets to videos for action recognition on relatively small datasets.
@cite_32 used different time fusion strategies and trained the ConvNets on a large dataset, called Sports-1M. Recently, @cite_28 designed Two-Stream ConvNets containing spatial and temporal nets aiming to capture the discriminative appearance feature and motion feature, which achieves performance competitive with the state of the art. | {
"cite_N": [
"@cite_28",
"@cite_21",
"@cite_32",
"@cite_24",
"@cite_19",
"@cite_5"
],
"mid": [
"2156303437",
"1950136256",
"2016053056",
"1983364832",
"240069591",
"1944615693"
],
"abstract": [
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkits. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6% to 36.8% for the TRECVID MEDTest 14 dataset and from 34.0% to 44.6% for the TRECVID MEDTest 13 dataset.",
"Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3% to 63.9%), but only a surprisingly modest improvement compared to single-frame models (59.3% to 60.9%). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3% up from 43.9%).",
"We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.",
"This notebook paper describes our approach for the action classification task of the THUMOS Challenge 2014. We investigate and exploit the action-object relationship by capturing both motion and related objects. As local descriptors we use HOG, HOF and MBH computed along the improved dense trajectories. For video encoding we rely on Fisher vector. In addition, we employ deep net features learned from object attributes to capture action context. All actions are classified with a one-versus-rest linear SVM.",
"Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features [31] and deep-learned features [24]. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. The advantages of our features come from (i) TDDs are automatically learned and contain high discriminative capacity compared with those hand-crafted features; (ii) TDDs take account of the intrinsic characteristics of temporal dimension and introduce the strategies of trajectory-constrained sampling and pooling for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMDB51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features [31] and deep-learned features [24]. Our method also achieves superior performance to the state of the art on these datasets."
]
} |
1511.02086 | 2219033393 | In the wake of recent advances in experimental methods in neuroscience, the ability to record in-vivo neuronal activity from awake animals has become feasible. The availability of such rich and detailed physiological measurements calls for the development of advanced data analysis tools, as commonly used techniques do not suffice to capture the spatio-temporal network complexity. In this paper, we propose a new hierarchical coupled-geometry analysis that implicitly takes into account the connectivity structures between neurons and the dynamic patterns at multiple time scales. Our approach gives rise to the joint organization of neurons and dynamic patterns in data-driven hierarchical data structures. These structures provide local to global data representations, from local partitioning of the data in flexible trees through a new multiscale metric to a global manifold embedding. The application of our techniques to in-vivo neuronal recordings demonstrates the capability of extracting neuronal activity patterns and identifying temporal trends, associated with particular behavioral events and manipulations introduced in the experiments. | Current network analysis approaches in neuroscience can be divided into two main classes @cite_38 @cite_46 . The first class comprises methods that aim to determine functional connectivity, defined in terms of statistical dependencies between measured elements (e.g., neurons or networks), by constructing direct statistical models from the data (e.g., Granger causality, transfer entropy, point process modeling and graph-based methods) @cite_5 @cite_35 @cite_7 @cite_46 . The second class of methods is often based on Latent Dynamical Systems (LDS), which accommodates effective connectivity characterizing the causal relations between elements through an underlying hidden dynamical system @cite_38 @cite_45 @cite_11 .
Non-linear and non-Gaussian extensions of the Kalman filter, contemporary sequential Monte Carlo methods and particle filters, have also been introduced in neuroscience @cite_9 @cite_28 . | {
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_7",
"@cite_28",
"@cite_9",
"@cite_45",
"@cite_5",
"@cite_46",
"@cite_11"
],
"mid": [
"2018305040",
"2041782669",
"2129983824",
"2126334242",
"2171077154",
"2135993484",
"1636081627",
"2090668679",
"2105522354"
],
"abstract": [
"This review considers state-of-the-art analyses of functional integration in neuronal macrocircuits. We focus on detecting and estimating directed connectivity in neuronal networks using Granger causality (GC) and dynamic causal modelling (DCM). These approaches are considered in the context of functional segregation and integration and — within functional integration — the distinction between functional and effective connectivity. We review recent developments that have enjoyed a rapid uptake in the discovery and quantification of functional brain architectures. GC and DCM have distinct and complementary ambitions that are usefully considered in relation to the detection of functional connectivity and the identification of models of effective connectivity. We highlight the basic ideas upon which they are grounded, provide a comparative evaluation and point to some outstanding issues.",
"An information theoretic measure is derived that quantifies the statistical coherence between systems evolving in time. The standard time delayed mutual information fails to distinguish information that is actually exchanged from shared information due to common history and input signals. In our new approach, these influences are excluded by appropriate conditioning of transition probabilities. The resulting transfer entropy is able to distinguish effectively driving and responding elements and to detect asymmetry in the interaction of subsystems.",
"Multiple factors simultaneously affect the spiking activity of individual neurons. Determining the effects and relative importance of these factors is a challenging problem in neurophysiology. We propose a statistical framework based on the point process likelihood function to relate a neuron's spiking probability to three typical covariates: the neuron's own spiking history, concurrent ensemble activity, and extrinsic covariates such as stimuli or behavior. The framework uses parametric models of the conditional intensity function to define a neuron's spiking probability in terms of the covariates. The discrete time likelihood function for point processes is used to carry out model fitting and model analysis. We show that, by modeling the logarithm of the conditional intensity function as a linear combination of functions of the covariates, the discrete time point process likelihood function is readily analyzed in the generalized linear model (GLM) framework. We illustrate our approach for both GLM and non-GLM likelihood functions using simulated data and multivariate single-unit activity data simultaneously recorded from the motor cortex of a monkey performing a visuomotor pursuit-tracking task. The point process framework provides a flexible, computationally efficient approach for maximum likelihood estimation, goodness-of-fit assessment, residual analysis, model selection, and neural decoding. The framework thus allows for the formulation and analysis of point process models of neural spiking activity that readily capture the simultaneous effects of multiple covariates and enables the assessment of their relative importance.",
"Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly nongaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the “hit-and-run” algorithm performed better than other MCMC methods. Using these algorithms, we show that for this latter class of priors, the posterior mean estimate can have a considerably lower average error than MAP, whereas for gaussian priors, the two estimators have roughly equal efficiency. We also address the application of MCMC methods for extracting nonmarginal properties of the posterior distribution. 
For example, by using MCMC to calculate the mutual information between the stimulus and response, we verify the validity of a computationally efficient Laplace approximation to this quantity for gaussian priors in a wide range of model parameters; this makes direct model-based computation of the mutual information tractable even in the case of large observed neural populations, where methods based on binning the spike train fail. Finally, we consider the effect of uncertainty in the GLM parameters on the posterior estimators.",
"We present a switching Kalman filter model for the real-time inference of hand kinematics from a population of motor cortical neurons. Firing rates are modeled as a Gaussian mixture where the mean of each Gaussian component is a linear function of hand kinematics. A \"hidden state\" models the probability of each mixture component and evolves over time in a Markov chain. The model generalizes previous encoding and decoding methods, addresses the non-Gaussian nature of firing rates, and can cope with crudely sorted neural data common in on-line prosthetic applications.",
"Our ability to move is central to everyday life. Investigating the neural control of movement in general, and the cortical control of volitional arm movements in particular, has been a major research focus in recent decades. Studies have involved primarily either attempts to account for single-neuron responses in terms of tuning for movement parameters or attempts to decode movement parameters from populations of tuned neurons. Even though this focus on encoding and decoding has led to many seminal advances, it has not produced an agreed-upon conceptual framework. Interest in understanding the underlying neural dynamics has recently increased, leading to questions such as how does the current population response determine the future population response, and to what purpose? We review how a dynamical systems perspective may help us understand why neural activity evolves the way it does, how neural activity relates to movement parameters, and how a unified conceptual framework may result.",
"Multi-electrode neurophysiological recordings produce massive quantities of data. Multivariate time series analysis provides the basic framework for analyzing the patterns of neural interactions in these data. It has long been recognized that neural interactions are directional. Being able to assess the directionality of neuronal interactions is thus a highly desired capability for understanding the cooperative nature of neural computation. Research over the last few years has shown that Granger causality is a key technique to furnish this capability. The main goal of this article is to provide an expository introduction to the concept of Granger causality. Mathematical frameworks for both bivariate Granger causality and conditional Granger causality are developed in detail with particular emphasis on their spectral representations. The technique is demonstrated in numerical examples where the exact answers of causal influences are known. It is then applied to analyze multichannel local field potentials recorded from monkeys performing a visuomotor task. Our results are shown to be physiologically interpretable and yield new insights into the dynamical organization of large-scale oscillatory cortical networks.",
"The author reviews network models of the brain, including models of both structural and functional connectivity. He discusses contributions of network models to cognitive neuroscience, as well as limitations and challenges associated with constructing and interpreting these models.",
"Neural responses in visual cortex are influenced by visual stimuli and by ongoing spiking activity in local circuits. An important challenge in computational neuroscience is to develop models that can account for both of these features in large multi-neuron recordings and to reveal how stimulus representations interact with and depend on cortical dynamics. Here we introduce a statistical model of neural population activity that integrates a nonlinear receptive field model with a latent dynamical model of ongoing cortical activity. This model captures temporal dynamics and correlations due to shared stimulus drive as well as common noise. Moreover, because the nonlinear stimulus inputs are mixed by the ongoing dynamics, the model can account for multiple idiosyncratic receptive field shapes with a small number of nonlinear inputs to a low-dimensional dynamical model. We introduce a fast estimation method using online expectation maximization with Laplace approximations, for which inference scales linearly in both population size and recording duration. We apply this model to multi-channel recordings from primary visual cortex and show that it accounts for neural tuning properties as well as cross-neural correlations."
]
} |
1511.02086 | 2219033393 | In the wake of recent advances in experimental methods in neuroscience, the ability to record in-vivo neuronal activity from awake animals has become feasible. The availability of such rich and detailed physiological measurements calls for the development of advanced data analysis tools, as commonly used techniques do not suffice to capture the spatio-temporal network complexity. In this paper, we propose a new hierarchical coupled-geometry analysis that implicitly takes into account the connectivity structures between neurons and the dynamic patterns at multiple time scales. Our approach gives rise to the joint organization of neurons and dynamic patterns in data-driven hierarchical data structures. These structures provide local to global data representations, from local partitioning of the data in flexible trees through a new multiscale metric to a global manifold embedding. The application of our techniques to in-vivo neuronal recordings demonstrates the capability of extracting neuronal activity patterns and identifying temporal trends, associated with particular behavioral events and manipulations introduced in the experiments. | These methods share significant drawbacks, as they are mostly heuristic, providing only an approximation of a largely unknown system, and their quality is often hard to assess @cite_0 . More importantly, they are all prone to the "curse of dimensionality". On the one hand, designing a parametric generative model for truly complex high-dimensional data, such as neuronal behavioral recordings, requires considerable flexibility, resulting in a model with a large number of tunable parameters. On the other hand, estimating a large number of parameters, and fitting a predefined class of dynamical models to high-dimensional data, is practically infeasible, thereby leading to poor data representations.
Our approach is better designed to deal with dynamical systems and aims to alleviate the shortcomings present in current analysis methods. The proposed framework deviates from common methods recently used in neuroscience, as it makes only very general smoothness assumptions rather than postulating specific structural models a priori. In addition, we show that it takes into consideration the high dimensional spatio-temporal neuronal network structure. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2017539895"
],
"abstract": [
"Most sensory, cognitive and motor functions depend on the interactions of many neurons. In recent years, there has been rapid development and increasing use of technologies for recording from large numbers of neurons, either sequentially or simultaneously. A key question is what scientific insight can be gained by studying a population of recorded neurons beyond studying each neuron individually. Here, we examine three important motivations for population studies: single-trial hypotheses requiring statistical power, hypotheses of population response structure and exploratory analyses of large data sets. Many recent studies have adopted dimensionality reduction to analyze these populations and to find features that are not apparent at the level of individual neurons. We describe the dimensionality reduction methods commonly applied to population activity and offer practical advice about selecting methods and interpreting their outputs. This review is intended for experimental and computational researchers who seek to understand the role dimensionality reduction has had and can have in systems neuroscience, and who seek to apply these methods to their own data."
]
} |
1511.02425 | 2410943269 | We propose a low complexity antenna selection algorithm for low target rate users in cloud radio access networks. The algorithm consists of two phases: In the first phase, each remote radio head (RRH) determines whether to be included in a candidate set by using a predefined selection threshold. In the second phase, RRHs are randomly selected within the candidate set made in the first phase. To analyze the performance of the proposed algorithm, we model RRH and user locations by a homogeneous Poisson point process, whereby the signal-to-interference ratio (SIR) complementary cumulative distribution function is derived. By approximating the derived expression, an approximate optimum selection threshold that maximizes the SIR coverage probability is obtained. Using the obtained threshold, we characterize the performance of the algorithm in an asymptotic regime where the RRH density goes to infinity. The obtained threshold is then modified depending on various algorithm options. A distinguishable feature of the proposed algorithm is that the algorithm complexity remains constant independent of the RRH density, so that a user is able to connect to a network without heavy computation at baseband units. | RRH selection methods in C-RANs were proposed in @cite_8 @cite_20 @cite_32 @cite_23 @cite_4 @cite_35 @cite_24 . In @cite_8 , the downlink sum-rate was characterized as a function of a subset of RRHs and, based on that, a combinatorial optimization problem was formulated to find the optimal subset of RRHs. Similar to @cite_8 , in @cite_16 , an optimization problem to select the RRHs was formulated, but with the goal of minimizing network power consumption. In @cite_19 , to reduce the complexity caused by estimating the instantaneous channel and computing an uplink receiver filter, a channel matrix sparsifying algorithm was proposed for the MMSE receiver.
In @cite_2 @cite_4 @cite_35 , motivated by energy efficiency in large distributed networks, energy-efficient antenna selection algorithms were proposed. In @cite_24 , a multi-mode antenna selection algorithm was proposed that chooses whether to serve a user with one antenna or with multiple antennas. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_8",
"@cite_32",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_16",
"@cite_20"
],
"mid": [
"",
"",
"2047468088",
"",
"2159637866",
"64867182",
"",
"1987089531",
"1928152501",
""
],
"abstract": [
"",
"",
"Large multiple-input multiple-output (MIMO) networks promise high energy efficiency, i.e., much less power is required to achieve the same capacity compared to the conventional MIMO networks if perfect channel state information (CSI) is available at the transmitter. However, in such networks, huge overhead is required to obtain full CSI especially for Frequency-Division Duplex (FDD) systems. To reduce overhead, we propose a downlink antenna selection scheme, which selects S antennas from M > S transmit antennas based on the large scale fading to serve K ≤ S users in large distributed MIMO networks employing regularized zero-forcing (RZF) precoding. In particular, we study the joint optimization of antenna selection, regularization factor, and power allocation to maximize the average weighted sum-rate. This is a mixed combinatorial and non-convex problem whose objective and constraints have no closed-form expressions. We apply random matrix theory to derive asymptotically accurate expressions for the objective and constraints. As such, the joint optimization problem is decomposed into subproblems, each of which is solved by an efficient algorithm. In addition, we derive structural solutions for some special cases and show that the capacity of very large distributed MIMO networks scales as O(KlogM) when M→∞ with K, S fixed. Simulations show that the proposed scheme achieves significant performance gain over various baselines.",
"",
"Distributed antenna systems (DAS) have been widely implemented in state-of-the art cellular communication systems to cover dead spots. Recent academic studies have shown that in addition to coverage improvements, DAS can also have potential advantages such as reduced power and increased system capacity in a single cell environment. This paper analytically quantifies downlink capacity of multicell DAS for two different transmission strategies: selection diversity (where just one or two of the distributed antennas are used) and blanket transmission (where all antennas in the cell broadcast data). Simple repeaters are a special case of our analysis. A generalized information theoretic analysis is provided to illuminate the fundamental limits of such systems in the cellular context. The results show that DAS reduces other-cell interference in a multicell environment and hence significantly improves capacity (by about 2x), with particularly large improvements for users near cell boundaries. Less obviously, from a communication theory standpoint, it is shown that selection diversity is preferable to blanket transmission in terms of achievable ergodic capacity. For blanket transmission, we show that the optimal transmission strategy is just phase steering due to the per antenna module power constraints in DAS",
"Featured by centralized processing and cloud based infrastructure, Cloud Radio Access Network (C-RAN) is a promising solution to achieving an unprecedented system capacity in future wireless cellular networks. The huge capacity gain mainly comes from the centralized and coordinated signal processing at the cloud server. However, full-scale coordination in a large-scale C-RAN requires the processing of very large channel matrices, leading to high computational complexity and channel estimation overhead. To tackle this challenge, we establish a unified theoretical framework for dynamic clustering by exploiting the near-sparsity of large C-RAN channel matrices. Based on this framework, we propose a dynamic nested clustering (DNC) algorithm that greatly improves the system scalability in terms of baseband-processing and channel-estimation complexity. With the proposed DNC algorithm, we show that the computational complexity (i.e., the computation time with serial processing) for the optimal linear detector is significantly reduced from @math to @math , where @math is the number of remote radio heads (RRHs) in the C-RAN. Moreover, the proposed DNC algorithm is also amenable to parallel processing, which further reduces the computation time to @math .",
"",
"Large-scale distributed-antenna system (L-DAS) with very large number of distributed antennas, possibly up to a few hundred antennas, is considered. A few major issues of the L-DAS, such as high latency, energy consumption, computational complexity, and large feedback (signaling) overhead, are identified. The potential capability of the L-DAS is illuminated in terms of an energy efficiency (EE) throughout the paper. We firstly and generally model the power consumption of an L-DAS, and formulate an EE maximization problem. To tackle two crucial issues, namely the huge computational complexity and large amount of feedback (signaling) information, we propose a channel-gain-based antenna selection (AS) method and an interference-based user clustering (UC) method. The original problem is then split into multiple subproblems by a cluster, and each cluster's precoding and power control are managed in parallel for high EE. Simulation results reveal that i) using all antennas for zero-forcing multiuser multiple-input multiple-output (MU-MIMO) is energy inefficient if there is nonnegligible overhead power consumption on MU-MIMO processing, and ii) increasing the number of antennas does not necessarily result in a high EE. Furthermore, the results validate and underpin the EE merit of the proposed L-DAS complied with the AS, UC, precoding, and power control by comparing with non-clustering L-DAS and colocated antenna systems.",
"A cloud radio access network (Cloud-RAN) is a network architecture that holds the promise of meeting the explosive growth of mobile data traffic. In this architecture, all the baseband signal processing is shifted to a single baseband unit (BBU) pool, which enables efficient resource allocation and interference management. Meanwhile, conventional powerful base stations can be replaced by low-cost low-power remote radio heads (RRHs), producing a green and low-cost infrastructure. However, as all the RRHs need to be connected to the BBU pool through optical transport links, the transport network power consumption becomes significant. In this paper, we propose a new framework to design a green Cloud-RAN, which is formulated as a joint RRH selection and power minimization beamforming problem. To efficiently solve this problem, we first propose a greedy selection algorithm, which is shown to provide near-optimal performance. To further reduce the complexity, a novel group sparse beamforming method is proposed by inducing the group-sparsity of beamformers using the weighted -norm minimization, where the group sparsity pattern indicates those RRHs that can be switched off. Simulation results will show that the proposed algorithms significantly reduce the network power consumption and demonstrate the importance of considering the transport link power consumption.",
""
]
} |
1511.02425 | 2410943269 | We propose a low complexity antenna selection algorithm for low target rate users in cloud radio access networks. The algorithm consists of two phases: In the first phase, each remote radio head (RRH) determines whether to be included in a candidate set by using a predefined selection threshold. In the second phase, RRHs are randomly selected within the candidate set made in the first phase. To analyze the performance of the proposed algorithm, we model RRH and user locations by a homogeneous Poisson point process, whereby the signal-to-interference ratio (SIR) complementary cumulative distribution function is derived. By approximating the derived expression, an approximate optimum selection threshold that maximizes the SIR coverage probability is obtained. Using the obtained threshold, we characterize the performance of the algorithm in an asymptotic regime where the RRH density goes to infinity. The obtained threshold is then modified depending on various algorithm options. A distinguishable feature of the proposed algorithm is that the algorithm complexity remains constant independent of the RRH density, so that a user is able to connect to a network without heavy computation at baseband units. | In another line of research, the signal-to-interference ratio (SIR) coverage probability was characterized when using various cooperation techniques under the assumption of a network modeled by a homogeneous Poisson point process (PPP). For instance, in @cite_34 , the SIR coverage was analyzed in an uplink C-RAN, where a user is associated with the nearest RRHs. Assuming a downlink C-RAN where a user is served by multiple RRHs (or base stations (BSs)), the SIR coverage probability was characterized in @cite_1 @cite_12 @cite_0 @cite_26 @cite_9 . Further, by using this characterization, the optimum cluster size was obtained in @cite_1 @cite_12 @cite_0 .
In @cite_21 , the SIR coverage performance of rate-splitting with pair-wise BS cooperation was characterized. Considering a multi-tier network, in @cite_31 @cite_30 @cite_17 , a joint transmission method for heterogeneous networks was proposed and its SIR performance was analyzed. While the benefits of cooperation were the main topic in @cite_1 @cite_12 @cite_0 @cite_26 @cite_9 @cite_7 @cite_22 @cite_30 @cite_17 , @cite_38 focused on how each user can obtain the benefits of cooperation while avoiding the BS conflict problem, which occurs when multiple users want to be served by the same BS. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_0",
"@cite_31",
"@cite_34",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"1660531227",
"",
"",
"",
"",
"2028015864",
"1991712798",
"",
"2058477717",
"1995920361",
"",
""
],
"abstract": [
"",
"This paper proposes a method for designing base station (BS) clusters and cluster patterns for pair-wise BS coordination. The key idea is that each BS cluster is formed by using the second-order Voronoi region, and the BS clusters are assigned to a specific cluster pattern by using edge-coloring for a graph drawn by Delaunay triangulation. The main advantage of the proposed method is that the BS selection conflict problem is prevented, while users are guaranteed to communicate with their two closest BSs in any irregular BS topology. With the proposed coordination method, analytical expressions for the rate distribution and the ergodic spectral efficiency are derived as a function of relevant system parameters in a fixed irregular network model. In a random network model with a homogeneous Poisson point process, a lower bound on the ergodic spectral efficiency is characterized. Through system level simulations, the performance of the proposed method is compared with that of conventional coordination methods: dynamic clustering and static clustering. Our major finding is that, when users are dense enough in a network, the proposed method provides edge users the same level of coordination benefit as dynamic clustering.",
"",
"",
"",
"",
"Cooperation in cellular networks is a promising scheme to improve system performance, especially for cell-edge users. In this work, stochastic geometry is used to analyze cooperation models where the positions of base stations follow a Poisson point process distribution and where Voronoi cells define the planar areas associated with them. For the service of each user, either one or two base stations are involved. If two, these cooperate by exchange of user data and channel related information with conferencing over some backhaul link. Our framework generally allows for variable levels of channel information at the transmitters. This paper is focused on a case of limited information based on Willems' encoding. The total per-user transmission power is split between the two transmitters and a common message is encoded. The decision for a user to choose service with or without cooperation is directed by a family of geometric policies, depending on its relative position to its two closest base stations. An exact expression of the network coverage probability is derived. Numerical evaluation shows average coverage benefits of up to 17% compared to the non-cooperative case. Various other network problems of cellular cooperation, like the fully adaptive case, can be analyzed within our framework.",
"This paper characterizes the performance of coordinated beamforming with dynamic clustering. A downlink model based on stochastic geometry is put forth to analyze the performance of such a base station (BS) coordination strategy. Analytical expressions for the complementary cumulative distribution function (CCDF) of the instantaneous signal-to-interference ratio (SIR) are derived in terms of relevant system parameters, chiefly the number of BSs forming the coordination clusters, the number of antennas per BS, and the pathloss exponent. Utilizing this CCDF, with pilot overheads further incorporated into the analysis, we formulate the optimization of the BS coordination clusters for a given fading coherence. Our results indicate that: 1) coordinated beamforming is most beneficial to users that are in the outer part of their cells yet in the inner part of their coordination cluster and that 2) the optimal cluster cardinality for the typical user is small and it scales with the fading coherence. Simulation results verify the exactness of the SIR distributions derived for stochastic geometries, which are further compared with the corresponding distributions for deterministic grid networks.",
"",
"Motivated by the ongoing discussion on coordinated multipoint in wireless cellular standard bodies, this paper considers the problem of base station cooperation in the downlink of heterogeneous cellular networks. The focus of this paper is the joint transmission scenario, where an ideal backhaul network allows a set of randomly located base stations, possibly belonging to different network tiers, to jointly transmit data, to mitigate intercell interference and hence improve coverage and spectral efficiency. Using tools from stochastic geometry, an integral expression for the network coverage probability is derived in the scenario where the typical user located at an arbitrary location, i.e., the general user, receives data from a pool of base stations that are selected based on their average received power levels. An expression for the coverage probability is also derived for the typical user located at the point equidistant from three base stations, which we refer to as the worst case user. In the special case where cooperation is limited to two base stations, numerical evaluations illustrate absolute gains in coverage probability of up to 17% for the general user and 24% for the worst case user compared with the noncooperative case. It is also shown that no diversity gain is achieved using noncoherent joint transmission, whereas full diversity gain can be achieved at the receiver if the transmitting base stations have channel state information.",
"Characterizing user to remote radio head (RRH) association strategies in cloud radio access networks (C-RANs) is critical for performance optimization. In this letter, the single nearest and N-nearest RRH association strategies are presented, and the corresponding impact on the ergodic capacity of C-RANs is analyzed, where RRHs are distributed according to a stationary point process. Closed-form expressions for the ergodic capacity of the proposed RRH association strategies are derived. Simulation results demonstrate that the derived uplink closed-form capacity expressions are accurate. Furthermore, the analysis and simulation results show that the ergodic capacity gain is not linear with either the RRH density or the number of antennas per RRH. The ergodic capacity gain from the RRH density is larger than that from the number of antennas per RRH, which indicates that the association number of the RRH should not be bigger than 4 to balance the performance gain and the implementation cost.",
"",
""
]
} |
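The PPP-based SIR coverage analyses surveyed in the row above can be reproduced in miniature by Monte Carlo simulation. The sketch below is an assumption-laden toy (typical user at the origin, nearest-RRH association, Rayleigh fading, power-law path loss, RRHs on a finite disk), not the closed-form analysis of any cited paper.

```python
import numpy as np

def sir_coverage(density, radius, threshold_db, trials=1000, alpha=4.0, rng=None):
    """Monte Carlo estimate of P(SIR > threshold) for a typical user at the
    origin served by its nearest RRH, with RRH locations drawn from a
    homogeneous PPP of the given density on a disk of the given radius
    (Rayleigh fading, path loss r^-alpha)."""
    rng = np.random.default_rng(rng)
    thr = 10.0 ** (threshold_db / 10.0)
    covered = 0
    for _ in range(trials):
        n = rng.poisson(density * np.pi * radius ** 2)
        if n < 2:
            continue  # need at least one server and one interferer
        r = radius * np.sqrt(rng.random(n))   # PPP point distances to the origin
        h = rng.exponential(size=n)           # Rayleigh fading power gains
        p = h * r ** (-alpha)                 # received powers
        k = int(np.argmin(r))                 # nearest-RRH association
        sir = p[k] / (p.sum() - p[k])
        covered += sir > thr
    return covered / trials
```

Sweeping `threshold_db` traces out the SIR complementary cumulative distribution function that the cited works derive analytically under the same PPP model.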
1511.02254 | 2140036133 | Learning a model of perceptual similarity from a collection of objects is a fundamental task in machine learning underlying numerous applications. A common way to learn such a model is from relative comparisons in the form of triplets: responses to queries of the form "Is object a more similar to b than it is to c?". If no consideration is made in the determination of which queries to ask, existing similarity learning methods can require a prohibitively large number of responses. In this work, we consider the problem of actively learning from triplets -- finding which queries are most useful for learning. Different from previous active triplet learning approaches, we incorporate auxiliary information into our similarity model and introduce an active learning scheme to find queries that are informative for quickly learning both the relevant aspects of auxiliary data and the directly-learned similarity components. Compared to prior approaches, we show that we can learn just as effectively with much fewer queries. For evaluation, we introduce a new dataset of exhaustive triplet comparisons obtained from humans and demonstrate improved performance for different types of auxiliary information. | Recent techniques in learning similarity from relative triplet feedback can be divided into two categories: nonparametric and parametric. Nonparametric methods @cite_8 @cite_0 @cite_7 attempt to learn a model using only triplet responses as input. Typically, these methods learn an embedding of objects, or equivalently, a kernel matrix directly modeling object correlation. Parametric methods @cite_15 @cite_1 learn a distance metric parameterized by a positive semidefinite matrix, using both triplet responses and a predefined representation of the objects. In @cite_14 , the authors combine these two methodologies to create a framework to learn kernel matrices.
However, their framework requires solving a semidefinite program, which is prohibitively expensive in an active learning setting, where it is necessary to learn a kernel iteratively each time new responses are received. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_0",
"@cite_15"
],
"mid": [
"1568444450",
"2088247287",
"85455704",
"2010990026",
"2951342632",
""
],
"abstract": [
"In this work we consider the problem of learning a positive semidefinite kernel matrix from relative comparisons of the form: \"object A is more similar to object B than it is to C\", where comparisons are given by humans. Existing solutions to this problem assume many comparisons are provided to learn a meaningful kernel. However, this can be considered unrealistic for many real-world tasks since a large amount of human input is often costly or difficult to obtain. Because of this, only a limited number of these comparisons may be provided. We propose a new kernel learning approach that supplements the few relative comparisons with \"auxiliary\" kernels built from more easily extractable features in order to learn a kernel that more completely models the notion of similarity gained from human feedback. Our proposed formulation is a convex optimization problem that adds only minor overhead to methods that use no auxiliary information. Empirical results show that in the presence of few training relative comparisons, our method can learn kernels that generalize to more out-of-sample comparisons than methods that do not utilize auxiliary information, as well as similar metric learning methods.",
"This paper considers the problem of learning an embedding of data based on similarity triplets of the form “A is more similar to B than to C”. This learning setting is of relevance to scenarios in which we wish to model human judgements on the similarity of objects. We argue that in order to obtain a truthful embedding of the underlying data, it is insufficient for the embedding to satisfy the constraints encoded by the similarity triplets. In particular, we introduce a new technique called t-Distributed Stochastic Triplet Embedding (t-STE) that collapses similar points and repels dissimilar points in the embedding — even when all triplet constraints are satisfied. Our experimental evaluation on three data sets shows that as a result, t-STE is much better than existing techniques at revealing the underlying data structure.",
"We consider the non-metric multidimensional scaling problem: given a set of dissimilarities ∆, find an embedding whose inter-point Euclidean distances have the same ordering as ∆. In this paper, we look at a generalization of this problem in which only a set of order relations of the form d_ij < d_kl are provided. Unlike the original problem, these order relations can be contradictory and need not be specified for all pairs of dissimilarities. We argue that this setting is more natural in some experimental settings and propose an algorithm based on convex optimization techniques to solve this problem. We apply this algorithm to human subject data from a psychophysics experiment concerning how reflectance properties are perceived. We also look at the standard NMDS problem, where a dissimilarity matrix ∆ is provided as input, and show that we can always find an order-respecting embedding of ∆.",
"The objective of sparse metric learning is to learn a distance measure from a set of data in addition to finding a low-dimensional representation. Despite demonstrated success, the performance of existing sparse metric learning approaches is usually limited because the methods assumes certain problem relaxations or they target the SML objective indirectly. In this paper, we propose a Generalized Sparse Metric Learning method. This novel framework offers a unified view for understanding many existing sparse metric learning algorithms including the Sparse Metric Learning framework proposed in (Rosales and Fung ACM International conference on knowledge discovery and data mining (KDD), pp 367–373, 2006), the Large Margin Nearest Neighbor ( in Advances in neural information processing systems (NIPS), 2006; Weinberger and Saul in Proceedings of the twenty-fifth international conference on machine learning (ICML-2008), 2008), and the D-ranking Vector Machine (D-ranking VM) (Ouyang and Gray in Proceedings of the twenty-fifth international conference on machine learning (ICML-2008), 2008). Moreover, GSML also establishes a close relationship with the Pairwise Support Vector Machine ( in BMC Bioinform, 8, 2007). Furthermore, the proposed framework is capable of extending many current non-sparse metric learning models to their sparse versions including Relevant Component Analysis (Bar- in J Mach Learn Res, 6:937–965, 2005) and a state-of-the-art method proposed in ( Advances in neural information processing systems (NIPS), 2002). We present the detailed framework, provide theoretical justifications, build various connections with other models, and propose an iterative optimization method, making the framework both theoretically important and practically scalable for medium or large datasets. 
Experimental results show that this generalized framework outperforms six state-of-the-art methods with higher accuracy and significantly smaller dimensionality for seven publicly available datasets.",
"We introduce an algorithm that, given n objects, learns a similarity matrix over all n^2 pairs, from crowdsourced data alone. The algorithm samples responses to adaptively chosen triplet-based relative-similarity queries. Each query has the form \"is object 'a' more similar to 'b' or to 'c'?\" and is chosen to be maximally informative given the preceding responses. The output is an embedding of the objects into Euclidean space (like MDS); we refer to this as the \"crowd kernel.\" SVMs reveal that the crowd kernel captures prominent and subtle features across a number of domains, such as \"is striped\" among neckties and \"vowel vs. consonant\" among letters.",
""
]
} |
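The remark above about semidefinite programming being expensive in the active setting refers to the positive semidefiniteness constraint on the learned kernel matrix. The following is a minimal sketch of the PSD projection step such kernel-learning methods must enforce at every iteration (eigenvalue clipping; `nearest_psd` is a hypothetical helper, not code from the cited work):

```python
import numpy as np

def nearest_psd(K):
    """Project a symmetric matrix onto the positive semidefinite cone in
    Frobenius norm by clipping negative eigenvalues to zero."""
    K = (K + K.T) / 2.0                       # symmetrize first
    vals, vecs = np.linalg.eigh(K)            # symmetric eigendecomposition
    return vecs @ np.diag(np.clip(vals, 0.0, None)) @ vecs.T
```

A full SDP solve repeats spectral operations like this inside an interior-point or projection loop, which is why redoing it each time new triplet responses arrive is costly.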
1511.02254 | 2140036133 | Learning a model of perceptual similarity from a collection of objects is a fundamental task in machine learning underlying numerous applications. A common way to learn such a model is from relative comparisons in the form of triplets: responses to queries of the form "Is object a more similar to b than it is to c?". If no consideration is made in the determination of which queries to ask, existing similarity learning methods can require a prohibitively large number of responses. In this work, we consider the problem of actively learning from triplets -- finding which queries are most useful for learning. Different from previous active triplet learning approaches, we incorporate auxiliary information into our similarity model and introduce an active learning scheme to find queries that are informative for quickly learning both the relevant aspects of auxiliary data and the directly-learned similarity components. Compared to prior approaches, we show that we can learn just as effectively with much fewer queries. For evaluation, we introduce a new dataset of exhaustive triplet comparisons obtained from humans and demonstrate improved performance for different types of auxiliary information. | In active learning @cite_13 , the goal is to select the most informative instances to learn from, in order to learn an accurate model from as little supervision as possible. Two methods have been developed for active triplet query selection. First, @cite_17 introduced a method which casts constraints from triplet responses as intersections of half spaces in a @math -dimensional space. Given a valid embedding, the method deems a query ambiguous if both possible responses result in half spaces whose intersections with the other constraints lead to non-empty cells -- i.e. it cannot infer the triplet from the current constraints.
In contrast, Adaptive Crowd Kernel Learning (A-CKL) @cite_0 selects queries by how likely they are to reduce uncertainty in the positions of each object in the learned embedding. In this work, we extend A-CKL to the case where objects have auxiliary information regarding their relationships and show that by doing so we can learn a more complete model of similarity from fewer triplet responses. | {
"cite_N": [
"@cite_0",
"@cite_13",
"@cite_17"
],
"mid": [
"2951342632",
"2903158431",
"1968265719"
],
"abstract": [
"We introduce an algorithm that, given n objects, learns a similarity matrix over all n^2 pairs, from crowdsourced data alone. The algorithm samples responses to adaptively chosen triplet-based relative-similarity queries. Each query has the form \"is object 'a' more similar to 'b' or to 'c'?\" and is chosen to be maximally informative given the preceding responses. The output is an embedding of the objects into Euclidean space (like MDS); we refer to this as the \"crowd kernel.\" SVMs reveal that the crowd kernel captures prominent and subtle features across a number of domains, such as \"is striped\" among neckties and \"vowel vs. consonant\" among letters.",
"",
"Low-dimensional embedding based on non-metric data (e.g., non-metric multidimensional scaling) is a problem that arises in many applications, especially those involving human subjects. This paper investigates the problem of learning an embedding of n objects into d-dimensional Euclidean space that is consistent with pairwise comparisons of the type “object a is closer to object b than c.” While there are O(n^3) such comparisons, experimental studies suggest that relatively few are necessary to uniquely determine the embedding up to the constraints imposed by all possible pairwise comparisons (i.e., the problem is typically over-constrained). This paper is concerned with quantifying the minimum number of pairwise comparisons necessary to uniquely determine an embedding up to all possible comparisons. The comparison constraints stipulate that, with respect to each object, the other objects are ranked relative to their proximity. We prove that at least Ω(dn log n) pairwise comparisons are needed to determine the embedding of all n objects. The lower bounds cannot be achieved by using randomly chosen pairwise comparisons. We propose an algorithm that exploits the low-dimensional geometry in order to accurately embed objects based on relatively small number of sequentially selected pairwise comparisons and demonstrate its performance with experiments."
]
} |
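A minimal sketch of learning an embedding from triplet responses, in the spirit of the nonparametric embedding methods discussed above (a simple hinge loss on squared distances with SGD; this is an illustrative stand-in, not A-CKL's probabilistic model):

```python
import numpy as np

def triplet_embed(n, triplets, dim=2, lr=0.05, margin=1.0, epochs=300, seed=0):
    """Learn a Euclidean embedding X of n objects from triplets (a, b, c)
    meaning "a is more similar to b than to c", by SGD on a hinge loss
    enforcing ||X_a - X_b||^2 + margin <= ||X_a - X_c||^2."""
    rng = np.random.default_rng(seed)
    X = rng.normal(scale=0.1, size=(n, dim))
    for _ in range(epochs):
        for a, b, c in triplets:
            dab, dac = X[a] - X[b], X[a] - X[c]
            if dab @ dab + margin > dac @ dac:  # constraint violated
                X[a] -= lr * 2.0 * (dab - dac)  # gradient of the hinge term
                X[b] += lr * 2.0 * dab          # pull b toward a
                X[c] -= lr * 2.0 * dac          # push c away from a
    return X
```

Active schemes like A-CKL would wrap a loop around this that scores candidate (a, b, c) queries by how much a response is expected to reduce uncertainty in X before asking them.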
1511.02283 | 2144960104 | We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https: github.com mjhucla Google_Refexp_toolbox. | Referring expression generation is a classic NLP problem (see e.g., @cite_52 @cite_17 ). Important issues include understanding what types of attributes people typically use to describe visual objects (such as color and size) @cite_51 , usage of higher-order relationships (e.g., spatial comparison) @cite_4 , and the phenomena of over and under-specification, which is also related to speaker variance @cite_37 . | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_52",
"@cite_51",
"@cite_17"
],
"mid": [
"2159149613",
"2040145958",
"2005814556",
"2250799982",
"2125447031"
],
"abstract": [
"We present a new approach to referring expression generation, casting it as a density estimation problem where the goal is to learn distributions over logical expressions identifying sets of objects in the world. Despite an extremely large space of possible expressions, we demonstrate effective learning of a globally normalized log-linear distribution. This learning is enabled by a new, multi-stage approximate inference technique that uses a pruning model to construct only the most likely logical forms. We train and evaluate the approach on a new corpus of references to sets of visual objects. Experiments show the approach is able to learn accurate models, which generate over 87% of the expressions people used. Additionally, on the previously studied special case of single object reference, we show a 35% relative error reduction over previous state of the art.",
"There is a prevailing assumption in the literature on referring expression generation that relations are used in descriptions only 'as a last resort', typically on the basis that including the second entity in the relation introduces an additional cognitive load for either speaker or hearer. In this paper, we describe an experiment that attempts to test this assumption; we determine that, even in simple scenes where the use of relations is not strictly required in order to identify an entity, relations are in fact often used. We draw some conclusions as to what this means for the development of algorithms for the generation of referring expressions.",
"Abstract This paper describes a computer system for understanding English. The system answers questions, executes commands, and accepts information in an interactive English dialog. It is based on the belief that in modeling language understanding, we must deal in an integrated way with all of the aspects of language—syntax, semantics, and inference. The system contains a parser, a recognition grammar of English, programs for semantic analysis, and a general problem solving system. We assume that a computer cannot deal reasonably with language unless it can understand the subject it is discussing. Therefore, the program is given a detailed model of a particular domain. In addition, the system has a simple model of its own mentality. It can remember and discuss its plans and actions as well as carrying them out. It enters into a dialog with a person, responding to English sentences with actions and English replies, asking for clarification when its heuristic programs cannot understand a sentence through the use of syntactic, semantic, contextual, and physical knowledge. Knowledge in the system is represented in the form of procedures, rather than tables of rules or lists of patterns. By developing special procedural representations for syntax, semantics, and inference, we gain flexibility and power. Since each piece of knowledge can be a procedure, it can call directly on any other piece of knowledge in the system.",
"Funding for this research has been provided by SICSA and ORSAS. We thank the anonymous reviewers for useful comments on this paper.",
"This article offers a survey of computational research on referring expression generation (REG). It introduces the REG problem and describes early work in this area, discussing what basic assumptions lie behind it, and showing how its remit has widened in recent years. We discuss computational frameworks underlying REG, and demonstrate a recent trend that seeks to link REG algorithms with well-established Knowledge Representation techniques. Considerable attention is given to recent efforts at evaluating REG algorithms and the lessons that they allow us to learn. The article concludes with a discussion of the way forward in REG, focusing on references in larger and more realistic settings."
]
} |
1511.02283 | 2144960104 | We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https: github.com mjhucla Google_Refexp_toolbox. | In most of this previous work, authors have focused on small datasets of computer generated objects (or photographs of simple objects) @cite_40 @cite_10 and have not connected their text generation systems to real vision systems. However there has been recent interest in understanding referring expressions in the context of complex real world images, for which humans tend to generate longer phrases @cite_25 . Kazemzadeh al @cite_21 were the first to collect a large scale dataset of referring expressions for complex real world photos. | {
"cite_N": [
"@cite_40",
"@cite_21",
"@cite_10",
"@cite_25"
],
"mid": [
"2128248292",
"2251512949",
"15689187",
"2250351336"
],
"abstract": [
"This paper discusses the construction of a corpus for the evaluation of algorithms that generate referring expressions. It is argued that such an evaluation task requires a semantically transparent corpus, and controlled experiments are the best way to create such a resource. We address a number of issues that have arisen in an ongoing evaluation study, among which is the problem of judging the output of GRE algorithms against a human gold standard.",
"In this paper we introduce a new game to crowd-source natural language referring expressions. By designing a two player game, we can both collect and verify referring expressions directly within the game. To date, the game has produced a dataset containing 130,525 expressions, referring to 96,654 distinct objects, in 19,894 photographs of natural scenes. This dataset is larger and more varied than previous REG datasets and allows us to study referring expressions in real-world scenes. We provide an in depth analysis of the resulting dataset. Based on our findings, we design a new optimization based model for generating referring expressions and perform experimental evaluations on 3 test sets.",
"This paper discusses the basic structures necessary for the generation of reference to objects in a visual scene. We construct a study designed to elicit naturalistic referring expressions to relatively complex objects, and find aspects of reference that have not been accounted for in work on Referring Expression Generation (REG). This includes reference to object parts, size comparisons without crisp measurements, and the use of analogies. By drawing on research in cognitive science, neurophysiology, and psycholinguistics, we begin developing the input structure and background knowledge necessary for an algorithm capable of generating the kinds of reference we observe.",
"Predicting the success of referring expressions (RE) is vital for real-world applications such as navigation systems. Traditionally, research has focused on studying Referring Expression Generation (REG) in virtual, controlled environments. In this paper, we describe a novel study of spatial references from real scenes rather than virtual. First, we investigate how humans describe objects in open, uncontrolled scenarios and compare our findings to those reported in virtual environments. We show that REs in real-world scenarios differ significantly from those in virtual worlds. Second, we propose a novel approach to quantifying image complexity when complete annotations are not present (e.g. due to poor object recognition capabilities), and third, we present a model for success prediction of REs for objects in real scenes. Finally, we discuss implications for Natural Language Generation (NLG) systems and future directions."
]
} |
1511.02283 | 2144960104 | We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https: github.com mjhucla Google_Refexp_toolbox. | We likewise collect and evaluate against a large scale dataset. However we go beyond expression generation and jointly learn both generation and comprehension models. And where prior works have had to explicitly enumerate attribute categories such as size, color (e.g. @cite_33 ) or manually list all possible visual phrases (e.g. @cite_28 ), our deep learning-based models are able to learn to directly generate surface expressions from raw images without having to first convert to a formal object attribute representation. | {
"cite_N": [
"@cite_28",
"@cite_33"
],
"mid": [
"2049705550",
"2094937109"
],
"abstract": [
"In this paper we introduce visual phrases, complex visual composites like “a person riding a horse”. Visual phrases often display significantly reduced visual complexity compared to their component objects, because the appearance of those objects can change profoundly when they participate in relations. We introduce a dataset suitable for phrasal recognition that uses familiar PASCAL object categories, and demonstrate significant experimental gains resulting from exploiting visual phrases. We show that a visual phrase detector significantly outperforms a baseline which detects component objects and reasons about relations, even though visual phrase training sets tend to be smaller than those for objects. We argue that any multi-class detection system must decode detector outputs to produce final results; this is usually done with non-maximum suppression. We describe a novel decoding procedure that can account accurately for local context without solving difficult inference problems. We show this decoding procedure outperforms the state of the art. Finally, we show that decoding a combination of phrasal and object detectors produces real improvements in detector results.",
"Many works in computer vision attempt to solve different tasks such as object detection, scene recognition or attribute detection, either separately or as a joint problem. In recent years, there has been a growing interest in combining the results from these different tasks in order to provide a textual description of the scene. However, when describing a scene, there are many items that can be mentioned. If we include all the objects, relationships, and attributes that exist in the image, the description would be extremely long and not convey a true understanding of the image. We present a novel approach to ranking the importance of the items to be described. Specifically, we focus on the task of discriminating one image from a group of others. We investigate the factors that contribute to the most efficient description that achieves this task. We also provide a quantitative method to measure the description quality for this specific task using data from human subjects and show that our method achieves better results than baseline methods."
]
} |
1511.02283 | 2144960104 | We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https: github.com mjhucla Google_Refexp_toolbox. | Concurrently, @cite_55 propose a CNN-RNN based method that is similar to our baseline model and achieve state-of-the-art results on the ReferIt dataset @cite_21 . But they did not use the discriminative training strategy proposed in our full model. @cite_43 @cite_34 investigate the task of generating dense descriptions in an image. But their descriptions are not required to be unambiguous. | {
"cite_N": [
"@cite_43",
"@cite_55",
"@cite_21",
"@cite_34"
],
"mid": [
"2963758027",
"2963735856",
"2251512949",
"2949474740"
],
"abstract": [
"We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings.",
"In this paper, we address the task of natural language object retrieval, to localize a target object within a given image based on a natural language query of the object. Natural language object retrieval differs from text-based image retrieval task as it involves spatial information about objects within the scene and global scene context. To address this issue, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for the box, and can transfer visual-linguistic knowledge from image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large scale vision and language datasets for knowledge transfer.",
"In this paper we introduce a new game to crowd-source natural language referring expressions. By designing a two player game, we can both collect and verify referring expressions directly within the game. To date, the game has produced a dataset containing 130,525 expressions, referring to 96,654 distinct objects, in 19,894 photographs of natural scenes. This dataset is larger and more varied than previous REG datasets and allows us to study referring expressions in real-world scenes. We provide an in depth analysis of the resulting dataset. Based on our findings, we design a new optimization based model for generating referring expressions and perform experimental evaluations on 3 test sets.",
"Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) in order to answer correctly that \"the person is riding a horse-drawn carriage\". In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 100K images where each image has an average of 21 objects, 18 attributes, and 18 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answers."
]
} |
1511.02283 | 2144960104 | We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https: github.com mjhucla Google_Refexp_toolbox. | Our methods are inspired by a long line of inquiry in joint models of images and text, primarily in the vision and learning communities @cite_31 @cite_44 @cite_24 @cite_56 @cite_26 @cite_38 @cite_27 . From a modeling perspective, our approach is closest to recent works applying RNNs and CNNs to this problem domain @cite_8 @cite_35 @cite_54 @cite_29 @cite_16 @cite_15 @cite_0 @cite_53 . The main approach in these papers is to represent the image content using the hidden activations of a CNN, and then to feed this as input to an RNN, which is trained to generate a sequence of words. | {
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_26",
"@cite_8",
"@cite_15",
"@cite_54",
"@cite_29",
"@cite_53",
"@cite_56",
"@cite_44",
"@cite_24",
"@cite_27",
"@cite_0",
"@cite_31",
"@cite_16"
],
"mid": [
"1858383477",
"2951183276",
"",
"1895577753",
"1527575280",
"1895989618",
"1931639407",
"2950178297",
"2109586012",
"68733909",
"2149557440",
"1687846465",
"2962706528",
"1897761818",
"2951805548"
],
"abstract": [
"We propose a sentence generation strategy that describes images by predicting the most likely nouns, verbs, scenes and prepositions that make up the core sentence structure. The input are initial noisy estimates of the objects and scenes detected in the image using state of the art trained detectors. As predicting actions from still images directly is unreliable, we use a language model trained from the English Gigaword corpus to obtain their estimates; together with probabilities of co-located nouns, scenes and prepositions. We use these estimates as parameters on a HMM that models the sentence generation process, with hidden nodes as sentence components and image detections as the emissions. Experimental results show that our strategy of combining vision and language produces readable and descriptive sentences compared to naive strategies that use vision alone.",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.",
"Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - \"blue\" + \"red\" is near images of red cars. Sample captions generated for 800 images are made available for comparison.",
"In this paper we explore the bi-directional mapping between images and their sentence-based descriptions. Critical to our approach is a recurrent neural network that attempts to dynamically build a visual representation of the scene as a caption is being generated or read. The representation automatically learns to remember long-term visual concepts. Our model is capable of both generating novel captions given an image, and reconstructing visual features given an image description. We evaluate our approach on several tasks. These include sentence generation, sentence retrieval and image retrieval. State-of-the-art results are shown for the task of generating novel image descriptions. When compared to human generated captions, our automatically generated captions are equal to or preferred by humans 21.0 of the time. Results are better than or comparable to state-of-the-art results on the image and sentence retrieval tasks for methods using similar visual features.",
"This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1 . When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34 of the time.",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"We develop and demonstrate automatic image description methods using a large captioned photo collection. One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results. We also develop methods incorporating many state of the art, but fairly noisy, estimates of image content to produce even more pleasing results. Finally we introduce a new objective performance measure for image captioning.",
"The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.",
"Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.",
"Studying natural language, and especially how people describe the world around them can help us better understand the visual world. In turn, it can also help us in the quest to generate natural language that describes this world in a human manner. We present a simple yet effective approach to automatically compose image descriptions given computer vision based inputs and using web-scale n-grams. Unlike most previous work that summarizes or retrieves pre-existing text relevant to an image, our method composes sentences entirely from scratch. Experimental results indicate that it is viable to generate simple textual descriptions that are pertinent to the specific content of an image, while permitting creativity in the description -- making for more human-like annotations than previous approaches.",
"Abstract: In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu junhua.mao m-RNN.html .",
"Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned us-ingdata. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations."
]
} |
1511.02283 | 2144960104 | We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https: github.com mjhucla Google_Refexp_toolbox. | Most papers on image captioning have focused on describing the full image, without any spatial localization. However, we are aware of two exceptions. @cite_53 propose an attention model which is able to associate words to spatial regions within an image; however, they still focus on the full image captioning task. @cite_16 propose a model for aligning words and short phrases within sentences to bounding boxes; they then train a model to generate these short snippets given features of the bounding box. Their model is similar to our baseline model, described in (except we provide the alignment of phrases to boxes in the training set, similar to @cite_45 ). However, we show that this approach is not as good as our full model, which takes into account other potentially confusing regions in the image. | {
"cite_N": [
"@cite_53",
"@cite_16",
"@cite_45"
],
"mid": [
"2950178297",
"2951805548",
""
],
"abstract": [
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
""
]
} |
1511.02283 | 2144960104 | We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https: github.com mjhucla Google_Refexp_toolbox. | Referring expressions are related to the task of VQA (see e.g., @cite_3 @cite_7 @cite_5 @cite_32 @cite_13 ). In particular, referring expression comprehension can be turned into a VQA task where the speaker asks a question such as ``where in the image is the car in red?'' and the system must return a bounding box (so the answer is numerical, not linguistic). However, there are philosophical and practical differences between the two tasks. A referring expression (and language in general) is about communication --- in our problem, the speaker is finding the optimal way to communicate to the listener, whereas VQA work typically focuses only on answering questions without regard to the listener's state of mind. Additionally, since questions tend to be more open-ended in VQA, evaluating their answers can be as hard as with general image captioning, whereas evaluating the accuracy of a bounding box is easy. | {
"cite_N": [
"@cite_7",
"@cite_32",
"@cite_3",
"@cite_5",
"@cite_13"
],
"mid": [
"2151498684",
"1983927101",
"2950761309",
"2952246170",
"2963082528"
],
"abstract": [
"We propose a method for automatically answering questions about images by bringing together recent advances from natural language processing and computer vision. We combine discrete reasoning with uncertain predictions by a multi-world approach that represents uncertainty about the perceived world in a bayesian framework. Our approach can handle human questions of high complexity about realistic scenes and replies with range of answer like counts, object classes, instances and lists of them. The system is directly trained from question-answer pairs. We establish a first benchmark for this task that can be seen as a modern attempt at a visual turing test.",
"Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects.",
"We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).",
"We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus.",
"In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They will also provide a score (i.e. 0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7 of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: http: idl.baidu.com FM-IQA.html."
]
} |
1511.02290 | 2264203206 | A recommender system is an information filtering technology which can be used to predict preference ratings of items (products, services, movies, etc.) and/or to output a ranking of items that are likely to be of interest to the user. Context-aware recommender systems (CARS) learn and predict the tastes and preferences of users by incorporating available contextual information in the recommendation process. One of the major challenges in context-aware recommender systems research is the lack of automatic methods to obtain contextual information for these systems. Considering this scenario, in this paper, we propose to use contextual information from topic hierarchies of the items (web pages) to improve the performance of context-aware recommender systems. The topic hierarchies are constructed by an extension of the LUPI-based Incremental Hierarchical Clustering method that considers three types of information: traditional bag-of-words (technical information), and the combination of named entities (privileged information I) with domain terms (privileged information II). We evaluated the contextual information in four context-aware recommender systems. Different weights were assigned to each type of information. The empirical results demonstrated that topic hierarchies with the combination of the two kinds of privileged information can provide better recommendations. | There are three different ways to acquire contextual information: explicitly, implicitly, and by inference @cite_22 . The explicit acquisition methods collect the contextual information through direct questions to the users. The implicit acquisition methods get contextual information directly from Web data or the environment. The inference methods obtain contextual information using data and text mining techniques. In this paper, we infer context from web pages using text mining techniques. In the following, some related works are presented. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2130868038"
],
"abstract": [
"The importance of contextual information has been recognized by researchers and practitioners in many disciplines, including e-commerce personalization, information retrieval, ubiquitous and mobile computing, data mining, marketing, and management. While a substantial amount of research has already been performed in the area of recommender systems, most existing approaches focus on recommending the most relevant items to users without taking into account any additional contextual information, such as time, location, or the company of other people (e.g., for watching movies or dining out). In this chapter we argue that relevant contextual information does matter in recommender systems and that it is important to take this information into account when providing recommendations. We discuss the general notion of context and how it can be modeled in recommender systems. Furthermore, we introduce three different algorithmic paradigms – contextual prefiltering, post-filtering, and modeling – for incorporating contextual information into the recommendation process, discuss the possibilities of combining several contextaware recommendation techniques into a single unifying approach, and provide a case study of one such combined approach. Finally, we present additional capabilities for context-aware recommenders and discuss important and promising directions for future research."
]
} |
1511.02290 | 2264203206 | A recommender system is an information filtering technology which can be used to predict preference ratings of items (products, services, movies, etc.) and/or to output a ranking of items that are likely to be of interest to the user. Context-aware recommender systems (CARS) learn and predict the tastes and preferences of users by incorporating available contextual information in the recommendation process. One of the major challenges in context-aware recommender systems research is the lack of automatic methods to obtain contextual information for these systems. Considering this scenario, in this paper, we propose to use contextual information from topic hierarchies of the items (web pages) to improve the performance of context-aware recommender systems. The topic hierarchies are constructed by an extension of the LUPI-based Incremental Hierarchical Clustering method that considers three types of information: traditional bag-of-words (technical information), and the combination of named entities (privileged information I) with domain terms (privileged information II). We evaluated the contextual information in four context-aware recommender systems. Different weights were assigned to each type of information. The empirical results demonstrated that topic hierarchies with the combination of the two kinds of privileged information can provide better recommendations. | In @cite_8 , the authors proposed methods to extract contextual information from online reviews. They investigated available restaurant review data and four types of contextual information for a meal: the company (if the meal involved multiple people), occasion (for which occasion the event was held), time (at what time of day), and location (in which city the event took place). They developed their algorithms by using existing natural language processing tools such as the GATE tool (http://gate.ac.uk). 
@cite_24 introduced a context-aware recommendation system that obtains contextual information by mining hotel reviews made by users, and combines it with the users' rating history to calculate a utility function over a set of items. They used a hotel review dataset from TripAdvisor (http://www.tripadvisor.com). | {
"cite_N": [
"@cite_24",
"@cite_8"
],
"mid": [
"2156103833",
"1937641174"
],
"abstract": [
"Recommender systems (RS) play an important role in many areas including targeted advertising, personalized marketing and information retrieval. In recent years, the importance of contextual information has attracted many researchers to focus on designing systems that produce personalized recommendations in accordance with available contextual information of users. Comparing with traditional RS which mainly utilize users’ rating history, the context# aware recommender systems (CARS) can result in better performance in various applications. In this paper, we present a context#aware recommender system that extracts contextual information from a textual description of user current situation and use it in combination with user ratings history to compute a utility function over the set of items. The item utility shows how much it is preferable regarding user current context. In our system, the context inference is modeled as a supervised topic#modeling problem in which the set of categories for a contextual attribute constitutes the topic set. As an example application, we used our method to mine hidden contextual data from customers' reviews for hotels and use it to produce context#aware recommendations. Our evaluations suggest that our system can help produce better recommendations in comparison to a standard kNN recommender system.",
"The potential benefit of integrating contextual information for recommendation has received much research attention recently, especially with the ever-increasing interest in mobile-based recommendation services. However, context based recommendation research is limited due to the lack of standard evaluation data with contextual information and reliable technology for extracting such information. As a result, there are no widely accepted conclusions on how, when and whether context helps. Additionally, a system often suffers from the so called cold start problem due to the lack of data for training the initial context based recommendation model. This paper proposes a novel solution to address these problems with automated information extraction techniques. We also compare several approaches for utilizing context based on a new data set collected using the proposed solution. The experimental results demonstrate that 1) IE-based techniques can help create a large scale context data with decent quality from online reviews, at least for restaurant recommendations; 2) context helps recommender systems rank items, however, does not help predict user ratings; 3) simply using context to filter items hurts recommendation performance, while a new probabilistic latent relational model we proposed helps."
]
} |
1511.02290 | 2264203206 | A recommender system is an information filtering technology which can be used to predict preference ratings of items (products, services, movies, etc.) and/or to output a ranking of items that are likely to be of interest to the user. Context-aware recommender systems (CARS) learn and predict the tastes and preferences of users by incorporating available contextual information in the recommendation process. One of the major challenges in context-aware recommender systems research is the lack of automatic methods to obtain contextual information for these systems. Considering this scenario, in this paper, we propose to use contextual information from topic hierarchies of the items (web pages) to improve the performance of context-aware recommender systems. The topic hierarchies are constructed by an extension of the LUPI-based Incremental Hierarchical Clustering method that considers three types of information: traditional bag-of-words (technical information), and the combination of named entities (privileged information I) with domain terms (privileged information II). We evaluated the contextual information in four context-aware recommender systems. Different weights were assigned to each type of information. The empirical results demonstrated that topic hierarchies with the combination of the two kinds of privileged information can provide better recommendations. | The methods proposed by @cite_8 and @cite_24 assume there is explicit contextual information in reviews, and such information is obtained for each review by mapping it to labels. Therefore, they use supervised methods to learn the labels. The advantage of our proposal is that it exploits unsupervised methods to learn topic hierarchies; hence, it does not need a mapping between reviews and labels. | {
"cite_N": [
"@cite_24",
"@cite_8"
],
"mid": [
"2156103833",
"1937641174"
],
"abstract": [
"Recommender systems (RS) play an important role in many areas including targeted advertising, personalized marketing and information retrieval. In recent years, the importance of contextual information has attracted many researchers to focus on designing systems that produce personalized recommendations in accordance with available contextual information of users. Comparing with traditional RS which mainly utilize users’ rating history, the context# aware recommender systems (CARS) can result in better performance in various applications. In this paper, we present a context#aware recommender system that extracts contextual information from a textual description of user current situation and use it in combination with user ratings history to compute a utility function over the set of items. The item utility shows how much it is preferable regarding user current context. In our system, the context inference is modeled as a supervised topic#modeling problem in which the set of categories for a contextual attribute constitutes the topic set. As an example application, we used our method to mine hidden contextual data from customers' reviews for hotels and use it to produce context#aware recommendations. Our evaluations suggest that our system can help produce better recommendations in comparison to a standard kNN recommender system.",
"The potential benefit of integrating contextual information for recommendation has received much research attention recently, especially with the ever-increasing interest in mobile-based recommendation services. However, context based recommendation research is limited due to the lack of standard evaluation data with contextual information and reliable technology for extracting such information. As a result, there are no widely accepted conclusions on how, when and whether context helps. Additionally, a system often suffers from the so called cold start problem due to the lack of data for training the initial context based recommendation model. This paper proposes a novel solution to address these problems with automated information extraction techniques. We also compare several approaches for utilizing context based on a new data set collected using the proposed solution. The experimental results demonstrate that 1) IE-based techniques can help create a large scale context data with decent quality from online reviews, at least for restaurant recommendations; 2) context helps recommender systems rank items, however, does not help predict user ratings; 3) simply using context to filter items hurts recommendation performance, while a new probabilistic latent relational model we proposed helps."
]
} |
1511.02290 | 2264203206 | A recommender system is an information filtering technology which can be used to predict preference ratings of items (products, services, movies, etc.) and/or to output a ranking of items that are likely to be of interest to the user. Context-aware recommender systems (CARS) learn and predict the tastes and preferences of users by incorporating available contextual information in the recommendation process. One of the major challenges in context-aware recommender systems research is the lack of automatic methods to obtain contextual information for these systems. Considering this scenario, in this paper, we propose to use contextual information from topic hierarchies of the items (web pages) to improve the performance of context-aware recommender systems. The topic hierarchies are constructed by an extension of the LUPI-based Incremental Hierarchical Clustering method that considers three types of information: traditional bag-of-words (technical information), and the combination of named entities (privileged information I) with domain terms (privileged information II). We evaluated the contextual information in four context-aware recommender systems. Different weights were assigned to each type of information. The empirical results demonstrated that topic hierarchies with the combination of the two kinds of privileged information can provide better recommendations. | @cite_20 proposed an approach to mine future spatiotemporal events from news articles, and thus provide information for location-aware recommendation systems. A future event consists of its geographic location, temporal pattern, sentiment variable, news title, key phrase, and news article URL. In addition, their method is unsupervised and also extracts topics. | {
"cite_N": [
"@cite_20"
],
"mid": [
"1973375669"
],
"abstract": [
"The future-related information mining task for online web resources such as news articles and blogs has been getting more attention due to its potential usefulness in supporting individual's decision making in a world where massive new data are generated daily. Instead of building a data-driven model to predict the future, one extracts future events from these massive data with high probability that they occur at a future time and a specific geographic location. Such spatiotemporal future events can be utilized by a recommender system on a location-aware device to provide localized future event suggestions. In this paper, we describe a systematic approach for mining future spatiotemporal events from web; in particular, news articles. In our application context, a valid event is defined both spatially and temporally. The mining procedure consists of two main steps: recognition and matching. For the recognition step, we identify and resolve toponyms (geographic location) and future temporal patterns. In the matching step, we perform spatiotemporal disambiguation, de-duplication, and pairing. To provide more useful future event guidance, we attach to each event a sentiment linguistic variable: positive, negative, or neutral, so that one may use these extracted event information for recommendation purposes in the form of \"avoid Event A\" or \"avoid geographic location L at time T\" or \"attend Event B\" based on the event sentiment. The identified future event consists of its geographic location, temporal pattern, sentiment variable, news title, key phrase, and news article URL. Experimental results on 3652 news articles from 21 online new sources collected over a 2-week period in the Greater Washington area are used to illustrate some of the critical steps in our mining procedure."
]
} |
1511.02290 | 2264203206 | A recommender system is an information filtering technology which can be used to predict preference ratings of items (products, services, movies, etc.) and/or to output a ranking of items that are likely to be of interest to the user. Context-aware recommender systems (CARS) learn and predict the tastes and preferences of users by incorporating available contextual information in the recommendation process. One of the major challenges in context-aware recommender systems research is the lack of automatic methods to obtain contextual information for these systems. Considering this scenario, in this paper, we propose to use contextual information from topic hierarchies of the items (web pages) to improve the performance of context-aware recommender systems. The topic hierarchies are constructed by an extension of the LUPI-based Incremental Hierarchical Clustering method that considers three types of information: traditional bag-of-words (technical information), and the combination of named entities (privileged information I) with domain terms (privileged information II). We evaluated the contextual information in four context-aware recommender systems. Different weights were assigned to each type of information. The empirical results demonstrated that topic hierarchies with the combination of the two kinds of privileged information can provide better recommendations. | In @cite_20 , the contextual information that is extracted is related to time and location. The time information is extracted from the timestamp of the article publication. To extract location information, they also used named entity recognition. However, they did not evaluate the impact of the contextual information that they extracted on the recommender systems. The authors only presented some results about the evaluation of the context extraction process. | {
"cite_N": [
"@cite_20"
],
"mid": [
"1973375669"
],
"abstract": [
"The future-related information mining task for online web resources such as news articles and blogs has been getting more attention due to its potential usefulness in supporting individual's decision making in a world where massive new data are generated daily. Instead of building a data-driven model to predict the future, one extracts future events from these massive data with high probability that they occur at a future time and a specific geographic location. Such spatiotemporal future events can be utilized by a recommender system on a location-aware device to provide localized future event suggestions. In this paper, we describe a systematic approach for mining future spatiotemporal events from web; in particular, news articles. In our application context, a valid event is defined both spatially and temporally. The mining procedure consists of two main steps: recognition and matching. For the recognition step, we identify and resolve toponyms (geographic location) and future temporal patterns. In the matching step, we perform spatiotemporal disambiguation, de-duplication, and pairing. To provide more useful future event guidance, we attach to each event a sentiment linguistic variable: positive, negative, or neutral, so that one may use these extracted event information for recommendation purposes in the form of \"avoid Event A\" or \"avoid geographic location L at time T\" or \"attend Event B\" based on the event sentiment. The identified future event consists of its geographic location, temporal pattern, sentiment variable, news title, key phrase, and news article URL. Experimental results on 3652 news articles from 21 online new sources collected over a 2-week period in the Greater Washington area are used to illustrate some of the critical steps in our mining procedure."
]
} |
1511.02030 | 2170997892 | This article presents ALOJA-Machine Learning (ALOJA-ML), an extension to the ALOJA project that uses machine learning techniques to interpret Hadoop benchmark performance data and performance tuning; here we detail the approach, the efficacy of the model, and initial results. The ALOJA-ML project is the latest phase of a long-term collaboration between BSC and Microsoft, to automate the characterization of cost-effectiveness on Big Data deployments, focusing on Hadoop. Hadoop presents a complex execution environment, where costs and performance depend on a large number of software (SW) configurations and on multiple hardware (HW) deployment choices. Recently the ALOJA project presented an open, vendor-neutral repository, featuring over 16,000 Hadoop executions. These results are accompanied by a test bed and tools to deploy and evaluate the cost-effectiveness of the different hardware configurations, parameter tunings, and Cloud services. Despite early success within ALOJA from expert-guided benchmarking, it became clear that a genuinely comprehensive study requires automation of modeling procedures to allow a systematic analysis of large and resource-constrained search spaces. ALOJA-ML provides such an automated system allowing knowledge discovery by modeling Hadoop executions from observed benchmarks across a broad set of configuration parameters. The resulting empirically-derived performance models can be used to forecast execution behavior of various workloads; they allow a-priori prediction of the execution times for new configurations and HW choices and they offer a route to model-based anomaly detection. In addition, these models can guide the benchmarking exploration efficiently, by automatically prioritizing candidate future benchmark tests. Insights from ALOJA-ML's models can be used to reduce the operational time on clusters, speed-up the data acquisition and knowledge discovery process, and importantly, reduce running costs. 
In addition to learning from the methodology presented in this work, the community can benefit in general from ALOJA data-sets, framework, and derived insights to improve the design and deployment of Big Data applications. | The emergence and adoption of Hadoop by the industry have led to various attempts at performance tuning and optimization, schemes for data distribution or partitioning, and/or adjustments in HW configurations to increase scalability or reduce running costs. For most deployments, execution performance can be improved at least 3x over the default configuration @cite_0 @cite_8 . A significant challenge remains: to characterize these deployments and their performance, looking for the optimal configuration in each case. There is also evidence that Hadoop performs poorly with newer and scale-up hardware @cite_18 . Scaling out in number of servers can usually improve performance, but at increased cost, power and space usage @cite_18 . These situations and the available services make a case to reconsider scale-up hardware and new Cloud services from both a research and an industry perspective. | {
"cite_N": [
"@cite_0",
"@cite_18",
"@cite_8"
],
"mid": [
"",
"2141249441",
"2189125735"
],
"abstract": [
"",
"In the last decade we have seen a huge deployment of cheap clusters to run data analytics workloads. The conventional wisdom in industry and academia is that scaling out using a cluster of commodity machines is better for these workloads than scaling up by adding more resources to a single server. Popular analytics infrastructures such as Hadoop are aimed at such a cluster scale-out environment. Is this the right approach? Our measurements as well as other recent work shows that the majority of real-world analytic jobs process less than 100 GB of input, but popular infrastructures such as Hadoop MapReduce were originally designed for petascale processing. We claim that a single \"scale-up\" server can process each of these jobs and do as well or better than a cluster in terms of performance, cost, power, and server density. We present an evaluation across 11 representative Hadoop jobs that shows scale-up to be competitive in all cases and significantly better in some cases, than scale-out. To achieve that performance, we describe several modifications to the Hadoop runtime that target scale-up configuration. These changes are transparent, do not require any changes to application code, and do not compromise scale-out performance; at the same time our evaluation shows that they do significantly improve Hadoop's scale-up performance.",
"Hadoop represents a Java-based distributed computing framework that is designed to support applications that are implemented via the MapReduce programming model. In general, workload dependent Hadoop performance optimization efforts have to focus on 3 major categories: the systems HW, the systems SW, and the configuration and tuning optimization of the Hadoop infrastructure components. From a systems HW perspective, it is paramount to balance the appropriate HW components in regards to performance, scalability, and cost. It has to be pointed out that Hadoop is classified as a highly-scalable, but not necessarily as a high-performance cluster solution. From a SW perspective, the choice of the OS, the JVM, the specific Hadoop version, as well as other SW components necessary to run the Hadoop setup do have a profound impact on performance and stability of the environment. The design, setup, configuration, and tuning phase of any Hadoop project is paramount to fully benefit from the distributed Hadoop HW and SW solution stack."
]
} |
1511.02030 | 2170997892 | This article presents ALOJA-Machine Learning (ALOJA-ML), an extension to the ALOJA project that uses machine learning techniques to interpret Hadoop benchmark performance data and performance tuning; here we detail the approach, the efficacy of the model, and initial results. The ALOJA-ML project is the latest phase of a long-term collaboration between BSC and Microsoft, to automate the characterization of cost-effectiveness on Big Data deployments, focusing on Hadoop. Hadoop presents a complex execution environment, where costs and performance depend on a large number of software (SW) configurations and on multiple hardware (HW) deployment choices. Recently the ALOJA project presented an open, vendor-neutral repository, featuring over 16,000 Hadoop executions. These results are accompanied by a test bed and tools to deploy and evaluate the cost-effectiveness of the different hardware configurations, parameter tunings, and Cloud services. Despite early success within ALOJA from expert-guided benchmarking, it became clear that a genuinely comprehensive study requires automation of modeling procedures to allow a systematic analysis of large and resource-constrained search spaces. ALOJA-ML provides such an automated system allowing knowledge discovery by modeling Hadoop executions from observed benchmarks across a broad set of configuration parameters. The resulting empirically-derived performance models can be used to forecast execution behavior of various workloads; they allow a-priori prediction of the execution times for new configurations and HW choices and they offer a route to model-based anomaly detection. In addition, these models can guide the benchmarking exploration efficiently, by automatically prioritizing candidate future benchmark tests. Insights from ALOJA-ML's models can be used to reduce the operational time on clusters, speed-up the data acquisition and knowledge discovery process, and importantly, reduce running costs. 
In addition to learning from the methodology presented in this work, the community can benefit in general from ALOJA data-sets, framework, and derived insights to improve the design and deployment of Big Data applications. | Previous research focused on the need for tuning Hadoop configurations to match specific workload requirements; for example, the Starfish Project from H. Herodotou @cite_12 proposed to observe Hadoop execution behaviors and use profiles to recommend configurations for similar workload types. This approach is a useful reference for ALOJA-ML when modeling Hadoop behaviors from observed executions; in contrast, we have sought to use machine learning methods to characterize the execution behavior across a large corpus of profiling data. | {
"cite_N": [
"@cite_12"
],
"mid": [
"1834532152"
],
"abstract": [
"Timely and cost-effective analytics over “Big Data” is now a key ingredient for success in many businesses, scientific and engineering disciplines, and government endeavors. The Hadoop software stack—which consists of an extensible MapReduce execution engine, pluggable distributed storage engines, and a range of procedural to declarative interfaces—is a popular choice for big data analytics. Most practitioners of big data analytics—like computational scientists, systems researchers, and business analysts—lack the expertise to tune the system to get good performance. Unfortunately, Hadoop’s performance out of the box leaves much to be desired, leading to suboptimal use of resources, time, and money (in payas-you-go clouds). We introduce Starfish, a self-tuning system for big data analytics. Starfish builds on Hadoop while adapting to user needs and system workloads to provide good performance automatically, without any need for users to understand and manipulate the many tuning knobs in Hadoop. While Starfish’s system architecture is guided by work on self-tuning database systems, we discuss how new analysis practices over big data pose new challenges; leading us to different design choices in Starfish."
]
} |
1511.02030 | 2170997892 | This article presents ALOJA-Machine Learning (ALOJA-ML), an extension to the ALOJA project that uses machine learning techniques to interpret Hadoop benchmark performance data and performance tuning; here we detail the approach, the efficacy of the model, and initial results. The ALOJA-ML project is the latest phase of a long-term collaboration between BSC and Microsoft, to automate the characterization of cost-effectiveness on Big Data deployments, focusing on Hadoop. Hadoop presents a complex execution environment, where costs and performance depend on a large number of software (SW) configurations and on multiple hardware (HW) deployment choices. Recently the ALOJA project presented an open, vendor-neutral repository, featuring over 16,000 Hadoop executions. These results are accompanied by a test bed and tools to deploy and evaluate the cost-effectiveness of the different hardware configurations, parameter tunings, and Cloud services. Despite early success within ALOJA from expert-guided benchmarking, it became clear that a genuinely comprehensive study requires automation of modeling procedures to allow a systematic analysis of large and resource-constrained search spaces. ALOJA-ML provides such an automated system allowing knowledge discovery by modeling Hadoop executions from observed benchmarks across a broad set of configuration parameters. The resulting empirically-derived performance models can be used to forecast execution behavior of various workloads; they allow a-priori prediction of the execution times for new configurations and HW choices and they offer a route to model-based anomaly detection. In addition, these models can guide the benchmarking exploration efficiently, by automatically prioritizing candidate future benchmark tests. Insights from ALOJA-ML's models can be used to reduce the operational time on clusters, speed-up the data acquisition and knowledge discovery process, and importantly, reduce running costs. 
In addition to learning from the methodology presented in this work, the community can benefit in general from ALOJA data-sets, framework, and derived insights to improve the design and deployment of Big Data applications. | Some approaches on autonomic computing already tackled the idea of using machine learning for modeling system behavior vs. hardware or software configuration; e.g., works on self-configuration such as J. Wildstrom's @cite_17 , which used machine learning for hardware reconfiguration on large data-center systems. Similarly, P. Shivam's NIMO framework @cite_15 modeled computational-science applications, allowing prediction of their execution time in grid infrastructures. Such efforts are precedents of successful applications of machine learning modeling and prediction in distributed systems workload management. Here we apply such methodologies, not to directly manage the system, but rather to allow users, engineers and operators to learn about their workloads in a distributed Hadoop environment. | {
"cite_N": [
"@cite_15",
"@cite_17"
],
"mid": [
"2165844343",
"2097340113"
],
"abstract": [
"We present the NIMO system that automatically learns cost models for predicting the execution time of computational-science applications running on large-scale networked utilities such as computational grids. Accurate cost models are important for selecting efficient plans for executing these applications on the utility. Computational-science applications are often scripts (written, e.g., in languages like Perl or Matlab) connected using a workflow-description language, and therefore, pose different challenges compared to modeling the execution of plans for declarative queries with well-understood semantics. NIMO generates appropriate training samples for these applications to learn fairly-accurate cost models quickly using statistical learning techniques. NIMO's approach is active and noninvasive: it actively deploys and monitors the application under varying conditions, and obtains its training data from passive instrumentation streams that require no changes to the operating system or applications. Our experiments with real scientific applications demonstrate that NIMO significantly reduces the number of training samples and the time to learn fairly-accurate cost models.",
"As computer systems continue to increase in complexity, the need for AI-based solutions is becoming more urgent. For example, high-end servers that can be partitioned into logical subsystems and repartitioned on the fly are now becoming available. This development raises the possibility of reconfiguring distributed systems online to optimize for dynamically changing workloads. However, it also introduces the need to decide when and how to reconfigure. This paper presents one approach to solving this online reconfiguration problem. In particular, we learn to identify, from only low-level system statistics, which of a set of possible configurations will lead to better performance under the current unknown workload. This approach requires no instrumentation of the system's middleware or operating systems. We introduce an agent that is able to learn this model and use it to switch configurations online as the workload varies. Our agent is fully implemented and tested on a publicly available multi-machine, multi-process distributed system (the online transaction processing benchmark TPC-W). We demonstrate that our adaptive configuration is able to outperform any single fixed configuration in the set over a variety of workloads, including gradual changes and abrupt workload spikes."
]
} |
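The ALOJA-ML record above describes learning empirical performance models from observed benchmark executions in order to forecast execution times of new configurations. A minimal nearest-neighbour sketch of that idea is shown below; the configuration features and timings are invented for illustration, and ALOJA-ML itself uses richer learners (e.g. regression trees) over its benchmark repository.

```python
def knn_predict_time(history, query, k=3):
    """Predict execution time of an unseen configuration as the mean time
    of the k most similar past executions (Euclidean distance on features)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda rec: dist(rec[0], query))[:k]
    return sum(t for _, t in nearest) / k

# Toy history of (config features = [map tasks, sort buffer MB, replication], runtime in s).
history = [
    ([4, 100, 3], 520.0),
    ([8, 100, 3], 310.0),
    ([8, 200, 3], 290.0),
    ([16, 200, 2], 200.0),
]
estimate = knn_predict_time(history, [8, 150, 3], k=2)  # averages the two nearest configs
```

The same model can rank untested configurations by predicted runtime, which is how model-based benchmark prioritization can be driven.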
1511.01699 | 2608141914 | Low rank matrix approximation is an important tool in machine learning. Given a data matrix, low rank approximation helps to find factors, patterns and provides concise representations for the data. Research on low rank approximation usually focus on real matrices. However, in many applications data are binary (categorical) rather than continuous. This leads to the problem of low rank approximation of binary matrix. Here we are given a @math binary matrix @math and a small integer @math . The goal is to find two binary matrices @math and @math of sizes @math and @math respectively, so that the Frobenius norm of @math is minimized. There are two models of this problem, depending on the definition of the dot product of binary vectors: The @math model and the Boolean semiring model. Unlike low rank approximation of real matrix which can be efficiently solved by Singular Value Decomposition, approximation of binary matrix is @math -hard even for @math . In this paper, we consider the problem of Column Subset Selection (CSS), in which one low rank matrix must be formed by @math columns of the data matrix. We characterize the approximation ratio of CSS for binary matrices. For @math model, we show the approximation ratio of CSS is bounded by @math and this bound is asymptotically tight. For Boolean model, it turns out that CSS is no longer sufficient to obtain a bound. We then develop a Generalized CSS (GCSS) procedure in which the columns of one low rank matrix are generated from Boolean formulas operating bitwise on columns of the data matrix. We show the approximation ratio of GCSS is bounded by @math , and the exponential dependency on @math is inherent. | @cite_17 formulate the rank-one problem as Integer Linear Programming (ILP). They showed that solving its LP relaxation yields a @math -approximation. They also improved the efficiency by reducing the LP to a Max-Flow problem using a technique developed in @cite_12 . 
@cite_25 observed that for the rank-one case, simply choosing the best column from @math yields a @math -approximation. | {
"cite_N": [
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"60908300",
"",
"1977877565"
],
"abstract": [
"In general, binary matrix factorization (BMF) refers to the problem of finding two binary matrices of low rank such that the difference between their matrix product and a given binary matrix is minimal. BMF has served as an important tool in dimension reduction for high-dimensional data sets with binary attributes and has been successfully employed in numerous applications. In the existing literature on BMF, the matrix product is not required to be binary. We call this unconstrained BMF (UBMF) and similarly constrained BMF (CBMF) if the matrix product is required to be binary. In this paper, we first introduce two specific variants of CBMF and discuss their relation to other dimensional reduction models such as UBMF. Then we propose alternating update procedures for CBMF. In every iteration of the proposed procedure, we solve a specific binary linear programming (BLP) problem to update the involved matrix argument. We explore the relationship between the BLP subproblem and clustering to develop an effective 2-approximation algorithm for CBMF when the underlying matrix has very low rank. The proposed algorithm can also provide a 2-approximation to rank-1 UBMF. We also develop a randomized algorithm for CBMF and estimate the approximation ratio of the solution obtained. Numerical experiments show that the proposed algorithm for UBMF finds better solutions in less CPU time than several other algorithms in the literature, and the solution obtained from CBMF is very close to that of UBMF.",
"",
"Mining discrete patterns in binary data is important for subsampling, compression, and clustering. We consider rank-one binary matrix approximations that identify the dominant patterns of the data, while preserving its discrete property. A best approximation on such data has a minimum set of inconsistent entries, i.e., mismatches between the given binary data and the approximate matrix. Due to the hardness of the problem, previous accounts of such problems employ heuristics and the resulting approximation may be far away from the optimal one. In this paper, we show that the rank-one binary matrix approximation can be reformulated as a 0-1 integer linear program (ILP). However, the ILP formulation is computationally expensive even for small-size matrices. We propose a linear program (LP) relaxation, which is shown to achieve a guaranteed approximation error bound. We further extend the proposed formulations using the regularization technique, which is commonly employed to address overfitting. The LP formulation is restricted to medium-size matrices, due to the large number of variables involved for large matrices. Interestingly, we show that the proposed approximate formulation can be transformed into an instance of the minimum s-t cut problem, which can be solved efficiently by finding maximum flows. Our empirical study shows the efficiency of the proposed algorithm based on the maximum flow. Results also confirm the established theoretical bounds."
]
} |
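The rank-one column-subset-selection idea discussed above (choosing the best column of the data matrix as the left factor) can be sketched in a few lines: for each candidate column u, the optimal binary right factor v is computed coordinate-wise, and the pair with the fewest entry errors wins. This is a plain illustrative implementation, not the code of any cited work.

```python
def rank_one_css(A):
    """Rank-one binary approximation via column subset selection:
    try each column of A as the left factor u, fit the best binary
    right factor v for it, and keep the pair with the fewest errors."""
    m, n = len(A), len(A[0])
    best = None
    for c in range(n):
        u = [A[i][c] for i in range(m)]
        v, err = [], 0
        for j in range(n):
            col = [A[i][j] for i in range(m)]
            cost1 = sum(a != b for a, b in zip(col, u))  # error if v_j = 1 (column approximated by u)
            cost0 = sum(col)                             # error if v_j = 0 (column approximated by zeros)
            if cost1 < cost0:
                v.append(1)
                err += cost1
            else:
                v.append(0)
                err += cost0
        if best is None or err < best[0]:
            best = (err, u, v)
    return best
```

On the toy matrix [[1,1,0],[1,1,0],[0,0,1]] the best choice is the first column, with a single entry of error on the last column.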
1511.01699 | 2608141914 | Low rank matrix approximation is an important tool in machine learning. Given a data matrix, low rank approximation helps to find factors, patterns and provides concise representations for the data. Research on low rank approximation usually focus on real matrices. However, in many applications data are binary (categorical) rather than continuous. This leads to the problem of low rank approximation of binary matrix. Here we are given a @math binary matrix @math and a small integer @math . The goal is to find two binary matrices @math and @math of sizes @math and @math respectively, so that the Frobenius norm of @math is minimized. There are two models of this problem, depending on the definition of the dot product of binary vectors: The @math model and the Boolean semiring model. Unlike low rank approximation of real matrix which can be efficiently solved by Singular Value Decomposition, approximation of binary matrix is @math -hard even for @math . In this paper, we consider the problem of Column Subset Selection (CSS), in which one low rank matrix must be formed by @math columns of the data matrix. We characterize the approximation ratio of CSS for binary matrices. For @math model, we show the approximation ratio of CSS is bounded by @math and this bound is asymptotically tight. For Boolean model, it turns out that CSS is no longer sufficient to obtain a bound. We then develop a Generalized CSS (GCSS) procedure in which the columns of one low rank matrix are generated from Boolean formulas operating bitwise on columns of the data matrix. We show the approximation ratio of GCSS is bounded by @math , and the exponential dependency on @math is inherent. | In the @math model, low rank approximation is related to the concept of matrix rigidity introduced by Valiant @cite_22 , as a method of proving lower bounds for linear circuits. 
For a matrix @math over @math , the rigidity @math is the smallest number of entries of @math that must be changed in order to bring its rank down to @math . Thus for a @math matrix @math , @math is the minimum approximation error possible by a product of a @math matrix @math and a @math matrix @math . By the results of Valiant, an @math matrix @math for which @math , for @math and for some constant @math cannot be computed by a linear circuit of size @math and depth @math . Such rigid matrices exist in abundance -- the challenge is to come up with an explicit construction of a family of rigid matrices. For the low rank approximation problem we are however interested in the setting of @math and we are interested in algorithms rather than explicit matrices. | {
"cite_N": [
"@cite_22"
],
"mid": [
"1530008367"
],
"abstract": [
"We have surveyed one approach to understanding complexity issues for certain easily computable natural functions. Shifting graphs have been seen to account accurately and in a unified way for the superlinear complexity of several problems for various restricted models of computation. To attack \"unrestricted\" models (in the present context combinational circuits or straight-line arithmetic programs), a first attempt, through superconcentrators, fails to provide any lower bounds although it does give counter-examples to alternative approaches. The notion of rigidity, however, does offer for the first time a reduction of relevant computational questions to noncomputational properties. The \"reduction\" consists of the conjunction of Corollary 6.3 and Theorem 6.4 which show that \"for most sets of linear forms over the reals the stated algebraic and combinatorial reasons account for the fact that they cannot be computed in linear time and depth O(log n) simultaneously.\" We have outlined some problem areas which our preliminary results raise, and feel that further progress on most of these is humanly feasible. We would be interested in alternative approaches also."
]
} |
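To make the rigidity definition above concrete, here is a toy brute-force computation over GF(2): R_M(r) is found by flipping ever-larger sets of entries until the rank drops to at most r. Rigidity is field-dependent, so this GF(2) instance is only an illustrative assumption, and brute force is feasible only for tiny matrices (the general problem is hard).

```python
from itertools import combinations

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), rows packed into integers."""
    rows = [int("".join(map(str, r)), 2) for r in M]
    rank = 0
    while rows:
        pivot = max(rows)           # row with the highest remaining leading bit
        rows.remove(pivot)
        if pivot == 0:
            break
        rank += 1
        high = 1 << (pivot.bit_length() - 1)
        rows = [r ^ pivot if r & high else r for r in rows]
    return rank

def rigidity(M, r):
    """Smallest number of entry flips bringing rank(M) down to <= r (brute force)."""
    m, n = len(M), len(M[0])
    if gf2_rank(M) <= r:
        return 0
    cells = [(i, j) for i in range(m) for j in range(n)]
    for k in range(1, m * n + 1):
        for flips in combinations(cells, k):
            N = [row[:] for row in M]
            for i, j in flips:
                N[i][j] ^= 1
            if gf2_rank(N) <= r:
                return k
    return m * n
```

For the 3x3 identity, one diagonal flip brings the rank to 2, and two flips are needed to reach rank 1, matching the definition directly.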
1511.02074 | 2952565634 | This paper initiates the study of a fundamental online problem called online balanced repartitioning. Unlike the classic graph partitioning problem, our input is an arbitrary sequence of communication requests between nodes, with patterns that may change over time. The objective is to dynamically repartition the @math nodes into @math clusters, each of size @math . Every communication request needs to be served either locally (cost 0), if the communicating nodes are collocated in the same cluster, or remotely (cost 1), using inter-cluster communication, if they are located in different clusters. The algorithm can also dynamically update the partitioning by migrating nodes between clusters at cost @math per node migration. Therefore, we are interested in online algorithms which find a good trade-off between the communication cost and the migration cost, maintaining partitions which minimize the number of inter-cluster communications. We consider settings both with and without cluster-size augmentation. For the former, we prove a lower bound which is strictly larger than @math , which highlights an interesting difference to online paging. Somewhat surprisingly, and unlike online paging, we prove that any deterministic online algorithm has a non-constant competitive ratio of at least @math , even with augmentation. Our main technical contribution is an @math -competitive algorithm for the setting with (constant) augmentation. We believe that our model finds interesting applications, e.g., in the context of datacenters, where virtual machines need to be dynamically embedded on a set of (multi-core) servers, and where machines migrations are possible, but costly. | BRP also has connections to online packing problems, where items of different sizes arriving over time need to be packed into a minimal number of bins @cite_16 @cite_9 . 
In contrast to these problems, however, in our case the objective is not to minimize the number of bins but rather the number of "links" between bins, given a fixed number of bins. | {
"cite_N": [
"@cite_9",
"@cite_16"
],
"mid": [
"2078659331",
"2016393589"
],
"abstract": [
"A new framework for analyzing online bin packing algorithms is presented. This framework presents a unified way of explaining the performance of algorithms based on the Harmonic approach. Within this framework, it is shown that a new algorithm, Harmonic++, has asymptotic performance ratio at most 1.58889. It is also shown that the analysis of Harmonic+1 presented in Richey [1991] is incorrect; this is a fundamental logical flaw, not an error in calculation or an omitted case. The asymptotic performance ratio of Harmonic+1 is at least 1.59217. Thus, Harmonic++ provides the best upper bound for the online bin packing problem to date.",
"In this paper, we study the 1-dimensional on-line bin packing problem. A list of pieces, each of size between zero and unity, is to be packed, in order of their arrival, into a minimum number of unit-capacity bins. We present a new linear-time algorithm, the Modified Harmonic Algorithm, and show, by a novel use of weighting functions, that it has an asymptotic worst-case performance ratio less than 3/2 + 1/9 + 1/222 = 1.615615…. We show that for a large class of linear-time on-line algorithms including the Modified Harmonic Algorithm, the performance ratio is at least 3/2 + 1/9 = 1.6111…. Then we show how to successively construct classes of improved linear-time on-line algorithms. For any algorithm in any of these classes, the performance ratio is at least 3/2 + 1/12 = 1.5833…. We present an improved algorithm called Modified Harmonic-2 with performance ratio 1.612… and present an approach to construct linear-time on-line algorithms with better performance ratios. The analysis of Modified Harmonic-2 is omitted because it is very similar to that of Modified Harmonic, but it is substantially more complicated. Our results extend to orthogonal packings in two dimensions."
]
} |
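The Harmonic-style online algorithms in the cited abstracts classify items by size and pack each class independently. A minimal Harmonic_K sketch is given below; it is a simplified illustration of the basic scheme, not the Modified Harmonic variants analyzed in the abstracts.

```python
import math

def harmonic(items, K=5):
    """Online Harmonic_K bin packing into unit-capacity bins.
    An item of size s in (0, 1] gets class k = min(floor(1/s), K):
    a class-k bin (k < K) holds exactly k such items, since each has
    size > 1/(k+1); class-K (small) items are packed by Next Fit."""
    n_bins = 0
    in_bin = {k: 0 for k in range(1, K)}  # items in the currently open class-k bin
    small_space = 0.0                     # free space in the open class-K bin
    for s in items:
        k = min(math.floor(1 / s), K)
        if k < K:
            if in_bin[k] == 0:
                n_bins += 1               # open a fresh class-k bin
            in_bin[k] += 1
            if in_bin[k] == k:
                in_bin[k] = 0             # bin now holds k items: close it
        else:
            if s > small_space:
                n_bins += 1               # open a fresh bin for small items
                small_space = 1.0
            small_space -= s
    return n_bins
```

Because every decision uses only the current item, the algorithm is online in the same sense as the repartitioning problem above, where requests must likewise be served without knowledge of the future.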
1511.01946 | 2232444008 | Security of embedded computing systems is becoming of paramount concern as these devices become more ubiquitous, contain personal information and are increasingly used for financial transactions. Security attacks targeting embedded systems illegally gain access to the information in these devices or destroy information. The two most common types of attacks embedded systems encounter are code-injection and power analysis attacks. In the past, a number of countermeasures, both hardware- and software-based, were proposed individually against these two types of attacks. However, no single system exists to counter both of these two prominent attacks in a processor based embedded system. Therefore, this paper, for the first time, proposes a hardware/software-based countermeasure against both code-injection attacks and power analysis based side-channel attacks in a dual core embedded system. The proposed processor, named SecureD, has an area overhead of just 3.80% and an average runtime increase of 20.0% when compared to a standard dual processing system. The overheads were measured using a set of industry standard application benchmarks, with two encryption and five other programs. | Similar to detection techniques for code-injection attacks, countermeasures against power analysis attacks can be classified into software-based and hardware-based. Masking and current flattening are the two major software-based countermeasures. Table and data masking techniques @cite_3 @cite_26 @cite_6 @cite_41 use random values during the actual computation to prevent the processed data being exploited by the adversary. Muresan and Gebotys @cite_14 proposed a current flattening technique, where the dissipated current is flattened by adding no-ops in the code to provide sufficient discharge. @cite_17 proposed a randomized instruction injection technique, where dummy instructions were injected during the actual execution.
Even though this technique has been proven effective for a small number of data samples, the phase substitution techniques @cite_11 can be used to isolate the injected effects with a large number of samples. Authors in @cite_18 present a comprehensive study on shuffling, which is similar to masking. It was mentioned in @cite_18 that shuffling is effective when both the execution order and the physical resource usage are randomized. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_26",
"@cite_11",
"@cite_41",
"@cite_6",
"@cite_3",
"@cite_17"
],
"mid": [
"2140823840",
"",
"",
"2154093238",
"2105114022",
"1548656471",
"",
"2006146291"
],
"abstract": [
"Together with masking, shuffling is one of the most frequently considered solutions to improve the security of small embedded devices against side-channel attacks. In this paper, we provide a comprehensive study of this countermeasure, including improved implementations and a careful information theoretic and security analysis of its different variants. Our analyses lead to important conclusions as they moderate the strong security improvements claimed in previous works. They suggest that simplified versions of shuffling (e.g. using random start indexes) can be significantly weaker than their counterpart using full permutations. We further show with an experimental case study that such simplified versions can be as easy to attack as unprotected implementations. We finally exhibit the existence of \"indirect leakages\" in shuffled implementations that can be exploited due to the different leakage models of the different resources used in cryptographic implementations. This suggests the design of fully shuffled (and efficient) implementations, where both the execution order of the instructions and the physical resources used are randomized, as an interesting scope for further research.",
"",
"",
"Although mobile Java code is frequently executed on many wireless devices, the susceptibility to electromagnetic (EM) attacks is largely unknown. If analysis of EM waves emanating from the wireless device during a cryptographic computation does leak sufficient information, it may be possible for an attacker to reconstruct the secret key. Despite the complexities of a Java-based PDA device, this paper presents a new phase-based technique for aligning EM frames for subsequent time-based DEMA which extracts the secret key. Unlike previous research the new technique does not require perfect alignment of EM frames and demonstrates robustness in the presence of a complex embedded system. This research is important for future wireless embedded systems which will increasingly demand higher levels of security",
"This paper examines how monitoring power consumption signals might breach smart-card security. Both simple power analysis and differential power analysis attacks are investigated. The theory behind these attacks is reviewed. Then, we concentrate on showing how power analysis theory can be applied to attack an actual smart card. We examine the noise characteristics of the power signals and develop an approach to model the signal-to-noise ratio (SNR). We show how this SNR can be significantly improved using a multiple-bit attack. Experimental results against a smart-card implementation of the Data Encryption Standard demonstrate the effectiveness of our multiple-bit attack. Potential countermeasures to these attacks are also discussed.",
"Paul Kocher recently developped attacks based on the electric consumption of chips that perform cryptographic computations. Among those attacks, the \"Differential Power Analysis\" (DPA) is probably one of the most impressive and most difficult to avoid.In this paper, we present several ideas to resist this type of attack, and in particular we develop one of them which leads, interestingly, to rather precise mathematical analysis. Thus we show that it is possible to build an implementation that is provably DPA-resistant, in a \"local\" and restricted way (i.e. when - given a chip with a fixed key - the attacker only tries to detect predictable local deviations in the differentials of mean curves). We also briefly discuss some more general attacks, that are sometimes efficient whereas the \"original\" DPA fails. Many measures of consumption have been done on real chips to test the ideas presented in this paper, and some of the obtained curves are printed here.",
"",
"Side-channel attacks in general and power analysis attacks in particular are becoming a major security concern in embedded systems. Countermeasures proposed against power analysis attacks are data and table masking, current flattening, dummy instruction insertion and bit-flips balancing. All these techniques are either susceptible to multi-order power analysis attack, not sufficiently generic to cover all encryption algorithms, or burden the system with high area, run-time or energy cost. In this article, we propose a randomized instruction injection technique (RIJID) that overcomes the pitfalls of previous countermeasures. RIJID scrambles the power profile of a cryptographic application by injecting random instructions at random points of execution and therefore protects the system against power analysis attacks. Two different ways of triggering the instruction injection are also presented: (1) softRIJID, a hardware/software approach, where special instructions are used in the code for triggering the injection at runtime; and (2) autoRIJID, a hardware approach, where the code injection is triggered by the processor itself via detecting signatures of encryption routines at runtime. A novel signature detection technique is also introduced for identifying encryption routines within application programs at runtime. Further, a simple obfuscation metric (RIJIDindex) based on cross-correlation that measures the scrambling provided by any code injection technique is introduced, which coarsely indicates the level of scrambling achieved. Our processor models cost 1.9% additional area in the hardware/software approach and 1.2% in the hardware approach for a RISC based processor, and costs on average 29.8% in runtime and 27.1% in energy for the former and 25.0% in runtime and 28.5% in energy for the latter, for industry standard cryptographic applications."
]
} |
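The shuffling countermeasure discussed in this record can be sketched as follows: independent byte operations are processed in a fresh random order on every run, so a power sample at a given instant no longer corresponds to a fixed byte index. The identity S-box below is a placeholder (a real cipher would use its actual substitution table), and the sketch illustrates only the permuted execution order, not the physical-resource randomization that @cite_18 also recommends.

```python
import random

SBOX = list(range(256))  # placeholder table; a real AES implementation uses the standard S-box

def shuffled_subbytes(state, rng=random):
    """Apply the S-box to all state bytes in a random execution order.
    The final result is order-independent, but the time alignment of the
    corresponding power trace changes from one invocation to the next."""
    order = list(range(len(state)))
    rng.shuffle(order)          # fresh permutation per invocation
    out = state[:]
    for i in order:
        out[i] = SBOX[state[i]]
    return out
```

Functionally the output never changes, which is exactly why shuffling is a hiding countermeasure rather than a masking one: it leaves the computed values intact and randomizes only when each value is processed.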
1511.02037 | 2112689738 | This article presents the ALOJA project and its analytics tools, which leverages machine learning to interpret big data benchmark performance data and tuning. ALOJA is part of a long-term collaboration between Barcelona Supercomputing Center and Microsoft to automate the characterization of cost-effectiveness on big data deployments, currently focusing on Hadoop. Hadoop presents a complex run-time environment, where costs and performance depend on a large number of configuration choices. The ALOJA project has created an open, vendor-neutral repository, featuring over 40 000 Hadoop job executions and their performance details. The repository is accompanied by a test bed and tools to deploy and evaluate the cost-effectiveness of different hardware configurations, parameters, and cloud services. Despite early success within ALOJA, a comprehensive study requires automation of modeling procedures to allow an analysis of large and resource-constrained search spaces. The predictive analytics extension, ALOJA-ML, provides an automated system allowing knowledge discovery by modeling environments from observed executions. The resulting models can forecast execution behaviors, predicting execution times for new configurations and hardware choices. That also enables model-based anomaly detection or efficient benchmark guidance by prioritizing executions. In addition, the community can benefit from ALOJA data sets and framework to improve the design and deployment of big data applications. | As previously said, for most deployments, execution performance can be improved by at least 3 times from the default Hadoop configuration @cite_0 , and the emergence of Hadoop in the industry has led to several attempts at tuning towards performance optimization, new schemes for proper data distribution or partition, and adjustments in hardware configurations to increase scalability or reduce running costs. 
Characterizing these deployments is a crucial challenge when looking for optimal configuration choices. An option to speed up computing systems would be to scale up or add new (and thus improved) hardware, but unfortunately there is evidence that Hadoop performs poorly in such situations; scaling out in the number of servers improves performance, but at increased costs of infrastructure, power and required storage @cite_0 . | {
"cite_N": [
"@cite_0"
],
"mid": [
"2189125735"
],
"abstract": [
"Hadoop represents a Java-based distributed computing framework that is designed to support applications that are implemented via the MapReduce programming model. In general, workload dependent Hadoop performance optimization efforts have to focus on 3 major categories: the systems HW, the systems SW, and the configuration and tuning optimization of the Hadoop infrastructure components. From a systems HW perspective, it is paramount to balance the appropriate HW components in regards to performance, scalability, and cost. It has to be pointed out that Hadoop is classified as a highly-scalable, but not necessarily as a high-performance cluster solution. From a SW perspective, the choice of the OS, the JVM, the specific Hadoop version, as well as other SW components necessary to run the Hadoop setup do have a profound impact on performance and stability of the environment. The design, setup, configuration, and tuning phase of any Hadoop project is paramount to fully benefit from the distributed Hadoop HW and SW solution stack."
]
} |
1511.02037 | 2112689738 | This article presents the ALOJA project and its analytics tools, which leverages machine learning to interpret big data benchmark performance data and tuning. ALOJA is part of a long-term collaboration between Barcelona Supercomputing Center and Microsoft to automate the characterization of cost-effectiveness on big data deployments, currently focusing on Hadoop. Hadoop presents a complex run-time environment, where costs and performance depend on a large number of configuration choices. The ALOJA project has created an open, vendor-neutral repository, featuring over 40 000 Hadoop job executions and their performance details. The repository is accompanied by a test bed and tools to deploy and evaluate the cost-effectiveness of different hardware configurations, parameters, and cloud services. Despite early success within ALOJA, a comprehensive study requires automation of modeling procedures to allow an analysis of large and resource-constrained search spaces. The predictive analytics extension, ALOJA-ML, provides an automated system allowing knowledge discovery by modeling environments from observed executions. The resulting models can forecast execution behaviors, predicting execution times for new configurations and hardware choices. That also enables model-based anomaly detection or efficient benchmark guidance by prioritizing executions. In addition, the community can benefit from ALOJA data sets and framework to improve the design and deployment of big data applications. | Previous research works, like the Starfish Project by H. @cite_9 , focus on the need for tuning Hadoop configurations to match specific workload requirements. Their work proposed to observe Hadoop execution behaviors, obtaining profiles and using them to recommend configurations for similar workloads. 
This approach has been a useful reference for ALOJA, which focuses on modeling Hadoop behaviors from observed executions; but instead of just collecting and comparing behavior features, we apply machine learning methods to characterize those behaviors across a large corpus of profiling data in our predictive analytic tools. | {
"cite_N": [
"@cite_9"
],
"mid": [
"1834532152"
],
"abstract": [
"Timely and cost-effective analytics over “Big Data” is now a key ingredient for success in many businesses, scientific and engineering disciplines, and government endeavors. The Hadoop software stack—which consists of an extensible MapReduce execution engine, pluggable distributed storage engines, and a range of procedural to declarative interfaces—is a popular choice for big data analytics. Most practitioners of big data analytics—like computational scientists, systems researchers, and business analysts—lack the expertise to tune the system to get good performance. Unfortunately, Hadoop’s performance out of the box leaves much to be desired, leading to suboptimal use of resources, time, and money (in pay-as-you-go clouds). We introduce Starfish, a self-tuning system for big data analytics. Starfish builds on Hadoop while adapting to user needs and system workloads to provide good performance automatically, without any need for users to understand and manipulate the many tuning knobs in Hadoop. While Starfish’s system architecture is guided by work on self-tuning database systems, we discuss how new analysis practices over big data pose new challenges; leading us to different design choices in Starfish."
]
} |
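The Starfish-style loop described in this record (profile executions, then recommend configurations for similar workloads) can be sketched as a nearest-neighbour lookup over stored job profiles. The profile features and recommended parameter values below are illustrative assumptions, not Starfish's actual cost model.

```python
def recommend_config(profiles, new_features):
    """Recommend the tuned configuration of the most similar stored job
    profile (squared Euclidean distance over observed profile features)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(profiles, key=lambda p: dist(p["features"], new_features))
    return best["config"]

# Toy profiles: features could be e.g. [shuffle-byte ratio, CPU-time ratio].
profiles = [
    {"features": [0.9, 0.1], "config": {"io.sort.mb": 200}},              # shuffle-heavy job
    {"features": [0.2, 0.8], "config": {"mapreduce.map.cpu.vcores": 2}},  # CPU-bound job
]
suggested = recommend_config(profiles, [0.8, 0.2])  # closest to the shuffle-heavy profile
```

A new job whose observed profile resembles a previously tuned one inherits that job's configuration, which is the essence of profile-driven recommendation as opposed to per-job search.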
1511.02037 | 2112689738 | This article presents the ALOJA project and its analytics tools, which leverages machine learning to interpret big data benchmark performance data and tuning. ALOJA is part of a long-term collaboration between Barcelona Supercomputing Center and Microsoft to automate the characterization of cost-effectiveness on big data deployments, currently focusing on Hadoop. Hadoop presents a complex run-time environment, where costs and performance depend on a large number of configuration choices. The ALOJA project has created an open, vendor-neutral repository, featuring over 40 000 Hadoop job executions and their performance details. The repository is accompanied by a test bed and tools to deploy and evaluate the cost-effectiveness of different hardware configurations, parameters, and cloud services. Despite early success within ALOJA, a comprehensive study requires automation of modeling procedures to allow an analysis of large and resource-constrained search spaces. The predictive analytics extension, ALOJA-ML, provides an automated system allowing knowledge discovery by modeling environments from observed executions. The resulting models can forecast execution behaviors, predicting execution times for new configurations and hardware choices. That also enables model-based anomaly detection or efficient benchmark guidance by prioritizing executions. In addition, the community can benefit from ALOJA data sets and framework to improve the design and deployment of big data applications. | The idea of using machine learning with self-configuring purposes has been seen previously in the field of autonomic computing. Works like J.Wildstrom @cite_17 proposed modeling system behaviors vs. hardware or software configurations, focusing on hardware reconfiguration on large data-center systems. 
Also, other frameworks like the NIMO framework (P. Shivam @cite_13 ) modeled computational-science applications, allowing prediction of their execution times in grid infrastructures. These efforts are precedents for the successful application of predictive analytics through machine learning to distributed-systems workload management. In the ALOJA framework we are applying such methodologies to complement the exploration tools, allowing users, engineers, and operators to learn about their workloads in a distributed Hadoop environment. | {
"cite_N": [
"@cite_13",
"@cite_17"
],
"mid": [
"2165844343",
"2097340113"
],
"abstract": [
"We present the NIMO system that automatically learns cost models for predicting the execution time of computational-science applications running on large-scale networked utilities such as computational grids. Accurate cost models are important for selecting efficient plans for executing these applications on the utility. Computational-science applications are often scripts (written, e.g., in languages like Perl or Matlab) connected using a workflow-description language, and therefore, pose different challenges compared to modeling the execution of plans for declarative queries with well-understood semantics. NIMO generates appropriate training samples for these applications to learn fairly-accurate cost models quickly using statistical learning techniques. NIMO's approach is active and noninvasive: it actively deploys and monitors the application under varying conditions, and obtains its training data from passive instrumentation streams that require no changes to the operating system or applications. Our experiments with real scientific applications demonstrate that NIMO significantly reduces the number of training samples and the time to learn fairly-accurate cost models.",
"As computer systems continue to increase in complexity, the need for AI-based solutions is becoming more urgent. For example, high-end servers that can be partitioned into logical subsystems and repartitioned on the fly are now becoming available. This development raises the possibility of reconfiguring distributed systems online to optimize for dynamically changing workloads. However, it also introduces the need to decide when and how to reconfigure. This paper presents one approach to solving this online reconfiguration problem. In particular, we learn to identify, from only low-level system statistics, which of a set of possible configurations will lead to better performance under the current unknown workload. This approach requires no instrumentation of the system's middleware or operating systems. We introduce an agent that is able to learn this model and use it to switch configurations online as the workload varies. Our agent is fully implemented and tested on a publicly available multi-machine, multi-process distributed system (the online transaction processing benchmark TPC-W). We demonstrate that our adaptive configuration is able to outperform any single fixed configuration in the set over a variety of workloads, including gradual changes and abrupt workload spikes."
]
} |
1511.01821 | 2229219728 | We study the problem of constrained distributed optimization in multi-agent networks when some of the computing agents may be faulty. In this problem, the system goal is to have all the non-faulty agents collectively minimize a global objective given by weighted average of local cost functions, each of which is initially known to a non-faulty agent only. In particular, we are interested in the scenario when the computing agents are connected by an arbitrary directed communication network, some of the agents may suffer from crash faults or Byzantine faults, and the estimate of each agent is restricted to lie in a common constraint set. This problem finds its applications in social computing and distributed large-scale machine learning. The fault-tolerant multi-agent optimization problem was first formulated by Su and Vaidya, and is solved when the local functions are defined over the whole real line, and the networks are fully-connected. In this report, we consider arbitrary directed communication networks and focus on the scenario where, local estimates at the non-faulty agents are constrained, and only local communication and minimal memory carried across iterations are allowed. In particular, we generalize our previous results on fully-connected networks and unconstrained optimization to arbitrary directed networks and constrained optimization. As a byproduct, we provide a matrix representation for iterative approximate crash consensus. The matrix representation allows us to characterize the convergence rate for crash iterative consensus. | Fault-tolerant consensus @cite_24 is a special case of the optimization problem considered in this report. There is a significant body of work on fault-tolerant consensus, including @cite_32 @cite_27 @cite_5 @cite_14 @cite_15 @cite_10 . The optimization algorithms presented in this report use fault-tolerant consensus as a component. | {
"cite_N": [
"@cite_14",
"@cite_32",
"@cite_24",
"@cite_27",
"@cite_5",
"@cite_15",
"@cite_10"
],
"mid": [
"2139686511",
"1981017197",
"2126924915",
"2126906505",
"1999197902",
"2021482754",
""
],
"abstract": [
"The condition-based approach identifies sets of input vectors, called conditions, for which it is possible to design an asynchronous protocol solving a distributed problem despite process crashes. This paper establishes a direct correlation between distributed agreement problems and error-correcting codes. In particular, crash failures in distributed agreement problems correspond to erasure failures in error-correcting codes and Byzantine and value domain faults correspond to corruption errors. This correlation is exemplified by concentrating on two well-known agreement problems, namely, consensus and interactive consistency, in the context of the condition-based approach. Specifically, the paper presents the following results: first, it shows that the conditions that allow interactive consistency to be solved despite fc crashes and fc value domain faults correspond exactly to the set of error-correcting codes capable of recovering from fc erasures and fc corruptions. Second, the paper proves that consensus can be solved despite fc crash failures if the condition corresponds to a code whose Hamming distance is fc + 1 and Byzantine consensus can be solved despite fb Byzantine faults if the Hamming distance of the code is 2 fb + 1. Finally, the paper uses the above relations to establish several results in distributed agreement that are derived from known results in error-correcting codes and vice versa.",
"Abstract We define the k -SET CONSENSUS PROBLEM as an extension of the CONSENSUS problem, where each processor decides on a single value such that the set of decided values in any run is of size at most k . We require the agreement condition that all values decided upon are initial values of some processor. We show that the problem has a simple ( k −1)-resilient protocol in a totally asynchronous system. In an attempt to come up with a matching lower bound on the number of failures, we study the uncertainty condition, which requires that there must be some initial configuration from which all possible input values can be decided. We prove using a combinatorial argument that any k -resilient protocol for the k -set agreement problem would satisfy the uncertainty condition, while this is not true for any ( k −1)-resilient protocol. This result seems to strengthen the conjecture that there is no k -resilient protocol for this problem. We prove this result for a restricted class of protocols. Our motivation for studying this problem is to test whether the number of choices allowed to the processors is related to the number of faults . We hope that this will provide intuition towards achieving better bounds for more practical problems that arise in distributed computing, e.g., the renaming problem. The larger goal is to characterize the boundary between possibility and impossibility in asynchronous systems given multiple faults.",
"The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor. It is shown that the problem is solvable for, and only for, n ≥ 3 m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods.",
"This paper considers a variant of the Byzantine Generals problem, in which processes start with arbitrary real values rather than Boolean values or values from some bounded range, and in which approximate, rather than exact, agreement is the desired goal. Algorithms are presented to reach approximate agreement in asynchronous, as well as synchronous systems. The asynchronous agreement algorithm is an interesting contrast to a result of , who show that exact agreement with guaranteed termination is not attainable in an asynchronous system with as few as one faulty process. The algorithms work by successive approximation, with a provable convergence rate that depends on the ratio between the number of faulty processes and the total number of processes. Lower bounds on the convergence rate for algorithms of this form are proved, and the algorithms presented are shown to be optimal.",
"",
"This article introduces and explores the condition-based approach to solve the consensus problem in asynchronous systems. The approach studies conditions that identify sets of input vectors for which it is possible to solve consensus despite the occurrence of up to f process crashes. The first main result defines acceptable conditions and shows that these are exactly the conditions for which a consensus protocol exists. Two examples of realistic acceptable conditions are presented, and proved to be maximal, in the sense that they cannot be extended and remain acceptable. The second main result is a generic consensus shared-memory protocol for any acceptable condition. The protocol always guarantees agreement and validity, and terminates (at least) when the inputs satisfy the condition with which the protocol has been instantiated, or when there are no crashes. An efficient version of the protocol is then designed for the message passing model that works when f < n 2, and it is shown that no such protocol exists when f ≥ n 2. It is also shown how the protocol's safety can be traded for its liveness.",
""
]
} |
1511.01821 | 2229219728 | We study the problem of constrained distributed optimization in multi-agent networks when some of the computing agents may be faulty. In this problem, the system goal is to have all the non-faulty agents collectively minimize a global objective given by weighted average of local cost functions, each of which is initially known to a non-faulty agent only. In particular, we are interested in the scenario when the computing agents are connected by an arbitrary directed communication network, some of the agents may suffer from crash faults or Byzantine faults, and the estimate of each agent is restricted to lie in a common constraint set. This problem finds its applications in social computing and distributed large-scale machine learning. The fault-tolerant multi-agent optimization problem was first formulated by Su and Vaidya, and is solved when the local functions are defined over the whole real line, and the networks are fully-connected. In this report, we consider arbitrary directed communication networks and focus on the scenario where, local estimates at the non-faulty agents are constrained, and only local communication and minimal memory carried across iterations are allowed. In particular, we generalize our previous results on fully-connected networks and unconstrained optimization to arbitrary directed networks and constrained optimization. As a byproduct, we provide a matrix representation for iterative approximate crash consensus. The matrix representation allows us to characterize the convergence rate for crash iterative consensus. | Convex optimization, including distributed convex optimization, also has a long history @cite_6 . However, we are not aware of prior work that obtains the results presented in this report except @cite_1 @cite_30 @cite_20 . Primal and dual decomposition methods that lend themselves naturally to a distributed paradigm are well-known @cite_11 .
There has been significant research on a variant of the distributed optimization problem @cite_29 @cite_21 @cite_28 @cite_0 , in which the global objective @math is a summation of @math convex functions, i.e., @math , with function @math being known to the @math -th agent. The need for robustness in distributed optimization problems has received some attention recently @cite_29 @cite_16 @cite_18 @cite_1 @cite_30 @cite_19 . In particular, @cite_29 studied the impact of random communication link faults on the convergence of a distributed variant of the dual averaging algorithm. Specifically, each realizable link fault pattern considered in @cite_29 is assumed to admit a doubly-stochastic matrix which governs the evolution dynamics of local estimates of the optimum. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"",
"2114791779",
"2148087609",
"2044212084",
"1970063032",
"1603765807",
"2063403497",
"",
"",
"2122078451",
"2164278908"
],
"abstract": [
"",
"",
"We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem, where the global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimates of each agent are restricted to lie in different convex sets. To highlight the effects of constraints, we first consider a constrained consensus problem and present a distributed \"projected consensus algorithm\" in which agents combine their local averaging operation with projection on their individual constraint sets. This algorithm can be viewed as a version of an alternating projection method with weights that are varying over time and across agents. We establish convergence and convergence rate results for the projected consensus algorithm. We next study a constrained optimization problem for optimizing the sum of local objective functions of the agents subject to the intersection of their local constraint sets. We present a distributed \"projected subgradient algorithm\" which involves each agent performing a local averaging operation, taking a subgradient step to minimize its own objective function, and projecting on its constraint set. We show that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution for the cases when the weights are constant and equal, and when the weights are time-varying but all agents have the same constraint set.",
"The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent coordination, estimation in sensor networks, and large-scale machine learning. We develop and analyze distributed algorithms based on dual subgradient averaging, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our analysis allows us to clearly separate the convergence of the optimization algorithm itself and the effects of communication dependent on the network structure. We show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network, and confirm this prediction's sharpness both by theoretical lower bounds and simulations for various networks. Our approach includes the cases of deterministic optimization and communication, as well as problems with stochastic optimization and or communication.",
"We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.",
"We study Byzantine fault-tolerant distributed optimization of a sum of convex (cost) functions with real-valued scalar input ouput. In particular, the goal is to optimize a global cost function @math , where @math is the set of non-faulty agents, and @math is agent @math 's local cost function, which is initially known only to agent @math . In general, when some of the agents may be Byzantine faulty, the above goal is unachievable, because the identity of the faulty agents is not necessarily known to the non-faulty agents, and the faulty agents may behave arbitrarily. Since the above global cost function cannot be optimized exactly in presence of Byzantine agents, we define a weaker version of the problem. The goal for the weaker problem is to generate an output that is an optimum of a function formed as a convex combination of local cost functions of the non-faulty agents. More precisely, for some choice of weights @math for @math such that @math and @math , the output must be an optimum of the cost function @math . Ideally, we would like @math for all @math -- however, this cannot be guaranteed due to the presence of faulty agents. In fact, we show that the maximum achievable number of nonzero weights ( @math 's) is @math , where @math is the upper bound on the number of Byzantine agents. In addition, we present algorithms that ensure that at least @math agents have weights that are bounded away from 0. We also propose a low-complexity suboptimal algorithm, which ensures that at least @math agents have weights that are bounded away from 0, where @math is the total number of agents, and @math ( @math ) is the actual number of Byzantine agents.",
"",
"Recently there has been a significant amount of research on developing consensus based algorithms for distributed optimization motivated by applications that vary from large scale machine learning to wireless sensor networks. This work describes and proves convergence of a new algorithm called Push-Sum Distributed Dual Averaging which combines a recent optimization algorithm [1] with a push-sum consensus protocol [2]. As we discuss, the use of push-sum has significant advantages. Restricting to doubly stochastic consensus protocols is not required and convergence to the true average consensus is guaranteed without knowing the stationary distribution of the update matrix in advance. Furthermore, the communication semantics of just summing the incoming information make this algorithm truly asynchronous and allow a clean analysis when varying intercommunication intervals and communication delays are modelled. We include experiments in simulation and on a small cluster to complement the theoretical analysis.",
"",
"",
"We study fault-tolerant distributed optimization of a sum of convex (cost) functions with real-valued scalar input output in the presence of crash faults or Byzantine faults. In particular, the goal is to optimize a global cost function @math , where @math is the collection of agents, and @math is agent @math 's local cost function, which is initially known only to agent @math . Since the above global cost function cannot be optimized exactly in presence of crash faults or Byzantine faults, we define two weaker versions of the problem for crash faults and Byzantine faults, respectively. When some agents may crash, the goal for the weaker problem is to generate an output that is an optimum of a function formed as @math where @math is the set of non-faulty agents, @math is the set of faulty agents (crashed agents), @math for each @math and @math is a normalization constant such that @math . We present an iterative algorithm in which each agent only needs to perform local computation, and send one message per iteration. When some agents may be Byzantine, the system cannot take full advantage of the data kept by non-faulty agents. The goal for the associated weaker problem is to generate an output that is an optimum of a function formed as @math such that @math for each @math and @math . We present an iterative algorithm, where only local computation is needed and only one message per agent is sent in each iteration, that ensures that at least @math agents have weights ( @math 's) that are lower bounded by @math .",
"Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations."
]
} |
1511.01821 | 2229219728 | We study the problem of constrained distributed optimization in multi-agent networks when some of the computing agents may be faulty. In this problem, the system goal is to have all the non-faulty agents collectively minimize a global objective given by weighted average of local cost functions, each of which is initially known to a non-faulty agent only. In particular, we are interested in the scenario when the computing agents are connected by an arbitrary directed communication network, some of the agents may suffer from crash faults or Byzantine faults, and the estimate of each agent is restricted to lie in a common constraint set. This problem finds its applications in social computing and distributed large-scale machine learning. The fault-tolerant multi-agent optimization problem was first formulated by Su and Vaidya, and is solved when the local functions are defined over the whole real line, and the networks are fully-connected. In this report, we consider arbitrary directed communication networks and focus on the scenario where, local estimates at the non-faulty agents are constrained, and only local communication and minimal memory carried across iterations are allowed. In particular, we generalize our previous results on fully-connected networks and unconstrained optimization to arbitrary directed networks and constrained optimization. As a byproduct, we provide a matrix representation for iterative approximate crash consensus. The matrix representation allows us to characterize the convergence rate for crash iterative consensus. | We considered Byzantine faults and crash faults in @cite_1 @cite_30 @cite_20 . In particular, @cite_1 @cite_30 considered Byzantine faults under synchronous systems, and @cite_20 considered both Byzantine faults and crash faults under synchronous systems, with results partially generalizable to asynchronous systems. 
It is shown in @cite_1 that, under Byzantine faults, at most @math non-faulty functions can have non-zero weights. This observation led to the formulation of Problem 2 in ). Six algorithms were proposed in @cite_1 . Algorithms with an alternative structure, where only local communication is needed, are proposed in @cite_20 for crash faults and Byzantine faults, respectively. We showed in @cite_30 that when there is sufficient redundancy in the input functions (each input function is not exclusively kept by a single agent), it is possible to solve ), where the summation is taken over all input functions. In addition, a simple low-complexity iterative algorithm was proposed in @cite_30 , and a tight topological condition for the existence of such iterative algorithms is identified. | {
"cite_N": [
"@cite_30",
"@cite_1",
"@cite_20"
],
"mid": [
"",
"1970063032",
"2122078451"
],
"abstract": [
"",
"We study Byzantine fault-tolerant distributed optimization of a sum of convex (cost) functions with real-valued scalar input ouput. In particular, the goal is to optimize a global cost function @math , where @math is the set of non-faulty agents, and @math is agent @math 's local cost function, which is initially known only to agent @math . In general, when some of the agents may be Byzantine faulty, the above goal is unachievable, because the identity of the faulty agents is not necessarily known to the non-faulty agents, and the faulty agents may behave arbitrarily. Since the above global cost function cannot be optimized exactly in presence of Byzantine agents, we define a weaker version of the problem. The goal for the weaker problem is to generate an output that is an optimum of a function formed as a convex combination of local cost functions of the non-faulty agents. More precisely, for some choice of weights @math for @math such that @math and @math , the output must be an optimum of the cost function @math . Ideally, we would like @math for all @math -- however, this cannot be guaranteed due to the presence of faulty agents. In fact, we show that the maximum achievable number of nonzero weights ( @math 's) is @math , where @math is the upper bound on the number of Byzantine agents. In addition, we present algorithms that ensure that at least @math agents have weights that are bounded away from 0. We also propose a low-complexity suboptimal algorithm, which ensures that at least @math agents have weights that are bounded away from 0, where @math is the total number of agents, and @math ( @math ) is the actual number of Byzantine agents.",
"We study fault-tolerant distributed optimization of a sum of convex (cost) functions with real-valued scalar input output in the presence of crash faults or Byzantine faults. In particular, the goal is to optimize a global cost function @math , where @math is the collection of agents, and @math is agent @math 's local cost function, which is initially known only to agent @math . Since the above global cost function cannot be optimized exactly in presence of crash faults or Byzantine faults, we define two weaker versions of the problem for crash faults and Byzantine faults, respectively. When some agents may crash, the goal for the weaker problem is to generate an output that is an optimum of a function formed as @math where @math is the set of non-faulty agents, @math is the set of faulty agents (crashed agents), @math for each @math and @math is a normalization constant such that @math . We present an iterative algorithm in which each agent only needs to perform local computation, and send one message per iteration. When some agents may be Byzantine, the system cannot take full advantage of the data kept by non-faulty agents. The goal for the associated weaker problem is to generate an output that is an optimum of a function formed as @math such that @math for each @math and @math . We present an iterative algorithm, where only local computation is needed and only one message per agent is sent in each iteration, that ensures that at least @math agents have weights ( @math 's) that are lower bounded by @math ."
]
} |
1511.01821 | 2229219728 | We study the problem of constrained distributed optimization in multi-agent networks when some of the computing agents may be faulty. In this problem, the system goal is to have all the non-faulty agents collectively minimize a global objective given by weighted average of local cost functions, each of which is initially known to a non-faulty agent only. In particular, we are interested in the scenario when the computing agents are connected by an arbitrary directed communication network, some of the agents may suffer from crash faults or Byzantine faults, and the estimate of each agent is restricted to lie in a common constraint set. This problem finds its applications in social computing and distributed large-scale machine learning. The fault-tolerant multi-agent optimization problem was first formulated by Su and Vaidya, and is solved when the local functions are defined over the whole real line, and the networks are fully-connected. In this report, we consider arbitrary directed communication networks and focus on the scenario where, local estimates at the non-faulty agents are constrained, and only local communication and minimal memory carried across iterations are allowed. In particular, we generalize our previous results on fully-connected networks and unconstrained optimization to arbitrary directed networks and constrained optimization. As a byproduct, we provide a matrix representation for iterative approximate crash consensus. The matrix representation allows us to characterize the convergence rate for crash iterative consensus. | Concurrently, @cite_8 looked at a similar problem with a different focus, where the faulty agents are restricted to broadcasting their messages (sending identical messages) to their outgoing neighbors, and the global objective is simply a convex combination of local cost functions at the non-faulty agents.
Their algorithm's performance is equivalent to simply running iterative Byzantine consensus on the local optima. In addition, their results are based on the assumption that every update matrix has a common left eigenvector corresponding to eigenvalue 1. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2555951933"
],
"abstract": [
"We investigate the vulnerabilities of consensus-based distributed optimization protocols to nodes that deviate from the prescribed update rule (e.g., due to failures or adversarial attacks). After characterizing certain fundamental limitations on the performance of any distributed optimization algorithm in the presence of adversaries, we propose a robust consensus-based distributed optimization algorithm that is guaranteed to converge to the convex hull of the set of minimizers of the non-adversarial nodes' functions. We also study the distance-to-optimality properties of our proposed robust algorithm in terms of F-local sets of the graph. We show that finding the largest size of such sets is NP-hard."
]
} |
1511.01954 | 2149313062 | Sampling context-based object proposals is effective for recovering missed detections. Topic models can effectively model higher-order relations between object instances. Context-based proposals are effective for spotting regions that contain objects. Object proposal generation should not be employed solely as a pre-detection step. In this paper we focus on improving object detection performance in terms of recall. We propose a post-detection stage during which we explore the image with the objective of recovering missed detections. This exploration is performed by sampling object proposals in the image. We analyse four different strategies to perform this sampling, giving special attention to strategies that exploit spatial relations between objects. In addition, we propose a novel method to discover higher-order relations between groups of objects. Experiments on the challenging KITTI dataset show that our proposed relations-based proposal generation strategies can help improve recall at the cost of a relatively small number of object proposals. | Another related work is @cite_12 where two methods are proposed to learn spatio-temporal rules of moving agents from video sequences. This is done with the goal of learning temporal dependencies between activities and allows interpretation of the observed scene. Our method is similar to @cite_12 in that both methods perform spatial reasoning and both methods are evaluated in a street scene setting. Different from @cite_12 , which aims at building scene-specific models, the models produced by our method are specific to the object classes of interest and not scene-dependent. Furthermore, while @cite_12 focuses more on motion (flow) cues, our method focuses on instance-based features (location & pose). Moreover, the method from @cite_12 requires video sequences and operates in the 2D image space while our method runs on still images and operates in 3D space. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2067467244"
],
"abstract": [
"We present two novel methods to automatically learn spatio-temporal dependencies of moving agents in complex dynamic scenes. They allow to discover temporal rules, such as the right of way between different lanes or typical traffic light sequences. To extract them, sequences of activities need to be learned. While the first method extracts rules based on a learned topic model, the second model called DDP-HMM jointly learns co-occurring activities and their time dependencies. To this end we employ Dependent Dirichlet Processes to learn an arbitrary number of infinite Hidden Markov Models. In contrast to previous work, we build on state-of-the-art topic models that allow to automatically infer all parameters such as the optimal number of HMMs necessary to explain the rules governing a scene. The models are trained offline by Gibbs Sampling using unlabeled training data."
]
} |
1511.01966 | 2168580857 | This letter proposes to estimate low-rank matrices by formulating a convex optimization problem with nonconvex regularization. We employ parameterized nonconvex penalty functions to estimate the nonzero singular values more accurately than the nuclear norm. A closed-form solution for the global optimum of the proposed objective function (sum of data fidelity and the nonconvex regularizer) is also derived. The solution reduces to singular value thresholding method as a special case. The proposed method is demonstrated for image denoising. | Several non-convex penalty functions have been utilized for the LRMA problem : the weighted nuclear norm @cite_51 , transformed Schatten-1 (TS1) @cite_10 and the proximal p-norm @cite_3 . The use of these non-convex penalty functions makes the overall LRMA problem non-convex. As such, iterative algorithms aiming to reach a stationary point (i.e., not necessarily global optimum) of the non-convex objective function have been developed @cite_10 @cite_51 . Also, a non-iterative locally optimal solution for the LRMA problem using the proximal p-norm is reported in @cite_3 . Note that the proximal operators (see Sec. ) associated with the TS1 penalty and the proximal p-norm are not continuous for all values of the regularization parameter @math . In contrast, the proposed approach always leads to a convex problem formulation. A broader class of non-convex penalty functions was studied for the LRMA problem in @cite_6 @cite_9 @cite_18 . The iteratively reweighted nuclear norm minimization @cite_5 @cite_18 and generalized singular value thresholding @cite_9 methods provide a locally optimal solution to the LRMA problem, provided that the penalty functions satisfy certain assumptions (see Assumption A1 of @cite_5 ). | {
"cite_N": [
"@cite_18",
"@cite_9",
"@cite_6",
"@cite_3",
"@cite_51",
"@cite_5",
"@cite_10"
],
"mid": [
"1860736741",
"2075547019",
"2214177029",
"2031993275",
"2048695508",
"2963218026",
"1588962181"
],
"abstract": [
"The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low rank matrix recovery with its applications in image recovery and signal processing. However, solving the nuclear norm-based relaxed convex problem usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to use a family of nonconvex surrogates of @math -norm on the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. Then, we propose to solve the problem by an iteratively reweighted nuclear norm (IRNN) algorithm. IRNN iteratively solves a weighted singular value thresholding problem, which has a closed form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that the IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthesized data and real images demonstrate that IRNN enhances the low rank matrix recovery compared with the state-of-the-art convex algorithms.",
"As surrogate functions of 0-norm, many nonconvex penalty functions have been proposed to enhance the sparse vector recovery. It is easy to extend these nonconvex penalty functions on singular values of a matrix to enhance low-rank matrix recovery. However, different from convex optimization, solving the nonconvex low-rank minimization problem is much more challenging than the nonconvex sparse minimization problem. We observe that all the existing nonconvex penalty functions are concave and monotonically increasing on [0, ∞). Thus their gradients are decreasing functions. Based on this property, we propose an Iteratively Reweighted Nuclear Norm (IRNN) algorithm to solve the nonconvex nonsmooth low-rank minimization problem. IRNN iteratively solves a Weighted Singular Value Thresholding (WSVT) problem. By setting the weight vector as the gradient of the concave penalty function, the WSVT problem has a closed form solution. In theory, we prove that IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthetic data and real images demonstrate that IRNN enhances the low-rank matrix recovery compared with state-of-the-art convex algorithms.",
"Motivated by the recent developments of nonconvex penalties in sparsity modeling, we propose a nonconvex optimization model for handing the low-rank matrix recovery problem. Different from the famous robust principal component analysis (RPCA), we suggest recovering low-rank and sparse matrices via a nonconvex loss function and a nonconvex penalty. The advantage of the nonconvex approach lies in its stronger robustness. To solve the model, we devise a majorization-minimization augmented Lagrange multiplier (MM-ALM) algorithm which finds the local optimal solutions of the proposed nonconvex model. We also provide an efficient strategy to speedup MM-ALM, which makes the running time comparable with the state-of-the-art algorithm of solving RPCA. Finally, empirical results demonstrate the superiority of our nonconvex approach over RPCA in terms of matrix recovery accuracy.",
"We develop new nonconvex approaches for matrix optimization problems involving sparsity. The heart of the methods is a new, nonconvex penalty function that is designed for efficient minimization by means of a generalized shrinkage operation. We apply this approach to the decomposition of video into low rank and sparse components, which is able to separate moving objects from the stationary background better than in the convex case. In the case of noisy data, we add a nonconvex regularization, and apply a splitting approach to decompose the optimization problem into simple, parallelizable components. The nonconvex regularization ameliorates contrast loss, thereby allowing stronger denoising without losing more signal to the residual.",
"As a convex relaxation of the low rank matrix factorization problem, the nuclear norm minimization has been attracting significant research interest in recent years. The standard nuclear norm minimization regularizes each singular value equally to pursue the convexity of the objective function. However, this greatly restricts its capability and flexibility in dealing with many practical problems (e.g., denoising), where the singular values have clear physical meanings and should be treated differently. In this paper we study the weighted nuclear norm minimization (WNNM) problem, where the singular values are assigned different weights. The solutions of the WNNM problem are analyzed under different weighting conditions. We then apply the proposed WNNM algorithm to image denoising by exploiting the image nonlocal self-similarity. Experimental results clearly show that the proposed WNNM algorithm outperforms many state-of-the-art denoising algorithms such as BM3D in terms of both quantitative measure and visual perception quality.",
"This work studies the Generalized Singular Value Thresholding (GSVT) operator Proxσg(·), Proxσg(B) = arg minx ∑mi=1 g(σi(X)) + 1 2 ||X - B||2F, associated with a nonconvex function g defined on the singular values of X. We prove that GSVT can be obtained by performing the proximal operator of g (denoted as Proxg(·)) on the singular values since Proxg(·) is monotone when g is lower bounded. If the nonconvex g satisfies some conditions (many popular nonconvex surrogate functions, e.g., lp-norm, 0 < p < 1, of l0-norm are special cases), a general solver to find Proxg(b) is proposed for any b ≥ 0. GSVT greatly generalizes the known Singular Value Thresholding (SVT) which is a basic subroutine in many convex low rank minimization methods. We are able to solve the nonconvex low rank minimization problem by using GSVT in place of SVT.",
"We study a non-convex low-rank promoting penalty function, the transformed Schatten-1 (TS1), and its applications in matrix completion. The TS1 penalty, as a matrix quasi-norm defined on its singular values, interpolates the rank and the nuclear norm through a nonnegative parameter a. We consider the unconstrained TS1 regularized low-rank matrix recovery problem and develop a fixed point representation for its global minimizer. The TS1 thresholding functions are in closed analytical form for all parameter values. The TS1 threshold values differ in subcritical (supercritical) parameter regime where the TS1 threshold functions are continuous (discontinuous). We propose TS1 iterative thresholding algorithms and compare them with some state-of-the-art algorithms on matrix completion test problems. For problems with known rank, a fully adaptive TS1 iterative thresholding algorithm consistently performs the best under different conditions with ground truth matrix being multivariate Gaussian at varying covariance. For problems with unknown rank, TS1 algorithms with an additional rank estimation procedure approach the level of IRucL-q which is an iterative reweighted algorithm, non-convex in nature and best in performance."
]
} |
1511.01154 | 2228548648 | Multi-modal image registration is a challenging task that is vital to fuse complementary signals for subsequent analyses. Despite much research into cost functions addressing this challenge, there exist cases in which these are ineffective. In this work, we show that (1) this is true for the registration of in-vivo Drosophila brain volumes visualizing genetically encoded calcium indicators to an nc82 atlas and (2) that machine learning based contrast synthesis can yield improvements. More specifically, the number of subjects for which the registration outright failed was greatly reduced (from 40 to 15 ) by using a synthesized image. | @cite_3 simultaneously segmented and registered histological images. The registration step minimizes the mutual information of class labels. This approach is an excellent choice when the modalities have corresponding but differently appearing pixel classes, but it may have difficulties in cases where the boundaries of various labels are unclear, or when image content differs significantly. | {
"cite_N": [
"@cite_3"
],
"mid": [
"1966119235"
],
"abstract": [
"We describe an automatic method for fast registration of images with very different appearances. The images are jointly segmented into a small number of classes, the segmented images are registered, and the process is repeated. The segmentation calculates feature vectors on superpixels and then it finds a softmax classifier maximizing mutual information between class labels in the two images. For speed, the registration considers a sparse set of rectangular neighborhoods on the interfaces between classes. A triangulation is created with spatial regularization handled by pairwise spring-like terms on the edges. The optimal transformation is found globally using loopy belief propagation. Multiresolution helps to improve speed and robustness. Our main application is registering stained histological slices, which are large and differ both in the local and global appearance. We show that our method has comparable accuracy to standard pixel-based registration, while being faster and more general."
]
} |
1511.01154 | 2228548648 | Multi-modal image registration is a challenging task that is vital to fuse complementary signals for subsequent analyses. Despite much research into cost functions addressing this challenge, there exist cases in which these are ineffective. In this work, we show that (1) this is true for the registration of in-vivo Drosophila brain volumes visualizing genetically encoded calcium indicators to an nc82 atlas and (2) that machine learning based contrast synthesis can yield improvements. More specifically, the number of subjects for which the registration outright failed was greatly reduced (from 40 to 15 ) by using a synthesized image. | A different approach appears in @cite_0 , where the target modality is ``synthesized'' from the source modality directly. Their proposed method uses a registered pair (one from each modality) of images of the same subject as an ``atlas.'' Their method uses a patch-based search with heuristics designed for MRI to estimate the target modality from the source. They show that intra-modality registration using the result outperforms inter-modality methods. | {
"cite_N": [
"@cite_0"
],
"mid": [
"1983060851"
],
"abstract": [
"The performance of image analysis algorithms applied to magnetic resonance images is strongly influenced by the pulse sequences used to acquire the images. Algorithms are typically optimized for a targeted tissue contrast obtained from a particular implementation of a pulse sequence on a specific scanner. There are many practical situations, including multi-institution trials, rapid emergency scans, and scientific use of historical data, where the images are not acquired according to an optimal protocol or the desired tissue contrast is entirely missing. This paper introduces an image restoration technique that recovers images with both the desired tissue contrast and a normalized intensity profile. This is done using patches in the acquired images and an atlas containing patches of the acquired and desired tissue contrasts. The method is an example-based approach relying on sparse reconstruction from image patches. Its performance in demonstrated using several examples, including image intensity normalization, missing tissue contrast recovery, automatic segmentation, and multimodal registration. These examples demonstrate potential practical uses and also illustrate limitations of our approach."
]
} |
1511.01154 | 2228548648 | Multi-modal image registration is a challenging task that is vital to fuse complementary signals for subsequent analyses. Despite much research into cost functions addressing this challenge, there exist cases in which these are ineffective. In this work, we show that (1) this is true for the registration of in-vivo Drosophila brain volumes visualizing genetically encoded calcium indicators to an nc82 atlas and (2) that machine learning based contrast synthesis can yield improvements. More specifically, the number of subjects for which the registration outright failed was greatly reduced (from 40 to 15 ) by using a synthesized image. | In their survey, @cite_6 describe many other alternative approaches. Most related to our method are the approaches by @cite_5 , who simulate an ultrasound image from CT using imaging physics and known tissue properties, and @cite_1 , who use a mixture of experts and an MRF to learn the probability of a target intensity conditioned on a source image patch. | {
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_6"
],
"mid": [
"2155634500",
"2161335770",
"2115167851"
],
"abstract": [
"Abstract The fusion of tracked ultrasound with CT has benefits for a variety of clinical applications, however extensive manual effort is usually required for correct registration. We developed new methods that allow one to simulate medical ultrasound from CT in real-time, reproducing the majority of ultrasonic imaging effects. They are combined with a robust similarity measure that assesses the correlation of a combination of signals extracted from CT with ultrasound, without knowing the influence of each signal. This serves as the foundation of a fully automatic registration, that aligns a 3D ultrasound sweep with the corresponding tomographic modality using a rigid or an affine transformation model, without any manual interaction. These techniques were evaluated in a study involving 25 patients with indeterminate lesions in liver and kidney. The clinical setup, acquisition and registration workflow is described, along with the evaluation of the registration accuracy with respect to physician-defined Ground Truth. Our new algorithm correctly registers without any manual interaction in 76 of the cases, the average RMS TRE over multiple target lesions throughout the liver is 8.1 mm.",
"The registration of multi-modal images is the process of finding a transformation which maps one image to the other according to a given similarity metric. In this paper, we introduce a novel approach for metric learning, aiming to address highly non functional correspondences through the integration of statistical regression and multi-label classification. We developed a position-invariant method that models the variations of intensities through the use of linear combinations of kernels that are able to handle intensity shifts. Such transport functions are considered as the singleton potentials of a Markov Random Field (MRF) where pair-wise connections encode smoothness as well as prior knowledge through a local neighborhood system. We use recent advances in the field of discrete optimization towards recovering the lowest potential of the designed cost function. Promising results on real data demonstrate the potentials of our approach.",
"Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: 1) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; 2) longitudinal studies, where temporal structural or anatomical changes are investigated; and 3) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently. The most recent techniques are presented in a systematic fashion. The contribution of this paper is to provide an extensive account of registration techniques in a systematic manner."
]
} |
1511.01281 | 2215001229 | Recently, clustering moving object trajectories has been gaining interest from both the data mining and machine learning communities. This problem, however, has been studied mainly in the setting where moving objects can move freely in Euclidean space. In this paper, we study the problem of clustering trajectories of vehicles whose movement is restricted by the underlying road network. We model relations between these trajectories and road segments as a bipartite graph and we try to cluster its vertices. We demonstrate our approaches on synthetic data and show how they could be useful in inferring knowledge about the flow dynamics and the behavior of the drivers using the road network. | Approaches to trajectory clustering are mainly adaptations of existing algorithms to the case of trajectories. These include moving clusters @cite_10 , flock patterns @cite_7 , convoy patterns @cite_15 , the TRACLUS partition-and-group framework @cite_22 , etc. The aforementioned algorithms use Euclidean-based similarities and distances and disregard the presence of an underlying network. Therefore, they can be used only in the case of unconstrained trajectories. The insightful idea of using a graph-based approach to cluster trajectory data was first introduced in @cite_19 . The approach is applied to free-moving trajectories and considers the latter as sets of GPS points. Unlike our graph-based approaches, the authors do not rely on an underlying network as the basis of similarity calculations. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_19",
"@cite_15",
"@cite_10"
],
"mid": [
"1981398125",
"2007159514",
"2120436185",
"2028618575",
""
],
"abstract": [
"Existing trajectory clustering algorithms group similar trajectories as a whole, thus discovering common trajectories. Our key observation is that clustering trajectories as a whole could miss common sub-trajectories. Discovering common sub-trajectories is very useful in many applications, especially if we have regions of special interest for analysis. In this paper, we propose a new partition-and-group framework for clustering trajectories, which partitions a trajectory into a set of line segments, and then, groups similar line segments together into a cluster. The primary advantage of this framework is to discover common sub-trajectories from a trajectory database. Based on this partition-and-group framework, we develop a trajectory clustering algorithm TRACLUS. Our algorithm consists of two phases: partitioning and grouping. For the first phase, we present a formal trajectory partitioning algorithm using the minimum description length(MDL) principle. For the second phase, we present a density-based line-segment clustering algorithm. Experimental results demonstrate that TRACLUS correctly discovers common sub-trajectories from real trajectory data.",
"Data representing moving objects is rapidly getting more available, especially in the area of wildlife GPS tracking. It is a central belief that information is hidden in large data sets in the form of interesting patterns, where a pattern can be any configuration of some moving objects in a certain area and or during a certain time period. One of the most common spatio-temporal patterns sought after is flocks. A flock is a large enough subset of objects moving along paths close to each other for a certain pre-defined time. We give a new definition that we argue is more realistic than the previous ones, and by the use of techniques from computational geometry we present fast algorithms to detect and report flocks. The algorithms are analysed both theoretically and experimentally.",
"It is difficult to extract meaningful patterns from massive trajectory data. One of the main challenges is to characterise, compare and generalise trajectories to find overall patterns and trends. The major limitation of existing methods is that they do not consider topological relations among trajectories. This research proposes a graph-based approach that converts trajectory data to a graph-based representation and treats them as a complex network. Within the context of vehicle movements, the research develops a sequence of steps to extract representative points to reduce data redundancy, interpolate trajectories to accurately establish topological relationships among trajectories and locations, construct a graph (or matrix) representation of trajectories, apply a spatially constrained graph partitioning method to discover natural regions defined by trajectories and use the discovered regions to search and visualise trajectory clusters. Applications with a real data set shows that our new approach can effectively facilitate the understanding of spatial and spatiotemporal patterns in trajectories and discover novel patterns that existing methods cannot find.",
"We introduce a convoy query that retrieves all convoys from historical trajectories, each of which consists of a set of objects that travelled closely during a certain time period. Convoy query is useful for many applications such as carpooling and traffic jam analysis, however, limited work has been done in the database community. This study proposes three efficient methods for discovering convoys. The main novelty of our methods is to approximate original trajectories by using line simplification methods and perform the discovery process over the simplified trajectories with bounded errors. Our experimental results confirm the effectiveness and efficiency of our methods.",
""
]
} |
1511.01281 | 2215001229 | Recently, clustering moving object trajectories has been gaining interest from both the data mining and machine learning communities. This problem, however, has been studied mainly in the setting where moving objects can move freely in Euclidean space. In this paper, we study the problem of clustering trajectories of vehicles whose movement is restricted by the underlying road network. We model relations between these trajectories and road segments as a bipartite graph and we try to cluster its vertices. We demonstrate our approaches on synthetic data and show how they could be useful in inferring knowledge about the flow dynamics and the behavior of the drivers using the road network. | The first attempt to study the similarity between network-constrained trajectories is reported in @cite_25 . The proposed approach requires a priori knowledge of points of interest in the road network and consequently cannot be used in an unsupervised learning context. An extension of moving clusters to network-constrained trajectories is presented in @cite_8 . Roh and Hwang @cite_5 present a network-aware approach to clustering trajectories where the distance between trajectories in the road network is measured using shortest path calculations. A baseline algorithm, using agglomerative hierarchical clustering, as well as a more efficient algorithm, called NNCluster, are presented for the purpose of regrouping the network-constrained trajectories. In @cite_1 , the authors describe an approach to discovering ``dense paths'' or sequences of frequently traveled segments in a road network. The approach is extended in @cite_9 to study the temporal evolution of dense paths. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_5",
"@cite_25"
],
"mid": [
"1481597889",
"2065500819",
"1502063784",
"1533876949",
"1841789789"
],
"abstract": [
"Spatial-Temporal clustering is one of the most important analysis tasks in spatial databases. Especially, in many real applications, real time data analysis such as clustering moving objects in spatial networks or traffic congestion prediction is more meaningful.Extensive method of clustering moving objects in Euclidean space is more complex and expensive. This paper proposes the scheme of clustering continuously moving objects, analyzes the fixed feature of the road network, proposes a notion of Virtual Clustering Unit (VCU) and improves on the existing algorithm. Performance analysis shows that the new scheme achieves high efficiency and accuracy for continuous clustering of moving objects in road networks.",
"Managing and mining data derived from moving objects is becoming an important issue in the last years. In this paper, we are interested in mining trajectories of moving objects such as vehicles in the road network. We propose a method for dense route discovery by clustering similar road sections according to both traffic and location in each time period. The traffic estimation is based on the collected spatiotemporal trajectories. We also propose a characterization approach of the temporal evolution of dense routes by a graph of route connection over consecutive time periods. This graph is labelled by a degree of evolution. We have implemented and tested the proposed algorithms, which have shown their effectiveness and efficiency.",
"Spatial data mining is an active topic in spatial databases. This paper proposes a new clustering method for moving object trajectories databases. It applies specifically to trajectories that only lie on a predefined network. The proposed algorithm (NETSCAN) is inspired from the well-known density based algorithms. However, it takes advantage of the network constraint to estimate the object density. Indeed, NETSCAN first computes dense paths in the network based on the moving object count, then, it clusters the sub-trajectories which are similar to the dense paths. The user can adjust the clustering result by setting a density threshold for the dense paths, and a similarity threshold within the clusters. This paper describes the proposed method. An implementation is reported, along with experimental results that show the effectiveness of our approach and the flexibility allowed by the user parameters.",
"With the advent of ubiquitous computing, we can easily acquire the locations of moving objects. This paper studies clustering problems for trajectory data that is constrained by the road network. While many trajectory clustering algorithms have been proposed, they do not consider the spatial proximity of objects across the road network. For this kind of data, we propose a new distance measure that reflects the spatial proximity of vehicle trajectories on the road network, and an efficient clustering method that reduces the number of distance computations during the clustering process. Experimental results demonstrate that our proposed method correctly identifies clusters using real-life trajectory data yet reduces the distance computations by up to 80 against the baseline algorithm.",
"In order to analyze the behavior of moving objects, a measure for determining the similarity of trajectories needs to be defined. Although research has been conducted on retrieving similar trajectories of moving objects in Euclidean space, very little research has been conducted on moving objects in the space defined by road networks. In terms of real applications, most moving objects are located in road network space rather than in Euclidean space. In this paper, we investigate the properties of similar trajectories in road network space, and we propose a method to retrieve similar trajectories based on this observation, together with a similarity measure between trajectories in road network space. Experimental results show that this method provides not only a practical method for searching for similar trajectories but also a clustering method for trajectories."
]
} |
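The NETSCAN abstract above describes a two-phase procedure whose first step computes dense paths from per-segment moving-object counts. As a rough illustration of that counting step only (the function name, toy network, and trajectory encoding as ordered node-id lists are our assumptions, not from the paper):

```python
from collections import Counter

def dense_edges(trajectories, density_threshold):
    """Hypothetical NETSCAN-style seeding step: count how many
    trajectories traverse each road segment and keep the segments
    whose traffic meets the density threshold."""
    traffic = Counter()
    for traj in trajectories:                 # traj = ordered list of node ids
        for edge in zip(traj, traj[1:]):      # consecutive nodes -> road segment
            traffic[edge] += 1
    return {e for e, c in traffic.items() if c >= density_threshold}

# Toy road network: three cars share the A-B-C corridor, one detours via D.
trajs = [["A", "B", "C"], ["A", "B", "C"], ["A", "B", "C", "E"], ["A", "D", "C"]]
print(sorted(dense_edges(trajs, density_threshold=3)))  # [('A', 'B'), ('B', 'C')]
```

In the full algorithm, contiguous dense segments would then be chained into dense paths and sub-trajectories clustered around them; the density threshold plays the role of NETSCAN's user-set parameter.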
1511.01281 | 2215001229 | Recently, clustering moving object trajectories kept gaining interest from both the data mining and machine learning communities. This problem, however, was studied mainly and extensively in the setting where moving objects can move freely on the euclidean space. In this paper, we study the problem of clustering trajectories of vehicles whose movement is restricted by the underlying road network. We model relations between these trajectories and road segments as a bipartite graph and we try to cluster its vertices. We demonstrate our approaches on synthetic data and show how it could be useful in inferring knowledge about the flow dynamics and the behavior of the drivers using the road network. | Our approaches differ from existing propositions on two key aspects. First, the majority of existing work use density-based algorithms that require fine-tuning of their parameter values and assume that trajectories in the same cluster have a rather homogeneous density (which is rarely the case as discussed in @cite_5 ). In contrast, we opt for non-parametric algorithms that rely on robust and well defined clustering quality criteria. Secondly, existing approaches often use flat clustering, thus producing a unique level of clusters that can be overwhelming to analyse in the case of large datasets. Our propositions produce hierarchies of nested clusters that are suitable for multi-level exploration: the user can start with a small number of clusters to quickly understand the macro-organization of flow dynamics in the road network, then proceed to refining clusters of interest to reveal more details. | {
"cite_N": [
"@cite_5"
],
"mid": [
"1533876949"
],
"abstract": [
"With the advent of ubiquitous computing, we can easily acquire the locations of moving objects. This paper studies clustering problems for trajectory data that is constrained by the road network. While many trajectory clustering algorithms have been proposed, they do not consider the spatial proximity of objects across the road network. For this kind of data, we propose a new distance measure that reflects the spatial proximity of vehicle trajectories on the road network, and an efficient clustering method that reduces the number of distance computations during the clustering process. Experimental results demonstrate that our proposed method correctly identifies clusters using real-life trajectory data yet reduces the distance computations by up to 80% against the baseline algorithm."
]
} |
1511.01361 | 2280226823 | We consider the piecewise linear approximation of saddle functions of the form (f(x,y)=ax^2-by^2 ) under the (L_ ) error norm. We show that interpolating approximations are not optimal. One can get slightly smaller errors by allowing the vertices of the approximation to move away from the graph of the function. | Every smooth function can be approximated by a quadratic function in some neighborhood, and the same is true for surfaces. In this sense, our results are applicable as a model, for a smooth surface or a smooth function as the approximation gets more and more refined. This approach has been pioneered in the above-mentioned paper of Pottmann et al. @cite_3 . Our contribution is to improve the result for non-interpolating approximation of saddle surfaces. | {
"cite_N": [
"@cite_3"
],
"mid": [
"917503"
],
"abstract": [
"We study piecewise linear approximation of quadratic functions defined on R^n. Invariance properties and canonical Cayley-Klein metrics that help in understanding this problem can be handled in arbitrary dimensions. However, the problem of optimal approximants, in the sense that their linear pieces are of maximal size while keeping a given error tolerance, is a difficult one. We present a detailed discussion of the case n = 2, where we can partially use results from convex geometry and discrete geometry. The case n = 3 is considerably harder, and thus just a few results can be formulated so far."
]
} |
1511.01361 | 2280226823 | We consider the piecewise linear approximation of saddle functions of the form (f(x,y)=ax^2-by^2 ) under the (L_ ) error norm. We show that interpolating approximations are not optimal. One can get slightly smaller errors by allowing the vertices of the approximation to move away from the graph of the function. | Bertram, Barnes, Hamann, Joy, Pottmann, and Wushour @cite_1 have extended this approach to an arbitrary bivariate function @math , by taking optimal local approximations on suitably defined patches and stitching'' them together at the patch boundaries. (The setting of this paper actually somewhat different: the bivariate function @math is given as a set of scattered data points.) | {
"cite_N": [
"@cite_1"
],
"mid": [
"2014084922"
],
"abstract": [
"We present an efficient algorithm to obtain a triangulated graph surface for scattered points (x_i, y_i)^T, i=1,…,n, with associated function values f_i. The maximal distance between the triangulated graph surface and the function values is measured in z-direction (z=f(x,y)) and lies within a user-defined tolerance. The number of triangles is minimized locally by adapting their shapes to different second-degree least squares approximations of the underlying data. The method consists of three major steps: 1. subdividing the given discrete data set into clusters such that each cluster can be approximated by a quadratic polynomial within a prescribed tolerance; 2. optimally triangulating the graph surface of each quadratic polynomial; and 3. “stitching” the triangulations of all graph surfaces together. We also discuss modifications of the algorithm that are necessary to deal with discrete data points, without connectivity information, originating from a general two-manifold surface, i.e., a surface in three-dimensional space that is not necessarily a graph surface."
]
} |
1511.01361 | 2280226823 | We consider the piecewise linear approximation of saddle functions of the form (f(x,y)=ax^2-by^2 ) under the (L_ ) error norm. We show that interpolating approximations are not optimal. One can get slightly smaller errors by allowing the vertices of the approximation to move away from the graph of the function. | In arbitrary dimensions, the problem of optimal piecewise linear approximation has been adressed by Clarkson @cite_4 , without deriving explicit constant factors. For convex functions and convex bodies, there is a vast literature on optimal piecewise linear approximation in many variations, see for example the treatment in @cite_3 and the references given there. | {
"cite_N": [
"@cite_4",
"@cite_3"
],
"mid": [
"2052967435",
"917503"
],
"abstract": [
"This work addresses the problem of approximating a manifold by a simplicial mesh, and the related problem of building triangulations for the purpose of piecewise-linear approximation of functions. It has long been understood that the vertices of such meshes or triangulations should be \"well-distributed,\" or satisfy certain \"sampling conditions.\" This work clarifies and extends some algorithms for finding such well-distributed vertices, by showing that they can be regarded as finding ε-nets or Delone sets in appropriate metric spaces. In some cases where such Delone properties were already understood, such as for meshes to approximate smooth manifolds that bound convex bodies, the upper and lower bound results are extended to more general manifolds; in particular, under some general conditions, the minimum Hausdorff distance for a mesh with n simplices to a d-manifold M is Θ((∫_M √|κ(x)| dx / n)^(2/d)) as n → ∞, where κ(x) is the Gaussian curvature at point x ∈ M. We also relate these constructions to Dudley's approximation scheme for convex bodies, which can be interpreted as involving an ε-net in a metric space whose distance function depends on surface normals.",
"We study piecewise linear approximation of quadratic functions defined on R^n. Invariance properties and canonical Cayley-Klein metrics that help in understanding this problem can be handled in arbitrary dimensions. However, the problem of optimal approximants, in the sense that their linear pieces are of maximal size while keeping a given error tolerance, is a difficult one. We present a detailed discussion of the case n = 2, where we can partially use results from convex geometry and discrete geometry. The case n = 3 is considerably harder, and thus just a few results can be formulated so far."
]
} |
1511.01169 | 2964312150 | Recurrent Neural Networks, or RNNs, are powerful models that achieve exceptional performance on a plethora of pattern recognition problems. However, the training of RNNs is a computationally difficult task owing to the well-known "vanishing/exploding" gradient problem. Algorithms proposed for training RNNs either exploit no or limited curvature information and have cheap per-iteration complexity, or attempt to gain significant curvature information at the cost of increased per-iteration cost. The former set includes diagonally-scaled first-order methods such as Adagrad and Adam, while the latter consists of second-order algorithms like Hessian-Free Newton and K-FAC. In this paper, we present adaQN, a stochastic quasi-Newton algorithm for training RNNs. Our approach retains a low per-iteration cost while allowing for non-diagonal scaling through a stochastic L-BFGS updating scheme. The method uses a novel L-BFGS scaling initialization scheme and is judicious in storing and retaining L-BFGS curvature pairs. We present numerical experiments on two language modeling tasks and show that adaQN is competitive with popular RNN training algorithms. | There are diagonally-scaled first-order algorithms that perform well on the RNN training task. These algorithms can be interpreted as attempts to devise second-order methods via inexpensive diagonal Hessian approximations. @cite_21 allows for the independent scaling of each variable, thus partly addressing the issues arising from ill-conditioning. can be written in the general updating form by setting @math and by updating @math (which is a diagonal matrix) as where @math is used to prevent numerical instability arising from dividing by small quantities. Another first-order stochastic method that is known to perform well empirically in RNN training is @cite_8 . The update, which is a combination of @cite_26 and momentum, can be represented as follows in the form of , | {
"cite_N": [
"@cite_21",
"@cite_26",
"@cite_8"
],
"mid": [
"2146502635",
"",
"1522301498"
],
"abstract": [
"We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms.",
"",
"We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm."
]
} |
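The Adagrad and Adam updates discussed in the row above act per coordinate and are short enough to write out. The sketch below is purely illustrative (plain Python lists, our own function names; hyperparameter defaults follow the commonly used values, and eps plays the stabilizing role described in the related-work text):

```python
import math

def adagrad_step(theta, grad, accum, lr=0.01, eps=1e-8):
    """Adagrad: scale each coordinate by the root of its accumulated
    squared gradients; eps guards against division by tiny quantities."""
    accum = [a + g * g for a, g in zip(accum, grad)]
    theta = [t - lr * g / (math.sqrt(a) + eps)
             for t, g, a in zip(theta, grad, accum)]
    return theta, accum

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: momentum on the first moment, RMS scaling by the second,
    with bias correction of both estimates (t is the 1-based step count)."""
    m = [b1 * mi + (1 - b1) * g for mi, g in zip(m, grad)]
    v = [b2 * vi + (1 - b2) * g * g for vi, g in zip(v, grad)]
    m_hat = [mi / (1 - b1 ** t) for mi in m]
    v_hat = [vi / (1 - b2 ** t) for vi in v]
    theta = [th - lr * mh / (math.sqrt(vh) + eps)
             for th, mh, vh in zip(theta, m_hat, v_hat)]
    return theta, m, v

# One Adagrad step on f(x) = x^2 starting at x = 1 (gradient 2x = 2):
theta, accum = adagrad_step([1.0], [2.0], [0.0], lr=0.5)
print(round(theta[0], 6))  # 0.5
```

Both methods fit the "diagonal matrix @math" template of the related-work text: the scaling matrix is never formed explicitly, only its diagonal is tracked.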
1511.01169 | 2964312150 | Recurrent Neural Networks, or RNNs, are powerful models that achieve exceptional performance on a plethora of pattern recognition problems. However, the training of RNNs is a computationally difficult task owing to the well-known "vanishing/exploding" gradient problem. Algorithms proposed for training RNNs either exploit no or limited curvature information and have cheap per-iteration complexity, or attempt to gain significant curvature information at the cost of increased per-iteration cost. The former set includes diagonally-scaled first-order methods such as Adagrad and Adam, while the latter consists of second-order algorithms like Hessian-Free Newton and K-FAC. In this paper, we present adaQN, a stochastic quasi-Newton algorithm for training RNNs. Our approach retains a low per-iteration cost while allowing for non-diagonal scaling through a stochastic L-BFGS updating scheme. The method uses a novel L-BFGS scaling initialization scheme and is judicious in storing and retaining L-BFGS curvature pairs. We present numerical experiments on two language modeling tasks and show that adaQN is competitive with popular RNN training algorithms. | Let us first consider the Hessian-Free Newton methods (HF) proposed in @cite_15 @cite_4 . These methods can be represented in the form of by setting @math to be an approximation to the inverse of the Hessian matrix ( @math ), as described below, with the circumstantial use of momentum to improve convergence. HF is a second-order optimization method that has two major ingredients: (i) it implicitly creates and solves quadratic models using matrix-vector products with the Gauss-Newton matrix obtained using the "Pearlmutter trick" and (ii) it uses the Conjugate Gradient method (CG) for solving the sub-problems inexactly.
Recently, @cite_17 proposed K-FAC, a method that computes a second-order step by constructing an invertible approximation of a neural network's Fisher information matrix in an online fashion. The authors claim that the increased quality of the step offsets the increase in the per-iteration cost of the algorithm. | {
"cite_N": [
"@cite_15",
"@cite_4",
"@cite_17"
],
"mid": [
"196761320",
"",
"2155894447"
],
"abstract": [
"We develop a 2nd-order optimization method based on the \"Hessian-free\" approach, and apply it to training deep auto-encoders. Without using pre-training, we obtain results superior to those reported by Hinton & Salakhutdinov (2006) on the same tasks they considered. Our method is practical, easy to use, scales nicely to very large datasets, and isn't limited in applicability to auto-encoders, or any specific model class. We also discuss the issue of \"pathological curvature\" as a possible explanation for the difficulty of deep-learning and how 2nd-order optimization, and our method in particular, effectively deals with it.",
"",
"We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-Factored Approximate Curvature (K-FAC). K-FAC is based on an efficiently invertible approximation of a neural network's Fisher information matrix which is neither diagonal nor low-rank, and in some cases is completely non-sparse. It is derived by approximating various large blocks of the Fisher (corresponding to entire layers) as being the Kronecker product of two much smaller matrices. While only several times more expensive to compute than the plain stochastic gradient, the updates produced by K-FAC make much more progress optimizing the objective, which results in an algorithm that can be much faster than stochastic gradient descent with momentum in practice. And unlike some previously proposed approximate natural-gradient Newton methods which use high-quality non-diagonal curvature matrices (such as Hessian-free optimization), K-FAC works very well in highly stochastic optimization regimes. This is because the cost of storing and inverting K-FAC's approximation to the curvature matrix does not depend on the amount of data used to estimate it, which is a feature typically associated only with diagonal or low-rank approximations to the curvature matrix."
]
} |
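HF's second ingredient, solving the quadratic subproblem with CG using only matrix-vector products, can be sketched independently of the model. The Pearlmutter-trick product itself is model-specific and not shown; in this illustrative, dependency-free sketch `matvec` stands in for any symmetric positive-definite operator:

```python
def cg_solve(matvec, b, iters=50, tol=1e-10):
    """Conjugate Gradient for A x = b, touching A only through matvec."""
    x = [0.0] * len(b)
    r = list(b)                      # residual b - A x, with x = 0
    p = list(r)                      # current search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        if rs < tol:
            break
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Toy SPD system: A = diag(2, 4), b = (2, 4), exact solution (1, 1).
x = cg_solve(lambda v: [2.0 * v[0], 4.0 * v[1]], [2.0, 4.0])
print([round(xi, 6) for xi in x])  # [1.0, 1.0]
```

In HF training the early termination (`iters`, `tol`) is what makes the subproblem solves "inexact" in the sense used above.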
1511.01169 | 2964312150 | Recurrent Neural Networks, or RNNs, are powerful models that achieve exceptional performance on a plethora of pattern recognition problems. However, the training of RNNs is a computationally difficult task owing to the well-known "vanishing/exploding" gradient problem. Algorithms proposed for training RNNs either exploit no or limited curvature information and have cheap per-iteration complexity, or attempt to gain significant curvature information at the cost of increased per-iteration cost. The former set includes diagonally-scaled first-order methods such as Adagrad and Adam, while the latter consists of second-order algorithms like Hessian-Free Newton and K-FAC. In this paper, we present adaQN, a stochastic quasi-Newton algorithm for training RNNs. Our approach retains a low per-iteration cost while allowing for non-diagonal scaling through a stochastic L-BFGS updating scheme. The method uses a novel L-BFGS scaling initialization scheme and is judicious in storing and retaining L-BFGS curvature pairs. We present numerical experiments on two language modeling tasks and show that adaQN is competitive with popular RNN training algorithms. | Recently, several stochastic quasi-Newton algorithms have been developed for large-scale machine learning problems: oLBFGS @cite_6 @cite_22 , RES @cite_23 , SDBFGS @cite_13 , SFO @cite_25 and SQN @cite_20 . These methods can be represented in the form of by setting @math and using a quasi-Newton approximation for the matrix @math . The methods enumerated above differ in three major aspects: (i) the update rule for the curvature pairs used in the computation of the quasi-Newton matrix, (ii) the frequency of updating, and (iii) the applicability to non-convex problems. With the exception of SDBFGS, all aforementioned methods have been designed to solve convex optimization problems. In all these methods, careful attention must be paid to monitoring the quality of the curvature information that is used.
In deterministic optimization, the (L-)BFGS curvature information is constructed by first computing the iterate and gradient differences, and then using them to determine the Hessian-approximation through the (L-)BFGS update rule. The positive-definiteness of the approximation is either guaranteed through skipping or through a Wolfe line-search. | {
"cite_N": [
"@cite_22",
"@cite_6",
"@cite_23",
"@cite_13",
"@cite_25",
"@cite_20"
],
"mid": [
"1592294486",
"1491622225",
"2042173174",
"2964303576",
"",
"2963941964"
],
"abstract": [
"Global convergence of an online (stochastic) limited memory version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method for solving optimization problems with stochastic objectives that arise in large scale machine learning is established. Lower and upper bounds on the Hessian eigenvalues of the sample functions are shown to suffice to guarantee that the curvature approximation matrices have bounded determinants and traces, which, in turn, permits establishing convergence to optimal arguments with probability 1. Experimental evaluation on a search engine advertising problem showcases reductions in convergence time relative to stochastic gradient descent algorithms.",
"We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and memory-limited (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperforms previous stochastic gradient methods for parameter estimation in conditional random fields. We are working on analyzing the convergence of online (L)BFGS, and extending it to nonconvex optimization problems.",
"RES, a regularized stochastic version of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) quasi-Newton method, is proposed to solve strongly convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second-order methods, on the other hand, is impracticable because the computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee almost sure convergence of a subsequence generated by RES and convergence of the sequence in expectation to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.",
"In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle ( @math ). We propose a general framework for such methods, for which we prove almost sure convergence to stationary points and analyze its worst-case iteration complexity. When a randomly chosen iterate is returned as the output of such an algorithm, we prove that in the worst case, the @math -calls complexity is @math to ensure that the expectation of the squared norm of the gradient is smaller than the given accuracy tolerance @math . We also propose a specific algorithm, namely, a stochastic damped limited-memory BFGS (SdLBFGS) method, that falls under the proposed framework. Moreover, we incorporate the stochastic variance reduced gradient variance reduction technique into the proposed SdLBFGS method and analyze its @math -calls compl...",
"",
"The question of how to incorporate curvature information into stochastic approximation methods is challenging. The direct application of classical quasi-Newton updating techniques for deterministic optimization leads to noisy curvature estimates that have harmful effects on the robustness of the iteration. In this paper, we propose a stochastic quasi-Newton method that is efficient, robust, and scalable. It employs the classical BFGS update formula in its limited memory form, and is based on the observation that it is beneficial to collect curvature information pointwise, and at spaced intervals. One way to do this is through (subsampled) Hessian-vector products. This technique differs from the classical approach that would compute differences of gradients at every iteration, and where controlling the quality of the curvature estimates can be difficult. We present numerical results on problems arising in machine learning that suggest that the proposed method shows much promise."
]
} |
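The curvature pairs described above (iterate differences s_k and gradient differences y_k) are normally applied through the standard L-BFGS two-loop recursion with gamma scaling of the initial matrix. A dependency-free sketch, assuming the stored pairs satisfy the curvature condition s^T y > 0 (as the skipping or Wolfe line-search mechanisms mentioned above guarantee):

```python
def lbfgs_direction(grad, s_list, y_list):
    """Return -H*grad, where H is the L-BFGS inverse-Hessian approximation
    built from pairs s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    q = list(grad)
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):   # newest pair first
        rho = 1.0 / dot(y, s)
        a = rho * dot(s, q)
        alphas.append((a, rho))
        q = [qi - a * yi for qi, yi in zip(q, y)]
    if s_list:                                         # gamma scaling of H0
        s, y = s_list[-1], y_list[-1]
        gamma = dot(s, y) / dot(y, y)
        q = [gamma * qi for qi in q]
    for (a, rho), (s, y) in zip(reversed(alphas), zip(s_list, y_list)):
        b = rho * dot(y, q)
        q = [qi + (a - b) * si for qi, si in zip(q, s)]
    return [-qi for qi in q]

# With no stored pairs this reduces to steepest descent:
print(lbfgs_direction([3.0], [], []))            # [-3.0]
# One exact pair from f(x) = x^2 (Hessian 2): Newton direction -g/2.
print(lbfgs_direction([4.0], [[1.0]], [[2.0]]))  # [-2.0]
```

The methods surveyed above differ mainly in how the (s, y) pairs fed into this recursion are generated and vetted, not in the recursion itself.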
1511.01293 | 1899714018 | The interest in 3D dynamical tracking is growing in fields such as robotics, biology and fluid dynamics. Recently, a major source of progress in 3D tracking has been the study of collective behaviour in biological systems, where the trajectories of individual animals moving within large and dense groups need to be reconstructed to understand the behavioural interaction rules. Experimental data in this field are generally noisy and at low spatial resolution, so that individuals appear as small featureless objects and trajectories must be retrieved by making use of epipolar information only. Moreover, optical occlusions often occur: in a multi-camera system one or more objects become indistinguishable in one view, potentially jeopardizing the conservation of identity over long-time trajectories. The most advanced 3D tracking algorithms overcome optical occlusions making use of set-cover techniques, which however have to solve NP-hard optimization problems. Moreover, current methods are not able to cope with occlusions arising from actual physical proximity of objects in 3D space. Here, we present a new method designed to work directly in 3D space and time, creating (3D+1) clouds of points representing the full spatio-temporal evolution of the moving targets. We can then use a simple connected components labeling routine, which is linear in time, to solve optical occlusions, hence lowering from NP to P the complexity of the problem. Finally, we use normalized cut spectral clustering to tackle 3D physical proximity. | The first @math tracking algorithms dealing with featureless objects were developed in the field of fluid dynamics, where the motion of passive tracer particles is studied to investigate turbulent fluid flows. The most successful algorithm in this field is the one presented in @cite_16 , which solves occlusion-related ambiguities locally in time, potentially producing fragmented trajectories. 
However, in the study of turbulence one can actually tune the density of tracers, so decreasing the optical density to a point where this is no longer critical. Clearly, this cannot be done in biological systems. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2067472975"
],
"abstract": [
"A neural network particle finding algorithm and a new four-frame predictive tracking algorithm are proposed for three-dimensional Lagrangian particle tracking (LPT). A quantitative comparison of these and other algorithms commonly used in three-dimensional LPT is presented. Weighted averaging, one-dimensional and two-dimensional Gaussian fitting, and the neural network scheme are considered for determining particle centers in digital camera images. When the signal to noise ratio is high, the one-dimensional Gaussian estimation scheme is shown to achieve a good combination of accuracy and efficiency, while the neural network approach provides greater accuracy when the images are noisy. The effect of camera placement on both the yield and accuracy of three-dimensional particle positions is investigated, and it is shown that at least one camera must be positioned at a large angle with respect to the other cameras to minimize errors. Finally, the problem of tracking particles in time is studied. The nearest neighbor algorithm is compared with a three-frame predictive algorithm and two four-frame algorithms. These four algorithms are applied to particle tracks generated by direct numerical simulation both with and without a method to resolve tracking conflicts. The new four-frame predictive algorithm with no conflict resolution is shown to give the best performance. Finally, the best algorithms are verified to work in a real experimental environment."
]
} |
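The abstract in the row above reduces optical-occlusion handling to connected-components labeling on the (3D+1) cloud of points, a step that is linear in the size of the graph. As a reference point, a BFS labeling routine looks like this (how proximity edges are built from the point cloud is application-specific and not shown):

```python
from collections import deque

def connected_components(n, edges):
    """Label the connected components of an undirected graph with BFS;
    runs in time linear in vertices + edges. Vertices stand in for points
    of the (3D+1) cloud, edges for spatio-temporal proximity links."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    label = [-1] * n
    current = 0
    for start in range(n):
        if label[start] != -1:
            continue
        label[start] = current
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if label[w] == -1:
                    label[w] = current
                    queue.append(w)
        current += 1
    return label

print(connected_components(5, [(0, 1), (1, 2), (3, 4)]))  # [0, 0, 0, 1, 1]
```

Each resulting component would correspond to one candidate trajectory cloud; the harder case of genuine physical proximity is what the paper then hands off to normalized-cut spectral clustering.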
1511.00666 | 1918740440 | The abelian sandpile model defines a Markov chain whose states are integer-valued functions on the vertices of a simple connected graph @math . By viewing this chain as a (nonreversible) random walk on an abelian group, we give a formula for its eigenvalues and eigenvectors in terms of 'multiplicative harmonic functions' on the vertices of @math . We show that the spectral gap of the sandpile chain is within a constant factor of the length of the shortest non-integer vector in the dual Laplacian lattice, while the mixing time is at most a constant times the smoothing parameter of the Laplacian lattice. We find a surprising inverse relationship between the spectral gap of the sandpile chain and that of simple random walk on @math : If the latter has a sufficiently large spectral gap, then the former has a small gap! In the case where @math is the complete graph on @math vertices, we show that the sandpile chain exhibits cutoff at time @math . | We were drawn to the topic of sandpile mixing times in part by its intrinsic mathematical interest and in part by questions arising in statistical physics. Many questions about the sandpile model, even if they appear unrelated to mixing, seem to lead inevitably to the study of its mixing time. For instance, the failure of the 'density conjecture' @cite_0 @cite_12 @cite_3 and the distinction between the stationary and threshold states @cite_22 are consequences of slow mixing. Our characterization of the eigenvalues of the sandpile chain may also help explain the recent finding of 'memory on two time scales' for the sandpile dynamics on the @math -dimensional square grid @cite_2 . | {
"cite_N": [
"@cite_22",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_12"
],
"mid": [
"2158827560",
"2055264800",
"2128709753",
"2081564414",
"2016111952"
],
"abstract": [
"We prove a precise relationship between the threshold state of the fixed-energy sandpile and the stationary state of Dhar’s abelian sandpile: in the limit as the initial condition s_0 tends to -∞, the former is obtained by size-biasing the latter according to burst size, an avalanche statistic. The question of whether and how these two states are related has been a subject of some controversy since 2000.",
"We consider the Abelian sandpile model (ASM) on the square lattice with a single dissipative site (sink). Particles are added one by one per unit time at random sites and the resulting density of particles is calculated as a function of time. We observe different scenarios of evolution depending on the value of initial uniform density (height) h0. During the first stage of the evolution, the density of particles increases linearly. Reaching a critical density ρc(h0), the system changes its behavior and relaxes exponentially to the stationary state of the ASM with density ρs. Considering initial heights −1≤h0≤4, we observe a dramatic decrease of the difference ρc(h0)−ρs when h0 is zero or negative. In parallel with the ASM, we consider the conservative fixed energy sandpile (FES). The extensive Monte Carlo simulations show that the threshold density ρth(h0) of the FES converges rapidly to ρs for h0<1.",
"A popular theory of self-organized criticality relates driven dissipative systems to systems with conservation. This theory predicts that the stationary density of the Abelian sandpile model equals the threshold density of the fixed-energy sandpile. We refute this prediction for a wide variety of underlying graphs, including the square grid. Driven dissipative sandpiles continue to evolve even after reaching criticality. This result casts doubt on the validity of using fixed-energy sandpiles to explore the critical behavior of the Abelian sandpile model at stationarity.",
"We report results of a numerical analysis of the memory effects in two-dimensional Abelian sandpiles. It is found that a sandpile forgets its instantaneous configuration in two distinct stages: a fast stage and a slow stage, whose durations roughly scale as N and N2 respectively, where N is the linear size of the sandpile. We confirm the presence of the longer time-scale by an independent diagnostic based on analysing emission probabilities of a hidden Markov model applied to a time-averaged sequence of avalanche sizes. The application of hidden Markov modelling to the output of sandpiles is novel. It discriminates effectively between a sandpile time series and a shuffled control time series with the same time-averaged event statistics and hence deserves further development as a pattern-recognition tool for Abelian sandpiles.",
""
]
} |
1511.00666 | 1918740440 | The abelian sandpile model defines a Markov chain whose states are integer-valued functions on the vertices of a simple connected graph @math . By viewing this chain as a (nonreversible) random walk on an abelian group, we give a formula for its eigenvalues and eigenvectors in terms of multiplicative harmonic functions' on the vertices of @math . We show that the spectral gap of the sandpile chain is within a constant factor of the length of the shortest non-integer vector in the dual Laplacian lattice, while the mixing time is at most a constant times the smoothing parameter of the Laplacian lattice. We find a surprising inverse relationship between the spectral gap of the sandpile chain and that of simple random walk on @math : If the latter has a sufficiently large spectral gap, then the former has a small gap! In the case where @math is the complete graph on @math vertices, we show that the sandpile chain exhibits cutoff at time @math . | The significance of the characters of @math for the sandpile model was first remarked by Dhar, Ruelle, Sen and Verma @cite_25 , who in particular analyzed the sandpile group of the square grid graph with sink at the boundary. Our multiplicative harmonic functions @math are related to the toppling invariants @math of @cite_25 by @math . | {
"cite_N": [
"@cite_25"
],
"mid": [
"2058525073"
],
"abstract": [
"The Abelian sandpile models feature a finite Abelian group G generated by the operators corresponding to particle addition at various sites. We study the canonical decomposition of G as a product of cyclic groups G=Z(d1)*Z(d2)*Z(d3)...*Z(dg), where g is the least number of generators of G, and di is a multiple of di+1. The structure of G is determined in terms of the toppling matrix Delta . We construct scalar functions, linear in the height variables of the pile, that are invariant under toppling at any site. These invariants provide convenient coordinates to label the recurrent configurations of the sandpile. For an L*L square lattice, we show that g=L. In this case, we observe that the system has non-trivial symmetries, transcending the obvious symmetries of the square, namely those coming from the action of the cyclotomic Galois group GalL of the 2(L+1)th roots of unity (which operates on the set of eigenvalues of h). These eigenvalues are algebraic integers, the product of which is the order mod G mod . With the help of GalL we are able to group the eigenvalues into certain subsets the products of which are separately integers, and thus obtain an explicit factorization of mod G mod . We also use GalL to define other simpler sets of toppling invariants."
]
} |
1511.00666 | 1918740440 | The abelian sandpile model defines a Markov chain whose states are integer-valued functions on the vertices of a simple connected graph @math . By viewing this chain as a (nonreversible) random walk on an abelian group, we give a formula for its eigenvalues and eigenvectors in terms of multiplicative harmonic functions' on the vertices of @math . We show that the spectral gap of the sandpile chain is within a constant factor of the length of the shortest non-integer vector in the dual Laplacian lattice, while the mixing time is at most a constant times the smoothing parameter of the Laplacian lattice. We find a surprising inverse relationship between the spectral gap of the sandpile chain and that of simple random walk on @math : If the latter has a sufficiently large spectral gap, then the former has a small gap! In the case where @math is the complete graph on @math vertices, we show that the sandpile chain exhibits cutoff at time @math . | The pseudoinverse @math has appeared before in the context of sandpiles: It is used by Björner, Lovász and Shor @cite_20 to bound the number of topplings until a configuration stabilizes; see @cite_14 for a recent improvement. In addition, the pseudoinverse is a crucial ingredient in the 'energy pairing' of Baker and Shokrieh @cite_32 . | {
"cite_N": [
"@cite_14",
"@cite_32",
"@cite_20"
],
"mid": [
"1523313899",
"2066653142",
""
],
"abstract": [
"A new bound (Theorem ) for the duration of the chip-firing game with @math chips on a @math -vertex graph is obtained, by a careful analysis of the pseudo-inverse of the discrete Laplacian matrix of the graph. This new bound is expressed in terms of the entries of the pseudo-inverse. It is shown (Section 5) to be always better than the classic bound due to Bj \"o rner, Lov ' a sz and Shor. In some cases the improvement is dramatic. For instance: for strongly regular graphs the classic and the new bounds reduce to @math and @math , respectively. For dense regular graphs - @math - the classic and the new bounds reduce to @math and @math , respectively. This is a snapshot of a work in progress, so further results in this vein are in the works.",
"We study the interplay between chip-firing games and potential theory on graphs, characterizing reduced divisors (G-parking functions) on graphs as the solution to an energy (or potential) minimization problem and providing an algorithm to efficiently compute reduced divisors. Applications include an ''efficient bijective'' proof of [email protected]?s matrix-tree theorem and a new algorithm for finding random spanning trees. The running times of our algorithms are analyzed using potential theory, and we show that the bounds thus obtained generalize and improve upon several previous results in the literature.",
""
]
} |
1511.00916 | 2231937797 | Programmers may be hesitant to use declarative systems, because of the associated learning curve. In this paper, we present an API that integrates the IDP Knowledge Base system into the Python programming language. IDP is a state-of-the-art logical system, which uses SAT, SMT, Logic Programming and Answer Set Programming technology. Python is currently one of the most widely used (teaching) languages for programming. The first goal of our API is to allow a Python programmer to use the declarative power of IDP, without needing to learn any new syntax or semantics. The second goal is to allow IDP to be added to or removed from an existing code base with minimal changes. | In @cite_5 , an approach is presented in which a constraint solver is not added to a single host language, but can be used in the development of a domain-specific language in Racket. Like ours, the motivation behind this work is to allow the power of declarative systems to be more widely used. However, their approach differs, because they count on an intermediary---the designer of the domain-specific language---to hide the complexity of the declarative system, whereas our approach focuses on creating an interface that is natural enough to offer KB functionality directly. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2163671349"
],
"abstract": [
"SAT and SMT solvers have automated a spectrum of programming tasks, including program synthesis, code checking, bug localization, program repair, and programming with oracles. In principle, we obtain all these benefits by translating the program (once) to a constraint system understood by the solver. In practice, however, compiling a language to logical formulas is a tricky process, complicated by having to map the solution back to the program level and extend the language with new solver-aided constructs, such as symbolic holes used in synthesis. This paper introduces ROSETTE, a framework for designing solver-aided languages. ROSETTE is realized as a solver-aided language embedded in Racket, from which it inherits extensive support for meta-programming. Our framework frees designers from having to compile their languages to constraints: new languages, and their solver-aided constructs, are defined by shallow (library-based) or deep (interpreter-based) embedding in ROSETTE itself. We describe three case studies, by ourselves and others, of using ROSETTE to implement languages and synthesizers for web scraping, spatial programming, and superoptimization of bitvector programs."
]
} |
1511.00916 | 2231937797 | Programmers may be hesitant to use declarative systems, because of the associated learning curve. In this paper, we present an API that integrates the IDP Knowledge Base system into the Python programming language. IDP is a state-of-the-art logical system, which uses SAT, SMT, Logic Programming and Answer Set Programming technology. Python is currently one of the most widely used (teaching) languages for programming. The first goal of our API is to allow a Python programmer to use the declarative power of IDP, without needing to learn any new syntax or semantics. The second goal is to allow IDP to be added to or removed from an existing code base with minimal changes. | In @cite_0 , a constraint solver is integrated into the Scala language. As ours does, their approach reuses the syntax of the host language to interface with the declarative system. A key difference is that, in their approach, the programmer is explicitly manipulating, combining and solving constraints, which makes the constraint solver more present in the eventual source code. A second difference is of course that Scala currently appears to be less widely known than Python. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2115261880"
],
"abstract": [
"We present an extension of Scala that supports constraint programming over bounded and unbounded domains. The resulting language, Kaplan, provides the benefits of constraint programming while preserving the existing features of Scala. Kaplan integrates constraint and imperative programming by using constraints as an advanced control structure; the developers use the monadic 'for' construct to iterate over the solutions of constraints or branch on the existence of a solution. The constructs we introduce have simple semantics that can be understood as explicit enumeration of values, but are implemented more efficiently using symbolic reasoning. Kaplan programs can manipulate constraints at run-time, with the combined benefits of type-safe syntax trees and first-class functions. The language of constraints is a functional subset of Scala, supporting arbitrary recursive function definitions over algebraic data types, sets, maps, and integers. Our implementation runs on a platform combining a constraint solver with a standard virtual machine. For constraint solving we use an algorithm that handles recursive function definitions through fair function unrolling and builds upon the state-of-the art SMT solver Z3. We evaluate Kaplan on examples ranging from enumeration of data structures to execution of declarative specifications. We found Kaplan promising because it is expressive, supporting a range of problem domains, while enabling full-speed execution of programs that do not rely on constraint programming."
]
} |
1511.00916 | 2231937797 | Programmers may be hesitant to use declarative systems, because of the associated learning curve. In this paper, we present an API that integrates the IDP Knowledge Base system into the Python programming language. IDP is a state-of-the-art logical system, which uses SAT, SMT, Logic Programming and Answer Set Programming technology. Python is currently one of the most widely used (teaching) languages for programming. The first goal of our API is to allow a Python programmer to use the declarative power of IDP, without needing to learn any new syntax or semantics. The second goal is to allow IDP to be added to or removed from an existing code base with minimal changes. | In @cite_7 , a reasoner for FO extended with transitive closure is integrated into Java. Their KB language is therefore very similar to (but more restricted than) that of IDP. When it comes to the integration in Java, there are two main differences to our approach. First, the declarative knowledge is not written in expressions in the host language, but in a separate language (the Alloy-like JFSL @cite_3 ). Second, the integration into Java is done in an object-oriented way: the programmer defines classes in which formulas are added as, among others, class invariants, method pre- and postconditions and frame conditions. In comparison, our Python API seems more lightweight, since it does not require an object-oriented approach. When it comes to computational performance, @cite_7 reports good results, which our implementation is not able to match. | {
"cite_N": [
"@cite_3",
"@cite_7"
],
"mid": [
"2115900163",
"2168617729"
],
"abstract": [
"This thesis presents a new light-weight specification language called JForge Specification Language (JFSL) for object-oriented languages such as Java. The language is amenable to bounded verification analysis by a tool called JForge that interprets JFSL specifications, fully integrates with a mainstream development environment, and assists programmers in examining counter example traces and debugging specifications. JFSL attempts to address challenges of specification languages such as inheritance, frame conditions, dynamic dispatch, and method calls inside specifications in the context of bounded verification. A collection of verification tasks illustrates the expressiveness and conciseness of JForge specifications and demonstrates effectiveness of the bounded verification technique. Thesis Supervisor: Daniel N. Jackson Title: Professor",
"We present a unified environment for running declarative specifications in the context of an imperative object-Oriented programming language. Specifications are Alloy-like, written in first-order relational logic with transitive closure, and the imperative language is Java. By being able to mix imperative code with executable declarative specifications, the user can easily express constraint problems in place, i.e., in terms of the existing data structures and objects on the heap. After a solution is found, the heap is updated to reflect the solution, so the user can continue to manipulate the program heap in the usual imperative way. We show that this approach is not only convenient, but, for certain problems can also outperform a standard imperative implementation. We also present an optimization technique that allowed us to run our tool on heaps with almost 2000 objects."
]
} |
1511.00628 | 2284177844 | Emerging location-based systems and data analysis frameworks requires efficient management of spatial data for approximate and exact search. Exact similarity search can be done using space partitioning data structures, such as Kd-tree, R*-tree, and Ball-tree. In this paper, we focus on Ball-tree, an efficient search tree that is specific for spatial queries which use euclidean distance. Each node of a Ball-tree defines a ball, i.e. a hypersphere that contains a subset of the points to be searched. In this paper, we propose Ball*-tree, an improved Ball-tree that is more efficient for spatial queries. Ball*-tree enjoys a modified space partitioning algorithm that considers the distribution of the data points in order to find an efficient splitting hyperplane. Also, we propose a new algorithm for KNN queries with restricted range using Ball*-tree, which performs better than both KNN and range search for such queries. Results show that Ball*-tree performs 39 -57 faster than the original Ball-tree algorithm. | Several types of metric trees has been recently proposed. We first introduce the idea presented by Omohundro @cite_39 and Uhlmann @cite_6 . Unlike KD-trees, metric-trees do not require data to be in vector form. Hence, metric-trees can be applied to any data representation as long as the data is in the metric space @cite_8 . For a detailed performance evaluation against established NN search methods, see @cite_2 @cite_9 @cite_28 . | {
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_6",
"@cite_39",
"@cite_2"
],
"mid": [
"2129430460",
"1556653975",
"1824657313",
"",
"1496508106",
"1738361408"
],
"abstract": [
"Abstract The Approximating and Eliminating Search Algorithm (AESA) can currently be considered as one of the most efficient procedures for finding Nearest Neighbours in Metric Spaces where distances computation is expensive. One of the major bottlenecks of the AESA, however, is its quadratic preprocessing time and memory space requirements which, in practice, can severely limit the applicability of the algorithm for large sets of data. In this paper a new version of the AESA is introduced which only requires linear preprocessing time and memory. The performance of the new version, referred to as ‘Linear AESA’ (LAESA), is studied through a number of simulation experiments in abstract metric spaces. The results show that LAESA achieves a search performance similar to that of the AESA, while definitely overcoming the quadratic costs bottleneck.",
"Nearest neighbour search (NNS) is an old problem that is of practical importance in a number of fields. It involves finding, for a given point q, called the query, one or more points from a given set of points that are nearest to the query q. Since the initial inception of the problem a great number of algorithms and techniques have been proposed for its solution. However, it remains the case that many of the proposed algorithms have not been compared against each other on a wide variety of datasets. This research attempts to fill this gap to some extent by presenting a detailed empirical comparison of three prominent data structures for exact NNS: KD-Trees, Metric Trees, and Cover Trees. Our results suggest that there is generally little gain in using Metric Trees or Cover Trees instead of KD-Trees for the standard NNS problem.",
"The nearest neighbor (NN) technique is very simple, highly efficient and effective in the field of pattern recognition, text categorization, object recognition etc. Its simplicity is its main advantage, but the disadvantages can't be ignored even. The memory requirement and computation complexity also matter. Many techniques are developed to overcome these limitations. NN techniques are broadly classified into structure less and structure based techniques. In this paper, we present the survey of such techniques. Weighted kNN, Model based kNN, Condensed NN, Reduced NN, Generalized NN are structure less techniques whereas k-d tree, ball tree, Principal Axis Tree, Nearest Feature Line, Tunable NN, Orthogonal Search Tree are structure based algorithms developed on the basis of kNN. The structure less method overcome memory limitation and structure based techniques reduce the computational complexity.",
"",
"Balltrees are simple geometric data structures with a wide range of practical applica tions to geometric ·learning tasks. In this report we compare 5 different algorithms for . constructing ball trees from data. We study the trade-off between construction time and the quality of the constructed tree. Two of the algorithms are on-line, two construct the structures from the data set in a top down fashion, and one uses a bottom up approach. We empirically study the algorithms on random data drawn from eight different probability distributions representing smooth, clustered, and curve distributed data in different ambient space dimen sions. We find that the bottom up approach usually produces the best trees but has the longest construction time. The other approaches have uses in specific circumstances. 1. IntemauonaI.Computer Science Institute, Berkeley, CA.",
"Now a day’s many algorithms are invented being inventing to find the solution for Euclidean Minimum Spanning Tree (EMST) problem, as its applicability is increasing in much wide range of fields containing spatial spatio – temporal data viz. astronomy which consists of millions of spatial data. To solve this problem, we are presenting a technique by adopting the dual tree algorithm for finding efficient EMST and experimented on a variety of real time and synthetic datasets. This paper presents the observed experimental observations and the efficiency of the dual tree framework,in the context of kd-tree and ball-tree on spatial datasets of different dimensions."
]
} |
1511.00098 | 2950960124 | Matching cross-view images is challenging because the appearance and viewpoints are significantly different. While low-level features based on gradient orientations or filter responses can drastically vary with such changes in viewpoint, semantic information of images however shows an invariant characteristic in this respect. Consequently, semantically labeled regions can be used for performing cross-view matching. In this paper, we therefore explore this idea and propose an automatic method for detecting and representing the semantic information of an RGB image with the goal of performing cross-view matching with a (non-RGB) geographic information system (GIS). A segmented image forms the input to our system with segments assigned to semantic concepts such as traffic signs, lakes, roads, foliage, etc. We design a descriptor to robustly capture both, the presence of semantic concepts and the spatial layout of those segments. Pairwise distances between the descriptors extracted from the GIS map and the query image are then used to generate a shortlist of the most promising locations with similar semantic concepts in a consistent spatial layout. An experimental evaluation with challenging query images and a large urban area shows promising results. | Visual location recognition and image retrieval systems emphasise the indexing aspect and can handle large image collections: Bag-of-visual-words @cite_5 , vocabulary trees @cite_25 or global image descriptors such as Fisher vectors @cite_19 have been proposed for that purpose, for example. All those schemes do not account for any higher-level semantic information. More recently, @cite_14 has therefore introduced a scheme where pooling regions for local image descriptors are defined in a semantic way: detectors assign each segment a class label and a separate descriptor (e.g. a Fisher Vector) is computed for each such segment.
Those descriptors rely on local appearance features, which fail to handle significant viewpoint changes faced in the cross-view matching problem considered in our paper. Also, this approach does not encode the spatial layout of semantic segments. If the descriptors are sufficiently discriminative by themselves, encoding this spatial layout is less important. In our case however, the information available in the query image which is shared with the GIS only captures class labels and a very coarse estimate of the segment shapes. It is therefore necessary to capture both, the presence of semantic concepts and the spatial layout between those concepts, in a joint representation. | {
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_14",
"@cite_25"
],
"mid": [
"1984309565",
"2131846894",
"2049953265",
"2128017662"
],
"abstract": [
"This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core.",
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames shots in the manner of Google. The method is illustrated for matching in two full length feature films.",
"In this paper, we propose a new method for taking into account the spatial information in image categorization. More specifically, we remove the loss of spatial information in Bag of Words related methods by computing the image signature over specific regions selected by object detectors. We propose to select the detectors using Multiple Kernel Learning techniques. We carry out experiments on the well known VOC 2007 dataset, and show our semantic pooling obtains promising results.",
"A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CDs. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images."
]
} |
1511.00098 | 2950960124 | Matching cross-view images is challenging because the appearance and viewpoints are significantly different. While low-level features based on gradient orientations or filter responses can drastically vary with such changes in viewpoint, semantic information of images however shows an invariant characteristic in this respect. Consequently, semantically labeled regions can be used for performing cross-view matching. In this paper, we therefore explore this idea and propose an automatic method for detecting and representing the semantic information of an RGB image with the goal of performing cross-view matching with a (non-RGB) geographic information system (GIS). A segmented image forms the input to our system with segments assigned to semantic concepts such as traffic signs, lakes, roads, foliage, etc. We design a descriptor to robustly capture both, the presence of semantic concepts and the spatial layout of those segments. Pairwise distances between the descriptors extracted from the GIS map and the query image are then used to generate a shortlist of the most promising locations with similar semantic concepts in a consistent spatial layout. An experimental evaluation with challenging query images and a large urban area shows promising results. | Very recently, 's work @cite_23 considered the matching problem between street-level image and a dataset of semantic objects. Specifically, deformable-part-models (DPMs) were trained to detect distinctive objects in urban areas from a single street-level image. The main objective of that paper was improved object detection with a geometric verification stage using a database of objects with known locations and the GPS-tag and viewing direction of the image has been assumed to be known roughly. They also present an exhaustive search based approach to matching an image against the entire object database. 
The considered DPMs in @cite_23 are well-localized and can be reduced to the centroid of the detection bounding box for a subsequent RANSAC step which searches for the best 2D-affine alignment in image space. Therefore, @cite_23 can only handle such "spot" based information and is not designed to handle less accurate and potentially larger semantic segments with uncertain locations such as the ones provided by classifiers for 'road' or 'lake'. | {
"cite_N": [
"@cite_23"
],
"mid": [
"349800315"
],
"abstract": [
"Geographical Information System (GIS) databases contain information about many objects, such as traffic signals, road signs, fire hydrants, etc. in urban areas. This wealth of information can be utilized for assisting various computer vision tasks. In this paper, we propose a method for improving object detection using a set of priors acquired from GIS databases. Given a database of object locations from GIS and a query image with metadata, we compute the expected spatial location of the visible objects in the image. We also perform object detection in the query image (e.g., using DPM) and obtain a set of candidate bounding boxes for the objects. Then, we fuse the GIS priors with the potential detections to find the final object bounding boxes. To cope with various inaccuracies and practical complications, such as noisy metadata, occlusion, inaccuracies in GIS, and poor candidate detections, we formulate our fusion as a higher-order graph matching problem which we robustly solve using RANSAC. We demonstrate that this approach outperforms well established object detectors, such as DPM, with a large margin."
]
} |
1511.00352 | 2412136470 | Many methods have been proposed for detecting emerging events in text streams using topic modeling. However, these methods have shortcomings that make them unsuitable for rapid detection of locally emerging events on massive text streams. We describe Spatially Compact Semantic Scan (SCSS) that has been developed specifically to overcome the shortcomings of current methods in detecting new spatially compact events in text streams. SCSS employs alternating optimization between using semantic scan to estimate contrastive foreground topics in documents, and discovering spatial neighborhoods with high occurrence of documents containing the foreground topics. We evaluate our method on Emergency Department chief complaints dataset (ED dataset) to verify the effectiveness of our method in detecting real-world disease outbreaks from free-text ED chief complaint data. | The Gibbs2 and Gibbs3 sampling-based inference methods described in ( @cite_22 ) are very similar to the Semantic Scan setting we describe below, and perhaps the closest work in literature to which we can compare our method. Gibbs2, Gibbs3, and Semantic Scan begin by learning topic assignments for words in the background documents and then begin inference on the foreground documents. All three methods also hold the topic assignments for words in background documents fixed, while performing sampling for topic assignments in the foreground documents. However, a key distinction is that our method allows additional new topics to be assigned to words in foreground documents, while Gibbs2 and Gibbs3 do not. We will see that allowing new topics to be learned entirely from foreground documents leads to precise topics that characterize emerging events in the text stream well. In fact, setting the number of new topics in Semantic Scan to 0 gives us Gibbs3. This is because the background topics are not allowed to change once they have been learned in both Semantic Scan and Gibbs3.
Thus, Semantic Scan generalizes Gibbs3 for the purpose of emerging event detection. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2150731624"
],
"abstract": [
"Topic models provide a powerful tool for analyzing large text collections by representing high dimensional data in a low dimensional subspace. Fitting a topic model given a set of training documents requires approximate inference techniques that are computationally expensive. With today's large-scale, constantly expanding document collections, it is useful to be able to infer topic distributions for new documents without retraining the model. In this paper, we empirically evaluate the performance of several methods for topic inference in previously unseen documents, including methods based on Gibbs sampling, variational inference, and a new method inspired by text classification. The classification-based inference method produces results similar to iterative inference methods, but requires only a single matrix multiplication. In addition to these inference methods, we present SparseLDA, an algorithm and data structure for evaluating Gibbs sampling distributions. Empirical results indicate that SparseLDA can be approximately 20 times faster than traditional LDA and provide twice the speedup of previously published fast sampling methods, while also using substantially less memory."
]
} |
1511.00352 | 2412136470 | Many methods have been proposed for detecting emerging events in text streams using topic modeling. However, these methods have shortcomings that make them unsuitable for rapid detection of locally emerging events on massive text streams. We describe Spatially Compact Semantic Scan (SCSS) that has been developed specifically to overcome the shortcomings of current methods in detecting new spatially compact events in text streams. SCSS employs alternating optimization between using semantic scan to estimate contrastive foreground topics in documents, and discovering spatial neighborhoods with high occurrence of documents containing the foreground topics. We evaluate our method on Emergency Department chief complaints dataset (ED dataset) to verify the effectiveness of our method in detecting real-world disease outbreaks from free-text ED chief complaint data. | ( @cite_21 ) presents a graphical model which relaxes the assumption of Markovian evolution of the natural parameters of the topic model. Instead, each topic is associated with a continuous beta distribution over timestamps normalized to the interval @math . The topics remain static over time; however, the occurrence of topics in the corpus varies with time. Nevertheless, the assumption that the number of topics is constant over time and that only the topic parameters evolve smoothly with time is still present in this model. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2171343266"
],
"abstract": [
"This paper presents an LDA-style topic model that captures not only the low-dimensional structure of data, but also how the structure changes over time. Unlike other recent work that relies on Markov assumptions or discretization of time, here each topic is associated with a continuous distribution over timestamps, and for each generated document, the mixture distribution over topics is influenced by both word co-occurrences and the document's timestamp. Thus, the meaning of a particular topic can be relied upon as constant, but the topics' occurrence and correlations change significantly over time. We present results on nine months of personal email, 17 years of NIPS research papers and over 200 years of presidential state-of-the-union addresses, showing improved topics, better timestamp prediction, and interpretable trends."
]
} |
1511.00352 | 2412136470 | Many methods have been proposed for detecting emerging events in text streams using topic modeling. However, these methods have shortcomings that make them unsuitable for rapid detection of locally emerging events on massive text streams. We describe Spatially Compact Semantic Scan (SCSS) that has been developed specifically to overcome the shortcomings of current methods in detecting new spatially compact events in text streams. SCSS employs alternating optimization between using semantic scan to estimate contrastive foreground topics in documents, and discovering spatial neighborhoods with high occurrence of documents containing the foreground topics. We evaluate our method on Emergency Department chief complaints dataset (ED dataset) to verify the effectiveness of our method in detecting real-world disease outbreaks from free-text ED chief complaint data. | Non-parametric topics over time ( @cite_17 ) is a variation of the topics over time algorithm that allows the number of topics to be determined from the corpus. However, the topics are still constrained to evolve smoothly over time. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2064513326"
],
"abstract": [
"A single, stationary topic model such as latent Dirichlet allocation is inappropriate for modeling corpora that span long time periods, as the popularity of topics is likely to change over time. A number of models that incorporate time have been proposed, but in general they either exhibit limited forms of temporal variation, or require computationally expensive inference methods. In this paper we propose nonparametric Topics over Time (npTOT), a model for time-varying topics that allows an unbounded number of topics and flexible distribution over the temporal variations in those topics’ popularity. We develop a collapsed Gibbs sampler for the proposed model and compare against existing models on synthetic and real document sets."
]
} |
1511.00352 | 2412136470 | Many methods have been proposed for detecting emerging events in text streams using topic modeling. However, these methods have shortcomings that make them unsuitable for rapid detection of locally emerging events on massive text streams. We describe Spatially Compact Semantic Scan (SCSS) that has been developed specifically to overcome the shortcomings of current methods in detecting new spatially compact events in text streams. SCSS employs alternating optimization between using semantic scan to estimate contrastive foreground topics in documents, and discovering spatial neighborhoods with high occurrence of documents containing the foreground topics. We evaluate our method on Emergency Department chief complaints dataset (ED dataset) to verify the effectiveness of our method in detecting real-world disease outbreaks from free-text ED chief complaint data. | Online LDA ( @cite_8 ) employs online variational Bayes inference to determine the posterior distribution over the latent variables of the topic model. The algorithm is based on online stochastic optimization and is shown to provide equally good topics in less time than the batch variational Bayes algorithm. The algorithm requires a learning rate @math for convergence. This parameter specifies the rate at which the old parameters are forgotten. Thus, there is an assumption of parameter smoothness which can delay the detection of suddenly emerging topics in a text stream. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2165599843"
],
"abstract": [
"We develop an online variational Bayes (VB) algorithm for Latent Dirichlet Allocation (LDA). Online LDA is based on online stochastic optimization with a natural gradient step, which we show converges to a local optimum of the VB objective function. It can handily analyze massive document collections, including those arriving in a stream. We study the performance of online LDA in several ways, including by fitting a 100-topic topic model to 3.3M articles from Wikipedia in a single pass. We demonstrate that online LDA finds topic models as good or better than those found with batch VB, and in a fraction of the time."
]
} |
1511.00352 | 2412136470 | Many methods have been proposed for detecting emerging events in text streams using topic modeling. However, these methods have shortcomings that make them unsuitable for rapid detection of locally emerging events on massive text streams. We describe Spatially Compact Semantic Scan (SCSS) that has been developed specifically to overcome the shortcomings of current methods in detecting new spatially compact events in text streams. SCSS employs alternating optimization between using semantic scan to estimate contrastive foreground topics in documents, and discovering spatial neighborhoods with high occurrence of documents containing the foreground topics. We evaluate our method on Emergency Department chief complaints dataset (ED dataset) to verify the effectiveness of our method in detecting real-world disease outbreaks from free-text ED chief complaint data. | ( @cite_12 ) propose an online version of LDA for topic detection and tracking. However, the method makes a strong assumption about the evolution of topics: the parameter of the Dirichlet prior generating a topic is a linear combination of the topic vector from the previous @math iterations of the algorithm. The smoothness and strict form imposed on the evolution of topics will not allow the method to detect rapidly emerging topics or subtle spatially localized topics hidden in the stream. In addition, the assumption is that the number of topics is constant over time and only the topic parameters evolve smoothly with time. There is no reason to believe that this is true, since the addition of a new topic does not mean that an old topic has disappeared from the corpus. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2110642088"
],
"abstract": [
"This paper presents online topic model (OLDA), a topic model that automatically captures the thematic patterns and identifies emerging topics of text streams and their changes over time. Our approach allows the topic modeling framework, specifically the latent Dirichlet allocation (LDA) model, to work in an online fashion such that it incrementally builds an up-to-date model (mixture of topics per document and mixture of words per topic) when a new document (or a set of documents) appears. A solution based on the empirical Bayes method is proposed. The idea is to incrementally update the current model according to the information inferred from the new stream of data with no need to access previous data. The dynamics of the proposed approach also provide an efficient mean to track the topics over time and detect the emerging topics in real time. Our method is evaluated both qualitatively and quantitatively using benchmark datasets. In our experiments, the OLDA has discovered interesting patterns by just analyzing a fraction of data at a time. Our tests also prove the ability of OLDA to align the topics across the epochs with which the evolution of the topics over time is captured. The OLDA is also comparable to, and sometimes better than, the original LDA in predicting the likelihood of unseen documents."
]
} |
1511.00352 | 2412136470 | Many methods have been proposed for detecting emerging events in text streams using topic modeling. However, these methods have shortcomings that make them unsuitable for rapid detection of locally emerging events on massive text streams. We describe Spatially Compact Semantic Scan (SCSS) that has been developed specifically to overcome the shortcomings of current methods in detecting new spatially compact events in text streams. SCSS employs alternating optimization between using semantic scan to estimate contrastive foreground topics in documents, and discovering spatial neighborhoods with high occurrence of documents containing the foreground topics. We evaluate our method on Emergency Department chief complaints dataset (ED dataset) to verify the effectiveness of our method in detecting real-world disease outbreaks from free-text ED chief complaint data. | ( @cite_3 ) extends the LDA model by allowing the natural multinomial parameters of LDA to evolve over consecutive time slices. This is the standard Markovian assumption of state space models. The model is best illustrated using its plate diagram, which shows how DTM extends the LDA topic model and clearly illustrates the Markovian evolution of parameters in the topic model. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2072644219"
],
"abstract": [
"A family of probabilistic time series models is developed to analyze the time evolution of topics in large document collections. The approach is to use state space models on the natural parameters of the multinomial distributions that represent the topics. Variational approximations based on Kalman filters and nonparametric wavelet regression are developed to carry out approximate posterior inference over the latent topics. In addition to giving quantitative, predictive models of a sequential corpus, dynamic topic models provide a qualitative window into the contents of a large document collection. The models are demonstrated by analyzing the OCR'ed archives of the journal Science from 1880 through 2000."
]
} |
1511.00352 | 2412136470 | Many methods have been proposed for detecting emerging events in text streams using topic modeling. However, these methods have shortcomings that make them unsuitable for rapid detection of locally emerging events on massive text streams. We describe Spatially Compact Semantic Scan (SCSS) that has been developed specifically to overcome the shortcomings of current methods in detecting new spatially compact events in text streams. SCSS employs alternating optimization between using semantic scan to estimate contrastive foreground topics in documents, and discovering spatial neighborhoods with high occurrence of documents containing the foreground topics. We evaluate our method on Emergency Department chief complaints dataset (ED dataset) to verify the effectiveness of our method in detecting real-world disease outbreaks from free-text ED chief complaint data. | ( @cite_19 ) propose a hierarchical LDA model consisting of a set of pure topics which suffer variations with region to form regional topics that finally generate the documents. The model assumes a fixed number of regions, and that pure topics exist in the form of regional variants in every region. A region is modeled using a bivariate Gaussian distribution which assumes that each region has a center where its regional topics are concentrated and the effect of the regional topics decays away from this center. While considering a bivariate Gaussian distribution for modeling the location of documents in SCSS is possible, it implies that the effect of a topic decays away from the epicenter. This might not be true in the case of steady state of a disease outbreak where all documents in the affected spatial neighborhood may be equally likely to contain the emerging topic. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2142889507"
],
"abstract": [
"The rapid growth of geotagged social media raises new computational possibilities for investigating geographic linguistic variation. In this paper, we present a multi-level generative model that reasons jointly about latent topics and geographical regions. High-level topics such as \"sports\" or \"entertainment\" are rendered differently in each geographic region, revealing topic-specific regional distinctions. Applied to a new dataset of geotagged microblogs, our model recovers coherent topics and their regional variants, while identifying geographic areas of linguistic consistency. The model also enables prediction of an author's geographic location from raw text, outperforming both text regression and supervised topic models."
]
} |
1511.00352 | 2412136470 | Many methods have been proposed for detecting emerging events in text streams using topic modeling. However, these methods have shortcomings that make them unsuitable for rapid detection of locally emerging events on massive text streams. We describe Spatially Compact Semantic Scan (SCSS) that has been developed specifically to overcome the shortcomings of current methods in detecting new spatially compact events in text streams. SCSS employs alternating optimization between using semantic scan to estimate contrastive foreground topics in documents, and discovering spatial neighborhoods with high occurrence of documents containing the foreground topics. We evaluate our method on Emergency Department chief complaints dataset (ED dataset) to verify the effectiveness of our method in detecting real-world disease outbreaks from free-text ED chief complaint data. | Recent work ( @cite_6 ) offers a theoretical justification for why typical topic models do not work well on short documents like tweets. This motivates the additional novel contributions we incorporate into topic modeling to improve its outcome on a spatio-temporal corpus of short text documents. | {
"cite_N": [
"@cite_6"
],
"mid": [
"159693449"
],
"abstract": [
"Topic models such as the latent Dirichlet allocation (LDA) have become a standard staple in the modeling toolbox of machine learning. They have been applied to a vast variety of data sets, contexts, and tasks to varying degrees of success. However, to date there is almost no formal theory explicating the LDA's behavior, and despite its familiarity there is very little systematic analysis of and guidance on the properties of the data that affect the inferential performance of the model. This paper seeks to address this gap, by providing a systematic analysis of factors which characterize the LDA's performance. We present theorems elucidating the posterior contraction rates of the topics as the amount of data increases, and a thorough supporting empirical study using synthetic and real data sets, including news and web-based articles and tweet messages. Based on these results we provide practical guidance on how to identify suitable data sets for topic models, and how to specify particular model parameters."
]
} |
1511.00412 | 2230378714 | This paper examines the verification of stability, a control requirement, over discrete control systems represented as Simulink diagrams, using different model checking approaches and tools. Model checking comprises the (exhaustive) exploration of a model of a system, to determine if a requirement is satisfied. If that is not the case, examples of the requirement's violation within the system's model are provided, as witnesses. These examples are potentially complementary to previous work on automatic theorem proving, when a system is not proven to be stable, but no proof of instability can be provided. We experimentally evaluated the suitability of four model checking approaches to verify stability on a set of benchmarks including linear and nonlinear, controlled and uncontrolled, discrete systems, via Lyapunov's second method or Lyapunov's direct method. Our study included symbolic, bounded, statistical and hybrid model checking, through the open-source tools NuSMV, UCLID, S-TaLiRo and SpaceEx, respectively. Our experiments and results provide an insight on the strengths and limitations of these model checking approaches for the verification of control requirements for discrete systems at Simulink level. We found that statistical model checking with S-TaLiRo is the most suitable option to complement our previous work on automatic theorem proving. | In testing, inputs are applied to a system to stimulate actions and reactions, and outputs are observed to determine if the requirements are satisfied. The selection of inputs (test cases) needs to thoroughly explore the system's state space, whilst targeting its interesting regions (i.e., "covering" the system). Simulink is an ideal tool for testing models of control systems in simulation. Test generation systematically samples the state space of variables and parameters, e.g., through automated search @cite_11 . | {
"cite_N": [
"@cite_11"
],
"mid": [
"2036657965"
],
"abstract": [
"Search-based test-data generation has proved successful for code-level testing but almost no search-based work has been carried out at higher levels of abstraction. In this paper the application of such approaches at the higher levels of abstraction offered by MATLAB Simulink models is investigated and a wide-ranging framework for test-data generation and management is presented. Model-level analogues of code-level structural coverage criteria are presented and search-based approaches to achieving them are described. The paper also describes the first search-based approach to the generation of mutant-killing test data, addressing a fundamental limitation of mutation testing. Some problems remain whatever the level of abstraction considered. In particular, complexity introduced by the presence of persistent state when generating test sequences is as much a challenge at the Simulink model level as it has been found to be at the code level. The framework addresses this problem. Finally, a flexible approach to test sub-set extraction is presented, allowing testing resources to be deployed effectively and efficiently."
]
} |
1511.00412 | 2230378714 | This paper examines the verification of stability, a control requirement, over discrete control systems represented as Simulink diagrams, using different model checking approaches and tools. Model checking comprises the (exhaustive) exploration of a model of a system, to determine if a requirement is satisfied. If that is not the case, examples of the requirement's violation within the system's model are provided, as witnesses. These examples are potentially complementary to previous work on automatic theorem proving, when a system is not proven to be stable, but no proof of instability can be provided. We experimentally evaluated the suitability of four model checking approaches to verify stability on a set of benchmarks including linear and nonlinear, controlled and uncontrolled, discrete systems, via Lyapunov's second method or Lyapunov's direct method. Our study included symbolic, bounded, statistical and hybrid model checking, through the open-source tools NuSMV, UCLID, S-TaLiRo and SpaceEx, respectively. Our experiments and results provide an insight on the strengths and limitations of these model checking approaches for the verification of control requirements for discrete systems at Simulink level. We found that statistical model checking with S-TaLiRo is the most suitable option to complement our previous work on automatic theorem proving. | Theorem proving or deductive verification @cite_4 is a static verification technique that involves finding a mathematical proof of a requirement, through the application of axioms, lemmas and inference rules. A proof can be computed automatically via Satisfiability Modulo Theories (SMT) solvers or Satisfiability (SAT) solvers, or interactively (with user guidance), which requires a great degree of domain knowledge and expertise.
A description of the system and requirements in Propositional, First-Order or Higher-Order Logic is required, along with any other relevant mathematical theory (e.g., sets, linear algebra). These definitions and additional information are normally encoded by hand into "theories", as required by case studies; they can be reused once embedded in the theorem proving tools. Theorem proving has been employed to verify functional equivalence between Simulink diagrams and auto-generated code (e.g., @cite_1 ), for data type checks (e.g., @cite_15 ) and to verify high-level requirements including stability (e.g., @cite_9 ). | {
"cite_N": [
"@cite_1",
"@cite_9",
"@cite_15",
"@cite_4"
],
"mid": [
"2121349597",
"2296004247",
"2019809768",
""
],
"abstract": [
"The design of control systems is usually based on diagrammatic definitions of control laws. The independent use of Z and CSP to verify their implementations has been successful, even for very large applications; high levels of automation have been achieved with tools based on a theorem prover called ProofPower. We have extended this approach to integrate the use of Z and CSP using a notation called Circus; as a result, we can handle a larger set of diagrams and prove more properties of the implementation. In this paper, we show how we can reuse the existing tools and experience to provide automation in the context of the new technique. This gives us confidence in its applicability in industry.",
"Simulink is an industrial de-facto standard for building executable models of control systems and their environments. Stateflow is a toolbox used to model reactive systems via hierarchical statecharts within a Simulink model, extending Simulink’s scope to event-driven and hybrid forms of embedded control. In safety-critical control systems, exhaustive coverage of system dynamics by formal verification would be desirable, being based on a formal semantics of combined Simulink Stateflow graphical models. In our previous work, we addressed the problem of formal verification of pure Simulink diagrams via an encoding into HCSP, a formal modelling language encoding hybrid system dynamics by means of an extension of CSP. In this paper, we extend the approach to cover Simulink models containing Stateflow components also. The transformation from Simulink Stateflow to HCSP is fully automatic, and the formal verification of the encoding is supported by a Hybrid Hoare Logic (HHL) prover based on Isabelle HOL. We demonstrate our approach by a real world case study originating from the Chinese High-speed Train Control System (CTCS-3).",
"Matlab Simulink? is a member of a class of visual languages that are used for modeling and simulating physical and cyber-physical system. A Simulink model consists of blocks with input and output ports connected using links that carry signals. We provide a contract-based type system of Simulink with annotations and dimensions units associated with ports and links. These contract types can capture invariants on signals as well as relations between signals. We define a contract-based verifier that checks the well formedness of Simulink blocks with respect to these contracts. This verifier generates proof obligations that are solved by SRI's Yices solver for satisfiability modulo theories (SMT). This translation can be used to detect basic type errors and violation of contracts, demonstrate counterexamples, generate test cases, or prove the absence of contract-based type errors. Our work is an initial step toward the symbolic analysis of Matlab Simulink models.",
""
]
} |
1511.00412 | 2230378714 | This paper examines the verification of stability, a control requirement, over discrete control systems represented as Simulink diagrams, using different model checking approaches and tools. Model checking comprises the (exhaustive) exploration of a model of a system, to determine if a requirement is satisfied. If that is not the case, examples of the requirement's violation within the system's model are provided, as witnesses. These examples are potentially complementary to previous work on automatic theorem proving, when a system is not proven to be stable, but no proof of instability can be provided. We experimentally evaluated the suitability of four model checking approaches to verify stability on a set of benchmarks including linear and nonlinear, controlled and uncontrolled, discrete systems, via Lyapunov's second method or Lyapunov's direct method. Our study included symbolic, bounded, statistical and hybrid model checking, through the open-source tools NuSMV, UCLID, S-TaLiRo and SpaceEx, respectively. Our experiments and results provide an insight on the strengths and limitations of these model checking approaches for the verification of control requirements for discrete systems at Simulink level. We found that statistical model checking with S-TaLiRo is the most suitable option to complement our previous work on automatic theorem proving. | Probabilistic model checking tools @cite_19 suit stochastic models such as Discrete Time Markov Chains. Specialist hybrid model checking tools -- for hybrid models comprising both discrete and continuous transitions, such as switched systems -- make use of geometrical methods to approximate the explored state space of the continuous transitions @cite_13 . Hybrid model checkers (and other verification techniques such as theorem proving) commonly restrict the continuous components to ordinary differential equations (ODE) with linear or affine forms. 
Reduction of the models can be achieved by systematic abstractions (e.g., bisimulations), or symmetry reduction techniques. | {
"cite_N": [
"@cite_19",
"@cite_13"
],
"mid": [
"589665961",
"2131441094"
],
"abstract": [
"In this paper we report on work in progress to extend the QuantUM approach to support the quantitative property analysis of Matlab Simulink Stateflow models. We propose a translation of Simulink Stateflow models to CTMCs which can be analyzed using the PRISM model checker inside the QuantUM tool. We also illustrate how the information needed to perform probabilistic analysis of dependability properties can be specified at the level of the Simulink Stateflow model. We demonstrate the applicability of our approach using a case study taken from the MathWorks examples library.",
"This paper concerns computational methods for verifying properties of polyhedral invariant hybrid automata (PIHA), which are hybrid automata with discrete transitions governed by polyhedral guards. To verify properties of the state trajectories for PIHA, the planar switching surfaces are partitioned to define a finite set of discrete states in an approximate quotient transition system (AQTS). State transitions in the AQTS are determined by the reachable states, or flow pipes, emitting from the switching surfaces according to the continuous dynamics. This paper presents a method for computing polyhedral approximations to flow pipes. It is shown that the flow-pipe approximation error can be made arbitrarily small for general nonlinear dynamics and that the computations can be made more efficient for affine systems. The paper also describes CheckMate, a MATLAB-based tool for modeling, simulating and verifying properties of hybrid systems based on the computational methods previously described."
]
} |
1511.00412 | 2230378714 | This paper examines the verification of stability, a control requirement, over discrete control systems represented as Simulink diagrams, using different model checking approaches and tools. Model checking comprises the (exhaustive) exploration of a model of a system, to determine if a requirement is satisfied. If that is not the case, examples of the requirement's violation within the system's model are provided, as witnesses. These examples are potentially complementary to previous work on automatic theorem proving, when a system is not proven to be stable, but no proof of instability can be provided. We experimentally evaluated the suitability of four model checking approaches to verify stability on a set of benchmarks including linear and nonlinear, controlled and uncontrolled, discrete systems, via Lyapunov's second method or Lyapunov's direct method. Our study included symbolic, bounded, statistical and hybrid model checking, through the open-source tools NuSMV, UCLID, S-TaLiRo and SpaceEx, respectively. Our experiments and results provide an insight on the strengths and limitations of these model checking approaches for the verification of control requirements for discrete systems at Simulink level. We found that statistical model checking with S-TaLiRo is the most suitable option to complement our previous work on automatic theorem proving. | The absence of runtime errors (or low-level requirements), such as overflows or arrays out of bounds for fixed data widths, was verified in @cite_7 using model checking for Simulink diagrams. Other tools, such as MathWorks' Polyspace, translate the Simulink diagrams into code before checking for runtime errors.
Higher-level requirements in terms of safety and liveness have been verified directly in the Simulink models (e.g., the Prover Plug-In or CheckMate for hybrid systems @cite_13 ), after translating the models (or parts of them) into the language of a specific model checker @cite_17 @cite_19 , or after translating the Simulink diagrams into code @cite_18 @cite_20 . Since model checking is based on the exploration of finite-state decidable models, which implies discretization and abstraction processes over the original systems, formalized "translation" processes are highly desirable. We explored available translation semantics from Simulink to NuSMV @cite_8 and to UCLID @cite_7 . | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_19",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"176400115",
"2105300531",
"2171520043",
"589665961",
"2131441094",
"2315317710",
"2401986398"
],
"abstract": [
"Embedded systems have become an inevitable part of control systems in many industrial domains including avionics. The nature of this domain traditionally requires the highest possible degree of system availability and integrity. While embedded systems have become extremely complex and they have been continuously replacing legacy mechanical components, the amount of defects of hardware and software has to be kept to absolute minimum to avoid casualties and material damages. Despite the above-mentioned facts, significant improvements are still required in the validation and verification processes accompanying embedded systems development. In this paper we report on integration of a parallel, explicit-state LTL model checker (DiVinE) and a tool for requirements-based verification of aerospace system components (HiLiTE, a tool implemented and used by Honeywell). HiLiTE and the proposed partial toolchain use MATLAB Simulink Stateflow as the primary design language. The work has been conducted within the Artemis project industrial Framework for Embedded Systems Tools (iFEST).",
"Matlab Simulink is widely used for model-based development of embedded systems. In particular, safety-critical applications are increasingly designed in Matlab Simulink. At the same time, formal verification techniques for Matlab Simulink are still rare and existing ones do not scale well. In this paper, we present an automatic transformation from discrete-time Matlab Simulink to the input language of UCLID. UCLID is a toolkit for system verification based on SMT solving. Our approach enables us to use a combination of bounded model checking and inductive invariant checking for the automatic verification of Matlab Simulink models. To demonstrate the practical applicability of our approach, we have successfully verified the absence of one of the most common errors, i. e. variable over- or underflow, for an industrial design from the automotive domain.",
"Model Based Development (MBD) using Mathworks tools like Simulink, Stateflow etc. is being pursued in Honeywell for the development of safety critical avionics software. Formal verification techniques are well-known to identify design errors of safety critical systems reducing development cost and time. As of now, formal verification of Simulink design models is being carried out manually resulting in excessive time consumption during the design phase. We present a tool that automatically translates certain Simulink models into input language of a suitable model checker. Formal verification of safety critical avionics components becomes faster and less error prone with this tool. Support is also provided for reverse translation of traces violating requirements (as given by the model checker) into Simulink notation for playback.",
"In this paper we report on work in progress to extend the QuantUM approach to support the quantitative property analysis of Matlab Simulink Stateflow models. We propose a translation of Simulink Stateflow models to CTMCs which can be analyzed using the PRISM model checker inside the QuantUM tool. We also illustrate how the information needed to perform probabilistic analysis of dependability properties can be specified at the level of the Simulink Stateflow model. We demonstrate the applicability of our approach using a case study taken from the MathWorks examples library.",
"This paper concerns computational methods for verifying properties of polyhedral invariant hybrid automata (PIHA), which are hybrid automata with discrete transitions governed by polyhedral guards. To verify properties of the state trajectories for PIHA, the planar switching surfaces are partitioned to define a finite set of discrete states in an approximate quotient transition system (AQTS). State transitions in the AQTS are determined by the reachable states, or flow pipes, emitting from the switching surfaces according to the continuous dynamics. This paper presents a method for computing polyhedral approximations to flow pipes. It is shown that the flow-pipe approximation error can be made arbitrarily small for general nonlinear dynamics and that the computations can be made more efficient for affine systems. The paper also describes CheckMate, a MATLAB-based tool for modeling, simulating and verifying properties of hybrid systems based on the computational methods previously described.",
"",
"We report the results obtained during the verification of Autosub6000, an autonomous underwater vehicle used for deep oceanic exploration. Our starting point is the Simulink Matlab engineering model of the submarine, which is discretised by a compiler into a representation suitable for model checking. We assess the ability of the vehicle to function under degraded conditions by injecting faults automatically into the discretised model. The resulting system is analysed by means of the model checker MCMAS, and conclusions are drawn on the system's ability to withstand faults and to perform self-diagnosis and recovery. We present lessons learnt from this and suggest a general method for verifying autonomous vehicles."
]
} |
1510.09005 | 2235281308 | For the last few years, the amount of data has significantly increased in companies. This is why data analysis methods have to evolve to meet new demands. In this article, we present a practical analysis of a large database from a telecommunication operator. The problem is to segment a territory and characterize the retrieved areas according to their inhabitants' behavior in terms of mobile telephony. We have call detail records collected over five months in France. We propose a two-stage analysis. The first stage aims at grouping source antennas whose originating calls are similarly distributed over target antennas, and conversely for target antennas with respect to source antennas. A geographic projection of the data is used to display the results on a map of France. The second stage discretizes time into periods between which we observe changes in the distributions of calls originating from the clusters of source antennas. This enables an analysis of temporal changes in inhabitants' behavior in every area of the country. | Numerous methods have been proposed to extract satisfactory clusters of vertices. Some of them @cite_20 are based on the optimization of criteria that favor partitions with homogeneous blocks, especially with pure zero-blocks, as recommended in @cite_11 . More recent deterministic approaches have focused on optimizing criteria that quantify how well the co-clustering summarizes the input data @cite_19 (see e.g. @cite_17 for details on such criteria). Other approaches include blockmodeling. In those generative models, a latent cluster indicator variable is associated with each vertex. Conditionally on the latent variables, the probability of observing an edge between two actors follows some standard distribution (a Bernoulli distribution in the simplest case) whose parameters only depend on the pair of clusters designated by the latent variables. In early approaches, the number of clusters is chosen by the user @cite_15 .
More recent techniques automatically determine the number of clusters using a Dirichlet process @cite_13 . Finally, some recent approaches consider non-Boolean latent variables: cluster assignments are soft, and each vertex has a degree of affiliation to each cluster @cite_5 . | {
"cite_N": [
"@cite_11",
"@cite_19",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"2132092465",
"1976412347",
"2107107106",
"2144799688",
"2097266862",
"1981385379",
""
],
"abstract": [
"Networks of several distinct types of social tie are aggregated by a dual model that partitions a population while simultaneously identifying patterns of relations. Concepts and algorithms are demonstrated in five case studies involving up to 100 persons and up to eight types of tie, over as many as 15 time periods. In each case the model identifies a concrete social structure. Role and position concepts are then identified and interpreted in terms of these new models of concrete social structure. Part II, to be published in the May issue of this Journal (Boorman and White 1976), will show how the operational meaning of role structures in small populations can be generated from the sociometric blockmodels of Part I.",
"We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.",
"Consider data consisting of pairwise measurements, such as presence or absence of links between pairs of objects. These data arise, for instance, in the analysis of protein interactions and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing pairwise measurements with probabilistic models requires special assumptions, since the usual independence or exchangeability assumptions no longer hold. Here we introduce a class of variance allocation models for pairwise measurements: mixed membership stochastic blockmodels. These models combine global parameters that instantiate dense patches of connectivity (blockmodel) with local parameters that instantiate node-specific variability in the connections (mixed membership). We develop a general variational inference algorithm for fast approximate posterior inference. We demonstrate the advantages of mixed membership stochastic blockmodels with applications to social networks and protein interaction networks.",
"A statistical approach to a posteriori blockmodeling for digraphs and valued digraphs is proposed. The probability model assumes that the vertices of the digraph are partitioned into several unobserved (latent) classes and that the probability distribution of the relation between two vertices depends only on the classes to which they belong. A Bayesian estimator based on Gibbs sampling is proposed. The basic model is not identified, because class labels are arbitrary. The resulting identifiability problems are solved by restricting inference to the posterior distributions of invariant functions of the parameters and the vertex class membership. In addition, models are considered where class labels are identified by prior distributions for the class membership of some of the vertices. The model is illustrated by an example from the social networks literature (Kapferer's tailor shop).",
"Relationships between concepts account for a large proportion of semantic knowledge. We present a nonparametric Bayesian model that discovers systems of related concepts. Given data involving several sets of entities, our model discovers the kinds of entities in each set and the relations between kinds that are possible or likely. We apply our approach to four problems: clustering objects and features, learning ontologies, discovering kinship systems, and discovering structure in political data.",
"Abstract We extend the direct approach for blockmodeling one-mode data to two-mode data. The key idea in this development is that the rows and columns are partitioned simultaneously but in different ways. Many (but not all) of the generalized block types can be mobilized in blockmodeling two-mode network data. These methods were applied to some ‘voting’ data from the 2000–2001 term of the Supreme Court and to the classic Deep South data on women attending events. The obtained partitions are easy to interpret and compelling. The insight that rows and columns can be partitioned in different ways can be applied also to one-mode data. This is illustrated by a partition of a journal-to-journal citation network where journals are viewed simultaneously as both producers and consumers of scientific knowledge.",
""
]
} |
1510.09005 | 2235281308 | For the last few years, the amount of data has significantly increased in companies. This is why data analysis methods have to evolve to meet new demands. In this article, we present a practical analysis of a large database from a telecommunication operator. The problem is to segment a territory and characterize the retrieved areas according to their inhabitants' behavior in terms of mobile telephony. We have call detail records collected over five months in France. We propose a two-stage analysis. The first stage aims at grouping source antennas whose originating calls are similarly distributed over target antennas, and conversely for target antennas with respect to source antennas. A geographic projection of the data is used to display the results on a map of France. The second stage discretizes time into periods between which we observe changes in the distributions of calls originating from the clusters of source antennas. This enables an analysis of temporal changes in inhabitants' behavior in every area of the country. | In addition to the diversity of structures that can be inferred from the network, co-clustering approaches are also able to deal with continuous variables @cite_21 , @cite_14 . Blocks are extracted from the data, which yields a discretization of the continuous variables. For further analysis, we are able to track temporal patterns: the source antennas are still the rows of the data matrix, while the columns now model time. | {
"cite_N": [
"@cite_14",
"@cite_21"
],
"mid": [
"1996619541",
"2160753225"
],
"abstract": [
"In this paper, we present a novel way of analyzing and summarizing a collection of curves, based on piecewise constant density estimation. The curves are partitioned into clusters, and the dimensions of the curves points are discretized into intervals. The cross-product of these univariate partitions forms a data grid of cells, which represents a nonparametric estimator of the joint density of the curves and point dimensions. The best model is selected using a Bayesian model selection approach and retrieved using combinatorial optimization algorithms. The proposed method requires no parameter setting and makes no assumption regarding the curves; beyond functional data, it can be applied to distributional data. The practical interest of the approach for functional data and distributional data exploratory analysis is presented on two real world datasets.",
"The co-clustering consists in reorganizing a data matrix into homogeneous blocks by considering simultaneously the sets of rows and columns. Setting this aim in model-based clustering, adapted block latent models were proposed for binary data and co-occurrence matrix. Regarding continuous data, the latent block model is not appropriated in many cases. As non-negative matrix factorization, it treats symmetrically the two sets, and the estimation of associated parameters requires a variational approximation. In this paper we focus on continuous data matrix without restriction to non negative matrix. We propose a parsimonious mixture model allowing to overcome the limits of the latent block model."
]
} |
1510.09142 | 1906772730 | We present a unified framework for learning continuous control policies using backpropagation. It supports stochastic control by treating stochasticity in the Bellman equation as a deterministic function of exogenous noise. The product is a spectrum of general policy gradient algorithms that range from model-free methods with value functions to model-based methods without value functions. We use learned models but only require observations from the environment instead of observations from model-predicted trajectories, minimizing the impact of compounded model errors. We apply these algorithms first to a toy stochastic control problem and then to several physics-based control problems in simulation. One of these variants, SVG(1), shows the effectiveness of learning models, value functions, and policies simultaneously in continuous domains. | Writing the noise variables as exogenous inputs to the system to allow direct differentiation with respect to the system state (equation ) is a known device in control theory @cite_12 @cite_7 where the model is given analytically. The idea of using a model to optimize a parametric policy around real trajectories is presented heuristically in @cite_29 and @cite_17 for deterministic policies and models. Also in the limit of deterministic policies and models, the recursions we have derived in Algorithm reduce to those of @cite_14 . Werbos defines an actor-critic algorithm called Heuristic Dynamic Programming that uses a deterministic model to roll-forward one step to produce a state prediction that is evaluated by a value function @cite_1 . have used Gaussian process models to compute policy gradients that are sensitive to model-uncertainty @cite_24 , and have optimized impressive policies with the aid of a non-parametric trajectory optimizer and locally-linear models @cite_20 . Our work, in contrast, has focused on using global, neural network models conjoined to value function approximators.
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_29",
"@cite_1",
"@cite_24",
"@cite_20",
"@cite_12",
"@cite_17"
],
"mid": [
"2002793865",
"1971934487",
"2138484437",
"92805021",
"2140135625",
"2121103318",
"658381347",
"2132602063"
],
"abstract": [
"We provide efficient algorithms to calculate first and second order gradients of the cost of a control law with respect to its parameters, to speed up policy optimization. We achieve robustness by simultaneously designing one control law for multiple models with potentially different model structures, which represent model uncertainty and unmodeled dynamics. Providing explicit examples of possible unmodeled dynamics during the control design process is easier for the designer and is more effective than providing simulated perturbations to increase robustness, as is currently done in machine learning. Our approach supports the design of deterministic nonlinear and time varying controllers for both deterministic and stochastic nonlinear and time varying systems, including policies with internal state such as observers or other state estimators. We highlight the benefit of control laws made up of collections of simple policies where only one component policy is active at a time. Controller optimization and learning is particularly fast and effective in this situation because derivatives are decoupled.",
"We describe an Adaptive Dynamic Programming algorithm VGL(λ) for learning a critic function over a large continuous state space. The algorithm, which requires a learned model of the environment, extends Dual Heuristic Dynamic Programming to include a bootstrapping parameter analogous to that used in the reinforcement learning algorithm TD(λ). We provide on-line and batch mode implementations of the algorithm, and summarise the theoretical relationships and motivations of using this method over its precursor algorithms Dual Heuristic Dynamic Programming and TD(λ). Experiments for control problems using a neural network and greedy policy are provided.",
"It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described. >",
"This chapter contains sections titled: Introduction and Overview, A Simple Two-Component Adaptive Critic Design, HDP and Dynamic Programming, Alternative Ways to Figure 3.2 in Adapting the Action Network, Alternatives to HDP in Adapting the Critic Network, Some Topics for Further Research, Equations and Code For Implementation, References",
"In this paper, we introduce PILCO, a practical, data-efficient model-based policy search method. PILCO reduces model bias, one of the key problems of model-based reinforcement learning, in a principled way. By learning a probabilistic dynamics model and explicitly incorporating model uncertainty into long-term planning, PILCO can cope with very little data and facilitates learning from scratch in only a few trials. Policy evaluation is performed in closed form using state-of-the-art approximate inference. Furthermore, policy gradients are computed analytically for policy improvement. We report unprecedented learning efficiency on challenging and high-dimensional control tasks.",
"We present a policy search method that uses iteratively refitted local linear models to optimize trajectory distributions for large, continuous problems. These trajectory distributions can be used within the framework of guided policy search to learn policies with an arbitrary parameterization. Our method fits time-varying linear dynamics models to speed up learning, but does not rely on learning a global model, which can be difficult when the dynamics are complex and discontinuous. We show that this hybrid approach requires many fewer samples than model-free methods, and can handle complex, nonsmooth dynamics that can pose a challenge for model-based techniques. We present experiments showing that our method can be used to learn complex neural network policies that successfully execute simulated robotic manipulation tasks in partially observed environments with numerous contact discontinuities and underactuation.",
"",
"In the model-based policy search approach to reinforcement learning (RL), policies are found using a model (or \"simulator\") of the Markov decision process. However, for high-dimensional continuous-state tasks, it can be extremely difficult to build an accurate model, and thus often the algorithm returns a policy that works in simulation but not in real-life. The other extreme, model-free RL, tends to require infeasibly large numbers of real-life trials. In this paper, we present a hybrid algorithm that requires only an approximate model, and only a small number of real-life trials. The key idea is to successively \"ground\" the policy evaluations using real-life trials, but to rely on the approximate model to suggest local changes. Our theoretical results show that this algorithm achieves near-optimal performance in the real system, even when the model is only approximate. Empirical results also demonstrate that---when given only a crude model and a small number of real-life trials---our algorithm can obtain near-optimal performance in the real system."
]
} |
1510.08564 | 2463852903 | Clarithmetics are number theories based on computability logic (see http://www.csc.villanova.edu/~japaridz/CL/). Formulas of these theories represent interactive computational problems, and their "truth" is understood as existence of an algorithmic solution. Various complexity constraints on such solutions induce various versions of clarithmetic. The present paper introduces a parameterized schematic version CLA11(P1,P2,P3,P4). By tuning the three parameters P1,P2,P3 in an essentially mechanical manner, one automatically obtains sound and complete theories with respect to a wide range of target tricomplexity classes, i.e. combinations of time (set by P3), space (set by P2) and so-called amplitude (set by P1) complexities. Sound in the sense that every theorem T of the system represents an interactive number-theoretic computational problem with a solution from the given tricomplexity class and, furthermore, such a solution can be automatically extracted from a proof of T. And complete in the sense that every interactive number-theoretic problem with a solution from the given tricomplexity class is represented by some theorem of the system. Furthermore, through tuning the 4th parameter P4, at the cost of sacrificing recursive axiomatizability but not simplicity or elegance, the above extensional completeness can be strengthened to intensional completeness, according to which every formula representing a problem with a solution from the given tricomplexity class is a theorem of the system. This article is published in two parts. The present Part I introduces the system and proves its completeness, while Part II is devoted to proving soundness. | The story of bounded arithmetic starts with Parikh's 1971 work @cite_34 , where the first system @math of bounded arithmetic was introduced. Paris and Wilkie, in @cite_1 and a series of other papers, advanced the study of @math and of how it relates to complexity theory.
Interest in the area intensified dramatically after the appearance of Buss's influential 1986 work @cite_32 , where systems of bounded arithmetic for the polynomial hierarchy, polynomial space and exponential time were introduced. Clote and Takeuti @cite_4 , Cook and Nguyen @cite_13 and others introduced a host of theories related to other complexity classes. See @cite_14 @cite_13 @cite_18 @cite_2 for comprehensive surveys and discussions of this line of research. The treatment of bounded arithmetic found in @cite_13 , which uses the two-sorted vocabulary of Zambella @cite_24 , is among the newest. Just like the present paper, it offers a method for designing one's own system of bounded arithmetic for a spectrum of complexity classes within P. Namely, one only needs to add a single axiom to the base theory @math , where the axiom states the existence of a solution to a complete problem of the complexity class. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_1",
"@cite_32",
"@cite_24",
"@cite_2",
"@cite_34",
"@cite_13"
],
"mid": [
"1572451808",
"",
"2090101829",
"1572440476",
"1942012432",
"2128026324",
"1607926272",
"2040404640",
"1537230380"
],
"abstract": [
"Preliminaries.- A.- I: Arithmetic as Number Theory, Set Theory and Logic.- II: Fragments and Combinatorics.- B.- III: Self-Reference.- IV: Models of Fragments of Arithmetic.- C.- V: Bounded Arithmetic.- Bibliographical Remarks and Further Reading.- Index of Terms.- Index of Symbols.",
"",
"Abstract We define theories of bounded arithmetic, whose definable functions and relations are exactly those in certain complexity classes. Based on a recursion-theoretic characterization of NC in Clote (1988, 1990), the first-order theory TNC, whose principal axiom scheme is a form of short induction on notation for nondeterministic polynomial-time computable relations, has the property that those functions having nondeterministic polynomial-time graph Θ(x, y) such that TNC ⊢∀x ∃y Θ(x, y) are exactly the functions in NC, computable on a parallel random-access machine in polylogarithmic parallel time with a polynomial number of processors.0 We then define three theories of weak second-order arithmetic which respectively characterize relations in the classes of alternating logarithmic time, logspace and nondeterministic logspace.",
"",
"Intuitionistic theories IS 2 i of Bounded Arithmetic are introduced and it is shown that the definable functions of IS 2 i are precisely the □ i p functions of the polynomial hierarchy. This is an extension of earlier work on the classical Bounded Arithmetic and was first conjectured by S. Cook. In contrast to the classical theories of Bounded Arithmetic where Σ i b -definable functions are of interest, our results for intuitionistic theories concern all the definable functions.",
"",
"1. Introduction 2. Preliminaries 3. Basic complexity theory 4. Basic propositional logic 5. Basic bounded arithmetic 6. Definability of computations 7. Witnessing theorems 8. Definability and witnessing in second order theories 9. Translations of arithmetic formulas 10. Finite axiomatizability problem 11. Direct independence proofs 12. Bounds for constant-depth Frege systems 13. Bounds for Frege and extended Frege systems 14. Hard tautologies and optimal proof systems 15. Strength of bounded arithmetic References Index.",
"“From two integers k, l one passes immediately to k l ; this process leads in a few steps to numbers which are far larger than any occurring in experience, e.g., 67 (257729) . Intuitionism, like ordinary mathematics, claims that this number can be represented by an arabic numeral. Could not one press further the criticism which intuitionism makes of existential assertions and raise the question: What does it mean to claim the existence of an arabic numeral for the foregoing number, since in practice we are not in a position to obtain it?",
"This book treats bounded arithmetic and propositional proof complexity from the point of view of computational complexity. The first seven chapters include the necessary logical background for the material and are suitable for a graduate course. Associated with each of many complexity classes are both a two-sorted predicate calculus theory, with induction restricted to concepts in the class, and a propositional proof system. The result is a uniform treatment of many systems in the literature, including Buss's theories for the polynomial hierarchy and many disparate systems for complexity classes such as AC0, AC0(m), TC0, NC1, L, NL, NC, and P."
]
} |
1510.08756 | 2949278529 | This article presents a method to perform diffraction tomography in a standard microscope that includes an LED array for illumination. After acquiring a sequence of intensity-only images of a thick sample, a ptychography-based reconstruction algorithm solves for its unknown complex index of refraction across three dimensions. The experimental microscope demonstrates a spatial resolution of 0.39 @math m and an axial resolution of 3.7 @math m at the Nyquist-Shannon sampling limit (0.54 @math m and 5.0 @math m at the Sparrow limit, respectively), across a total imaging volume of 2.2 mm @math 2.2 mm @math 110 @math m. Unlike competing methods, the 3D tomograms presented in this article are continuous, quantitative, and formed without the need for interferometry or any moving parts. Wide field-of-view reconstructions of thick biological specimens demonstrate potential applications in pathology and developmental biology. | Here, we first outline a solid foundation for the application of ptychographic phase retrieval to DT. Unlike approaching the problem from a projection-based or multi-slice perspective, the framework of DT (under the first Born approximation) follows directly from the scalar wave equation. It offers a clear picture of achievable resolution in 3D, spells out sampling and data redundancy requirements for an accurate reconstruction, and presents a clear path forward for future extensions to account for multiple scattering @cite_6 . Furthermore, our method does not require the arbitrary assignment of the number of slices in the 3D volume, or their location, or for us to select a particular order in which to address each slice as iterations proceed. Instead, it simply inserts the measured data into its appropriate location in 3D Fourier space and ensures phase consistency between each measurement, given a sufficient amount of data redundancy (just like ptychography).
From the initial starting point of solving for the first term in the Born expansion, we intend this approach as a general framework for eventually solving the challenging problem of forming tomographic maps of volumetric samples, at sub-micrometer resolution, in the presence of significant scattering. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2003147468"
],
"abstract": [
"A new method, based on an iterative procedure, for solving the two-dimensional inverse scattering problem is presented. This method employs an equivalent Neumann series solution in each iteration step. The purpose of the algorithm is to provide a general method to solve the two-dimensional imaging problem when the Born and the Rytov approximations break down. Numerical simulations were calculated for several cases where the conditions for the first order Born approximation were not satisfied. The results show that in both high and low frequency cases, good reconstructed profiles and smoothed versions of the original profiles can be obtained for smoothly varying permittivity profiles (lossless) and discontinuous profiles (lossless), respectively. A limited number of measurements around the object at a single frequency with four to eight plane incident waves from different directions are used. The method proposed in this article could easily be applied to the three-dimensional inverse scattering problem, if computational resources are available."
]
} |
1510.08525 | 2235799563 | This paper presents an intelligent tutoring system, GeoTutor, for Euclidean Geometry that is able to automatically synthesize proof problems and their respective solutions given a geometric figure together with a set of properties true of it. GeoTutor can provide personalized practice problems that address student deficiencies in the subject matter. | Recently, automatic problem generation has gained new interest with novel approaches in problem generation for natural deduction @cite_1 , algebraic proof problems @cite_15 , mathematical procedural problems @cite_14 , embedded systems @cite_5 , Geometry constructions @cite_17 , etc. All of them apply a similar technique: they first generalize an existing problem into a template, and then explore a space of possible solutions that fit this template. However, the specific approaches vary: some build templates automatically, some do so semi-automatically, and others write templates manually. In contrast to this line of work, we do not use any manually written templates. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_5",
"@cite_15",
"@cite_17"
],
"mid": [
"2124031246",
"1765180305",
"2011781104",
"2152570960",
"2115007594"
],
"abstract": [
"A key challenge in teaching a procedural skill is finding an effective progression of example problems that the learner can solve in order to internalize the procedure. In many learning domains, generation of such problems is typically done by hand and there are few tools to help automate this process. We reduce this effort by borrowing ideas from test input generation in software engineering. We show how we can use execution traces as a framework for abstracting the characteristics of a given procedure and defining a partial ordering that reflects the relative difficulty of two traces. We also show how we can use this framework to analyze the completeness of expert-designed progressions and fill in holes. Furthermore, we demonstrate how our framework can automatically synthesize new problems by generating large sets of problems for elementary and middle school mathematics and synthesizing hundreds of levels for a popular algebra-learning game. We present the results of a user study with this game confirming that our partial ordering can predict user evaluation of procedural difficulty better than baseline methods.",
"Natural deduction, which is a method for establishing validity of propositional type arguments, helps develop important reasoning skills and is thus a key ingredient in a course on introductory logic. We present two core components, namely solution generation and practice problem generation, for enabling computer-aided education for this important subject domain. The key enabling technology is use of an offline-computed data-structure called Universal Proof Graph (UPG) that encodes all possible applications of inference rules over all small propositions abstracted using their bitvector-based truth-table representation. This allows an efficient forward search for solution generation. More interestingly, this allows generating fresh practice problems that have given solution characteristics by performing a backward search in UPG. We obtained around 300 natural deduction problems from various textbooks. Our solution generation procedure can solve many more problems than the traditional forward-chaining based procedure, while our problem generation procedure can efficiently generate several variants with desired characteristics.",
"The advent of massively open online courses (MOOCs) poses several technical challenges for educators. One of these challenges is the need to automate, as much as possible, the generation of problems, creation of solutions, and grading, in order to deal with the huge number of students. We collectively refer to this challenge as automated exercise generation. In this paper, we present a step towards tackling this challenge for an embedded systems course. We present a template-based approach to classifying problems in a recent textbook by Lee and Seshia, and outline approaches to problem and solution generation based on mutation and satisfiability solving. Several directions for future work are also outlined.",
"We propose computer-assisted techniques for helping with pedagogy in Algebra. In particular, given a proof problem p (of the form Left-hand-side-term = Righthand-side-term), we show how to automatically generate problems that are similar to p. We believe that such a tool can be used by teachers in making examinations where they need to test students on problems similar to what they taught in class, and by students in generating practice problems tailored to their specific needs. Our first insight is that we can generalize p syntactically to a query Q that implicitly represents a set of problems [[Q]] (which includes p). Our second insight is that we can explore the space of problems [[Q]] automatically, use classical results from polynomial identity testing to generate only those problems in [[Q]] that are correct, and then use pruning techniques to generate only unique and interesting problems. Our third insight is that with a small amount of manual tuning on the query Q, the user can interactively guide the computer to generate problems of interest to her.We present the technical details of the above mentioned steps, and also describe a tool where these steps have been implemented. We also present an empirical evaluation on a wide variety of problems from various sub-fields of algebra including polynomials, trigonometry, calculus, determinants etc. Our tool is able to generate a rich corpus of similar problems from each given problem; while some of these similar problems were already present in the textbook, several were new!.",
"In this paper, we study the problem of automatically solving ruler compass based geometry construction problems. We first introduce a logic and a programming language for describing such constructions and then phrase the automation problem as a program synthesis problem. We then describe a new program synthesis technique based on three key insights: (i) reduction of symbolic reasoning to concrete reasoning (based on a deep theoretical result that reduces verification to random testing), (ii) extending the instruction set of the programming language with higher level primitives (representing basic constructions found in textbook chapters, inspired by how humans use their experience and knowledge gained from chapters to perform complicated constructions), and (iii) pruning the forward exhaustive search using a goal-directed heuristic (simulating backward reasoning performed by humans). Our tool can successfully synthesize constructions for various geometry problems picked up from high-school textbooks and examination papers in a reasonable amount of time. This opens up an amazing set of possibilities in the context of making classroom teaching interactive."
]
} |
1510.08282 | 2354445231 | Crowdsourcing has revolutionized the process of knowledge building on the web. Wikipedia and StackOverflow are witness to this uprising development. However, the dynamics behind the process of crowdsourcing in the domain of knowledge building is an area relatively unexplored. It has been observed that an ecosystem exists in the collaborative knowledge building environments (KBE), which puts users of a KBE into various categories based on their expertise. Classical cognitive theories indicate triggering among the knowledge units to be one of the most important reasons behind accelerated knowledge building in collaborative KBEs. We use the concept of ecosystem and the triggering phenomenon to highlight the necessity for the right mix of users in a KBE. We provide a hill climbing based algorithm which gives the ideal mixture of users in a KBE, given the amount of triggering that takes place among the users of various categories. The study will help the portal designers to accordingly build suitable crowdsourced environments. | A study in the context of the right mix of users in the domain of problem solving was performed by Scott hong2004groups, page2008difference . The author states that a group of randomly selected people outperforms a group of the best-performing people. This is because these randomly selected people bring diverse knowledge into the system and are hence able to perform better while solving a problem. On the contrary, the best-performing agents bring in similar types of knowledge to the system and hence might not be able to solve the problem as well. However, Thompson thompson2014does recently came up with a counter paper to Scott's work, claiming that his paper does not provide any foundation for the argument that diversity actually trumps ability. krause2011swarm argue that adding diversity to a group can be more advantageous than adding expertise to the group.
Erickson et al. erickson2012hanging provide a framework for selecting a crowd matching organizational needs. The authors state that different tasks require different crowds with different skills and knowledge. Kobren et al. @cite_8 observe in their recent work that intelligently assigning tasks to users significantly increases the value of a crowdsourced system. | {
"cite_N": [
"@cite_8"
],
"mid": [
"1786164014"
],
"abstract": [
"In crowdsourcing systems, the interests of contributing participants and system stakeholders are often not fully aligned. Participants seek to learn, be entertained, and perform easy tasks, which offer them instant gratification; system stakeholders want users to complete more difficult tasks, which bring higher value to the crowdsourced application. We directly address this problem by presenting techniques that optimize the crowdsourcing process by jointly maximizing the user longevity in the system and the true value that the system derives from user participation. We first present models that predict the \"survival probability\" of a user at any given moment, that is, the probability that a user will proceed to the next task offered by the system. We then leverage this survival model to dynamically decide what task to assign and what motivating goals to present to the user. This allows us to jointly optimize for the short term (getting difficult tasks done) and for the long term (keeping users engaged for longer periods of time). We show that dynamically assigning tasks significantly increases the value of a crowdsourcing system. In an extensive empirical evaluation, we observed that our task allocation strategy increases the amount of information collected by up to 117.8 . We also explore the utility of motivating users with goals. We demonstrate that setting specific, static goals can be highly detrimental to the long-term user participation, as the completion of a goal (e.g., earning a badge) is also a common drop-off point for many users. We show that setting the goals dynamically, in conjunction with judicious allocation of tasks, increases the amount of information collected by the crowdsourcing system by up to 249 , compared to the existing baselines that use fixed objectives."
]
} |
1510.08012 | 1836794077 | Structure-from-motion (SfM) largely relies on feature tracking. In image sequences, if disjointed tracks caused by objects moving in and out of the field of view, occasional occlusion, or image noise are not handled well, corresponding SfM could be affected. This problem becomes more severe for large-scale scenes, which typically requires capturing multiple sequences to cover the whole scene. In this paper, we propose an efficient non-consecutive feature tracking framework to match interrupted tracks distributed in different subsequences or even in different videos. Our framework consists of steps of solving the feature “dropout” problem when indistinctive structures, noise or large image distortion exists, and of rapidly recognizing and joining common features located in different subsequences. In addition, we contribute an effective segment-based coarse-to-fine SfM algorithm for robustly handling large data sets. Experimental results on challenging video data demonstrate the effectiveness of the proposed system. | For video tracking, sequential matchers are used for establishing correspondences between consecutive frames. The Kanade-Lucas-Tomasi (KLT) tracker @cite_9 @cite_0 is widely used for small-baseline matching. Other methods detect image features and match them considering local image patches @cite_14 @cite_61 or advanced descriptors @cite_16 @cite_23 @cite_40 @cite_46 . | {
"cite_N": [
"@cite_61",
"@cite_14",
"@cite_9",
"@cite_0",
"@cite_40",
"@cite_23",
"@cite_46",
"@cite_16"
],
"mid": [
"2007079389",
"",
"2118877769",
"1674816034",
"2124404372",
"2177274842",
"1677409904",
"2151103935"
],
"abstract": [
"This paper presents a new real-time localization system for a mobile robot. We show that autonomous navigation is possible in outdoor situation with the use of a single camera and natural landmarks. To do that, we use a three step approach. In a learning step, the robot is manually guided on a path and a video sequence is recorded with a front looking camera. Then a structure from motion algorithm is used to build a 3D map from this learning sequence. Finally in the navigation step, the robot uses this map to compute its localization in real-time and it follows the learning path or a slightly different path if desired. The vision algorithms used for map building and localization are first detailed. Then a large part of the paper is dedicated to the experimental evaluation of the accuracy and robustness of our algorithms based on experimental data collected during two years in various environments.",
"",
"Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is taster because it examines far fewer potential matches between the images than existing techniques Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted tor use in a stereo vision system.",
"",
"Abstract The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints is studied. A new set of image elements that are put into correspondence, the so called extremal regions , is introduced. Extremal regions possess highly desirable properties: the set is closed under (1) continuous (and thus projective) transformation of image coordinates and (2) monotonic transformation of image intensities. An efficient (near linear complexity) and practically fast detection algorithm (near frame rate) is presented for an affinely invariant stable subset of extremal regions, the maximally stable extremal regions (MSER). A new robust similarity measure for establishing tentative correspondences is proposed. The robustness ensures that invariants from multiple measurement regions (regions obtained by invariant constructions from extremal regions), some that are significantly larger (and hence discriminative) than the MSERs, may be used to establish tentative correspondences. The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes. Significant change of scale (3.5×), illumination conditions, out-of-plane rotation, occlusion, locally anisotropic scale change and 3D translation of the viewpoint are all present in the test problems. Good estimates of epipolar geometry (average distance from corresponding points to the epipolar line below 0.09 of the inter-pixel distance) are obtained.",
"In this paper, we compare the performance of descriptors computed for local interest regions, as, for example, extracted by the Harris-Affine detector [Mikolajczyk, K and Schmid, C, 2004]. Many different descriptors have been proposed in the literature. It is unclear which descriptors are more appropriate and how their performance depends on the interest region detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the detector. Our evaluation uses as criterion recall with respect to precision and is carried out for different image transformations. We compare shape context [Belongie, S, et al, April 2002], steerable filters [Freeman, W and Adelson, E, Setp. 1991], PCA-SIFT [Ke, Y and Sukthankar, R, 2004], differential invariants [Koenderink, J and van Doorn, A, 1987], spin images [Lazebnik, S, et al, 2003], SIFT [Lowe, D. G., 1999], complex filters [Schaffalitzky, F and Zisserman, A, 2002], moment invariants [Van Gool, L, et al, 1996], and cross-correlation for different types of interest regions. We also propose an extension of the SIFT descriptor and show that it outperforms the original method. Furthermore, we observe that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best. Moments and steerable filters show the best performance among the low dimensional descriptors.",
"In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance."
]
} |
1510.08012 | 1836794077 | Structure-from-motion (SfM) largely relies on feature tracking. In image sequences, if disjointed tracks caused by objects moving in and out of the field of view, occasional occlusion, or image noise are not handled well, corresponding SfM could be affected. This problem becomes more severe for large-scale scenes, which typically requires capturing multiple sequences to cover the whole scene. In this paper, we propose an efficient non-consecutive feature tracking framework to match interrupted tracks distributed in different subsequences or even in different videos. Our framework consists of steps of solving the feature “dropout” problem when indistinctive structures, noise or large image distortion exists, and of rapidly recognizing and joining common features located in different subsequences. In addition, we contribute an effective segment-based coarse-to-fine SfM algorithm for robustly handling large data sets. Experimental results on challenging video data demonstrate the effectiveness of the proposed system. | Both the KLT tracker and invariant feature algorithms depend on modeling feature appearance, and can be distracted by occlusion, similar structures, and noise. Generally, sequential matchers struggle to match non-consecutive frames under large image transformations. Scale-invariant feature detection and matching algorithms @cite_16 @cite_46 are effective in matching images with large transformations, but they generally produce many short tracks in consecutive point tracking, due primarily to the global indistinctiveness and feature dropout problems. In addition, invariant features are relatively sensitive to perspective distortion. Although variations, such as ASIFT @cite_28 , can improve matching performance under substantial viewpoint change, the computational overhead increases owing to exhaustive viewpoint simulation.
Cordes et al. @cite_53 proposed a memory-based tracking method to extend feature trajectories by matching each frame to its neighbors. However, if an object re-enters the field of view after a long period of time, the neighborhood window has to be very large. Moreover, the multiple-video setting was not discussed. In contrast, our method can not only extend track lifetime but also efficiently match common feature tracks in different subsequences by iteratively matching overlapping frame pairs and refining the match matrix. The computational complexity is linear in the number of overlapping frame pairs. | {
"cite_N": [
"@cite_28",
"@cite_46",
"@cite_16",
"@cite_53"
],
"mid": [
"2052094314",
"1677409904",
"2151103935",
"1925320057"
],
"abstract": [
"If a physical object has a smooth or piecewise smooth boundary, its images obtained by cameras in varying positions undergo smooth apparent deformations. These deformations are locally well approximated by affine transforms of the image plane. In consequence the solid object recognition problem has often been led back to the computation of affine invariant image local features. Such invariant features could be obtained by normalization methods, but no fully affine normalization method exists for the time being. Even scale invariance is dealt with rigorously only by the scale-invariant feature transform (SIFT) method. By simulating zooms out and normalizing translation and rotation, SIFT is invariant to four out of the six parameters of an affine transform. The method proposed in this paper, affine-SIFT (ASIFT), simulates all image views obtainable by varying the two camera axis orientation parameters, namely, the latitude and the longitude angles, left over by the SIFT method. Then it covers the other four parameters by using the SIFT method itself. The resulting method will be mathematically proved to be fully affine invariant. Against any prognosis, simulating all views depending on the two camera orientation parameters is feasible with no dramatic computational load. A two-resolution scheme further reduces the ASIFT complexity to about twice that of SIFT. A new notion, the transition tilt, measuring the amount of distortion from one view to another, is introduced. While an absolute tilt from a frontal to a slanted view exceeding 6 is rare, much higher transition tilts are common when two slanted views of an object are compared (see Figure hightransitiontiltsillustration). The attainable transition tilt is measured for each affine image comparison method. The new method permits one to reliably identify features that have undergone transition tilts of large magnitude, up to 36 and higher. 
This fact is substantiated by many experiments which show that ASIFT significantly outperforms the state-of-the-art methods SIFT, maximally stable extremal region (MSER), Harris-affine, and Hessian-affine.",
"In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"Common techniques in structure from motion do not explicitly handle foreground occlusions and disocclusions, leading to several trajectories of a single 3D point. Hence, different discontinued trajectories induce a set of (more inaccurate) 3D points instead of a single 3D point, so that it is highly desirable to enforce long continuous trajectories which automatically bridge occlusions after a re-identification step. The solution proposed in this paper is to connect features in the current image to trajectories which discontinued earlier during the tracking. This is done using a correspondence analysis which is designed for wide baselines and an outlier elimination strategy using the epipolar geometry. The reference to the 3D object points can be used as a new constraint in the bundle adjustment. The feature localization is done using the SIFT detector extended by a Gaussian approximation of the gradient image signal. This technique provides the robustness of SIFT coupled with increased localization accuracy. Our results show that the reconstruction can be drastically improved and the drift is reduced, especially in sequences with occlusions resulting from foreground objects. In scenarios with large occlusions, the new approach leads to reliable and accurate results while a standard reference method fails."
]
} |
1510.08012 | 1836794077 | Structure-from-motion (SfM) largely relies on feature tracking. In image sequences, if disjointed tracks caused by objects moving in and out of the field of view, occasional occlusion, or image noise are not handled well, corresponding SfM could be affected. This problem becomes more severe for large-scale scenes, which typically requires capturing multiple sequences to cover the whole scene. In this paper, we propose an efficient non-consecutive feature tracking framework to match interrupted tracks distributed in different subsequences or even in different videos. Our framework consists of steps of solving the feature “dropout” problem when indistinctive structures, noise or large image distortion exists, and of rapidly recognizing and joining common features located in different subsequences. In addition, we contribute an effective segment-based coarse-to-fine SfM algorithm for robustly handling large data sets. Experimental results on challenging video data demonstrate the effectiveness of the proposed system. | @cite_34 proposed using dense 3D geometry information to extend SIFT features. In contrast, our method uses only sparse matches to estimate a set of homographies to represent scene motion, which also handles viewpoint change. It is general since no geometry is required. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2133605096"
],
"abstract": [
"The robust alignment of images and scenes seen from widely different viewpoints is an important challenge for camera and scene reconstruction. This paper introduces a novel class of viewpoint independent local features for robust registration and novel algorithms to use the rich information of the new features for 3D scene alignment and large scale scene reconstruction. The key point of our approach consists of leveraging local shape information for the extraction of an invariant feature descriptor. The advantages of the novel viewpoint invariant patch (VIP) are: that the novel features are invariant to 3D camera motion and that a single VIP correspondence uniquely defines the 3D similarity transformation between two scenes. In the paper we demonstrate how to use the properties of the VIPs in an efficient matching scheme for 3D scene alignment. The algorithm is based on a hierarchical matching method which tests the components of the similarity transformation sequentially to allow efficient matching and 3D scene alignment. We evaluate the novel features on real data with known ground truth information and show that the features can be used to reconstruct large scale urban scenes."
]
} |