Column schema (name: type, value-length range):
aid: string, 9-15 chars
mid: string, 7-10 chars
abstract: string, 78-2.56k chars
related_work: string, 92-1.77k chars
ref_abstract: dict
1611.05537
2951449363
We study the tandem duplication distance between binary sequences and their roots. In other words, the quantity of interest is the number of tandem duplication operations of the form @math , where @math and @math are sequences and @math , @math , and @math are their substrings, needed to generate a binary sequence of length @math starting from a square-free sequence from the set @math . This problem is a restricted case of finding the duplication/deduplication distance between two sequences, defined as the minimum number of duplication and deduplication operations required to transform one sequence into the other. We consider both exact and approximate tandem duplications. For exact duplication, denoting the maximum distance to the root of a sequence of length @math by @math , we prove that @math . For the case of approximate duplication, where a @math -fraction of symbols may be duplicated incorrectly, we show that the maximum distance has a sharp transition from linear in @math to logarithmic at @math . We also study the duplication distance to the root for sequences with a given root and for special classes of sequences, namely, the de Bruijn sequences, the Thue-Morse sequence, and the Fibonacci words. The problem is motivated by genomic tandem duplication mutations and the smallest number of tandem duplication events required to generate a given biological sequence.
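The tandem duplication operation and the distance to the root described in the abstract can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's algorithm: the greedy deduplication below only yields an upper bound on the true (minimum) distance, and all function names are ours.

```python
def tandem_duplicate(s, start, length):
    """Apply one tandem duplication: copy the substring
    s[start:start+length] and insert it immediately after itself."""
    block = s[start:start + length]
    return s[:start + length] + block + s[start + length:]

def deduplicate_once(s):
    """Undo one tandem duplication if a square (adjacent repeated
    block) exists; greedily removes the longest square found."""
    n = len(s)
    for length in range(n // 2, 0, -1):
        for i in range(n - 2 * length + 1):
            if s[i:i + length] == s[i + length:i + 2 * length]:
                return s[:i] + s[i + length:]
    return None  # s is square-free, i.e. already a root

def distance_to_root(s):
    """Greedy upper bound on the tandem duplication distance from a
    square-free root to s (not necessarily the exact minimum)."""
    d = 0
    while (t := deduplicate_once(s)) is not None:
        s, d = t, d + 1
    return d
```

For example, duplicating the middle symbol of `010` yields `0110`, and one deduplication takes `0110` back to the square-free root `010`.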
Finally, we mention that the stochastic behavior of certain duplication systems has been studied in @cite_3 @cite_11 , and error-correcting codes for combating duplication errors have been introduced in @cite_12 .
{ "cite_N": [ "@cite_12", "@cite_3", "@cite_11" ], "mid": [ "2340242219", "2517778131", "" ], "abstract": [ "The ability to store data in the DNA of a living organism has applications in a variety of areas including synthetic biology and watermarking of patented genetically-modified organisms. Data stored in this medium is subject to errors arising from various mutations, such as point mutations, indels, and tandem duplication, which need to be corrected to maintain data integrity. In this paper, we provide error-correcting codes for errors caused by tandem duplications, which create a copy of a block of the sequence and insert it in a tandem manner, i.e., next to the original. In particular, we present a family of codes for correcting errors due to tandem-duplications of a fixed length and any number of errors. We also study codes for correcting tandem duplications of length up to a given constant k, where we are primarily focused on the cases of k = 2, 3.", "We study random string-duplication systems, called Polya string models, motivated by certain random mutation processes in the genome of living organisms. Unlike previous works that study the combinatorial capacity of string-duplication systems, or peripheral properties such as symbol frequency, this work provides exact capacity or bounds on it, for several probabilistic models. In particular, we give the exact capacity of the random tandem-duplication system, and the end-duplication system, and bound the capacity of the complement tandem-duplication system. Interesting connections are drawn between the former and the beta distribution common to population genetics, as well as between the latter system and signatures of random permutations.", "" ] }
1611.05418
2618481090
This paper proposes a new method, that we call VisualBackProp, for visualizing which sets of pixels of the input image contribute most to the predictions made by the convolutional neural network (CNN). The method heavily hinges on exploring the intuition that the feature maps contain less and less irrelevant information to the prediction decision when moving deeper into the network. The technique we propose was developed as a debugging tool for CNN-based systems for steering self-driving cars and is therefore required to run in real-time, i.e. it was designed to require less computations than a forward propagation. This makes the presented visualization method a valuable debugging tool which can be easily used during both training and inference. We furthermore justify our approach with theoretical arguments and theoretically confirm that the proposed method identifies sets of input pixels, rather than individual pixels, that collaboratively contribute to the prediction. Our theoretical findings stand in agreement with the experimental results. The empirical evaluation shows the plausibility of the proposed approach on the road video data as well as in other applications and reveals that it compares favorably to the layer-wise relevance propagation approach, i.e. it obtains similar visualization results and simultaneously achieves order of magnitude speed-ups.
A notable approach @cite_6 addressing the problem of understanding classification decisions by pixel-wise decomposition of non-linear classifiers proposes a methodology called layer-wise relevance propagation, where the prediction is back-propagated without using gradients such that the relevance of each neuron is redistributed to its predecessors through a particular message-passing scheme relying on the conservation principle. The stability of the method and the sensitivity to different settings of the conservation parameters were studied in the context of several deep learning models @cite_23 . The LRP technique was extended to Fisher Vector classifiers @cite_19 and also used to explain predictions of CNNs in NLP applications @cite_15 . An extensive comparison of LRP with other techniques, like the deconvolution method @cite_11 and the sensitivity-based approach @cite_16 , which we also discuss next in this section, using an evaluation based on region perturbation can be found in @cite_14 . This study reveals that LRP provides a better explanation of the DNN classification decisions than the considered competitors. We thus chose LRP as a competitive technique to our method in the experimental section.
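The conservation-based redistribution at the heart of LRP can be sketched for a single linear layer using the epsilon stabilization rule. The function name and toy numbers are ours; real implementations also handle convolutions, pooling, and nonlinearities.

```python
import numpy as np

def lrp_linear(x, W, b, R_out, eps=1e-6):
    """Redistribute output relevances R_out back to the inputs of one
    linear layer (z = W @ x + b) with the LRP epsilon rule: each input
    receives relevance in proportion to its contribution W[j, i] * x[i]
    to each output activation, so total relevance is (approximately)
    conserved."""
    z = W @ x + b                                  # forward activations
    z_stab = z + eps * np.where(z >= 0, 1.0, -1.0) # stabilize near-zero denominators
    s = R_out / z_stab                             # relevance per unit of activation
    return x * (W.T @ s)                           # redistribute to inputs

# Toy conservation check (bias set to zero so relevance is conserved
# up to the epsilon stabilizer):
x = np.array([1.0, 2.0])
W = np.array([[0.5, 0.25], [1.0, 0.75]])
R_out = np.array([1.0, 1.0])
R_in = lrp_linear(x, W, np.zeros(2), R_out)
```

With these numbers, the input relevances sum to the same total as the output relevances, which is exactly the conservation principle the message-passing scheme enforces layer by layer.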
{ "cite_N": [ "@cite_14", "@cite_6", "@cite_19", "@cite_23", "@cite_15", "@cite_16", "@cite_11" ], "mid": [ "2950860813", "1787224781", "2964231383", "2498056627", "1533957530", "2962851944", "2952186574" ], "abstract": [ "Deep Neural Networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multi-layer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the ''importance'' of individual pixels wrt the classification decision and allow a visualization in terms of a heatmap in pixel input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012 and MIT Places data sets. Our main result is that the recently proposed Layer-wise Relevance Propagation (LRP) algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. 
Finally, we investigate the use of heatmaps for unsupervised assessment of neural network performance.", "Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package.", "Fisher vector (FV) classifiers and Deep Neural Networks (DNNs) are popular and successful algorithms for solving image classification problems. However, both are generally considered 'black box' predictors as the non-linear transformations involved have so far prevented transparent and interpretable reasoning. Recently, a principled technique, Layer-wise Relevance Propagation (LRP), has been developed in order to better comprehend the inherent structured reasoning of complex nonlinear classification models such as Bag of Feature models or DNNs. 
In this paper we (1) extend the LRP framework also for Fisher vector classifiers and then use it as analysis tool to (2) quantify the importance of context for classification, (3) qualitatively compare DNNs against FV classifiers in terms of important image regions and (4) detect potential flaws and biases in data. All experiments are performed on the PASCAL VOC 2007 and ILSVRC 2012 data sets.", "We present the application of layer-wise relevance propagation to several deep neural networks such as the BVLC reference neural net and googlenet trained on ImageNet and MIT Places datasets. Layer-wise relevance propagation is a method to compute scores for image pixels and image regions denoting the impact of the particular image region on the prediction of the classifier for one particular test image. We demonstrate the impact of different parameter settings on the resulting explanation.", "Machine-learned classifiers are important components of many data mining and knowledge discovery systems. In several application domains, an explanation of the classifier's reasoning is critical for the classifier's acceptance by the end-user. We describe a framework, ExplainD, for explaining decisions made by classifiers that use additive evidence. ExplainD applies to many widely used classifiers, including linear discriminants and many additive models. We demonstrate our ExplainD framework using implementations of naive Bayes, linear support vector machine, and logistic regression classifiers on example applications. ExplainD uses a simple graphical explanation of the classification process to provide visualizations of the classifier decisions, visualization of the evidence for those decisions, the capability to speculate on the effect of changes to the data, and the capability, wherever possible, to drill down and audit the source of the evidence. 
We demonstrate the effectiveness of ExplainD in the context of a deployed web-based system (Proteome Analyst) and using a downloadable Python-based implementation.", "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets." ] }
1611.05418
2618481090
This paper proposes a new method, that we call VisualBackProp, for visualizing which sets of pixels of the input image contribute most to the predictions made by the convolutional neural network (CNN). The method heavily hinges on exploring the intuition that the feature maps contain less and less irrelevant information to the prediction decision when moving deeper into the network. The technique we propose was developed as a debugging tool for CNN-based systems for steering self-driving cars and is therefore required to run in real-time, i.e. it was designed to require less computations than a forward propagation. This makes the presented visualization method a valuable debugging tool which can be easily used during both training and inference. We furthermore justify our approach with theoretical arguments and theoretically confirm that the proposed method identifies sets of input pixels, rather than individual pixels, that collaboratively contribute to the prediction. Our theoretical findings stand in agreement with the experimental results. The empirical evaluation shows the plausibility of the proposed approach on the road video data as well as in other applications and reveals that it compares favorably to the layer-wise relevance propagation approach, i.e. it obtains similar visualization results and simultaneously achieves order of magnitude speed-ups.
The fundamental difference between the LRP approach and the deconvolution method lies in how the responses are projected towards the inputs. The latter solves an optimization problem to reconstruct the image input, while the former aims to reconstruct the classifier decision (the details are well explained in @cite_6 ).
{ "cite_N": [ "@cite_6" ], "mid": [ "1787224781" ], "abstract": [ "Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package." ] }
1611.05418
2618481090
This paper proposes a new method, that we call VisualBackProp, for visualizing which sets of pixels of the input image contribute most to the predictions made by the convolutional neural network (CNN). The method heavily hinges on exploring the intuition that the feature maps contain less and less irrelevant information to the prediction decision when moving deeper into the network. The technique we propose was developed as a debugging tool for CNN-based systems for steering self-driving cars and is therefore required to run in real-time, i.e. it was designed to require less computations than a forward propagation. This makes the presented visualization method a valuable debugging tool which can be easily used during both training and inference. We furthermore justify our approach with theoretical arguments and theoretically confirm that the proposed method identifies sets of input pixels, rather than individual pixels, that collaboratively contribute to the prediction. Our theoretical findings stand in agreement with the experimental results. The empirical evaluation shows the plausibility of the proposed approach on the road video data as well as in other applications and reveals that it compares favorably to the layer-wise relevance propagation approach, i.e. it obtains similar visualization results and simultaneously achieves order of magnitude speed-ups.
Guided backpropagation @cite_0 extends the deconvolution approach by combining it with a simple technique visualizing the part of the image that most activates a given neuron using a backward pass of the activation of a single neuron after a forward pass through the network. Finally, the recently published method @cite_18 based on the prediction difference analysis @cite_25 is a probabilistic approach that extends the idea in @cite_11 of visualizing the probability of the correct class using the occlusion of the parts of the image. The approach highlights the regions of the input image of a CNN which provide evidence for or against a certain class.
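The occlusion idea underlying this family of methods can be sketched as follows: slide an occluding patch over the image and record how much the class score drops; large drops mark regions that provide evidence for the class. Here `score_fn` stands in for any trained classifier's class-score function, and the simple non-overlapping patch grid is our assumption.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8, fill=0.0):
    """Build a coarse evidence map: occlude each patch-sized region
    of `image` with the constant `fill` value and record the drop in
    `score_fn` relative to the unoccluded image."""
    H, W = image.shape[:2]
    base = score_fn(image)
    heat = np.zeros((H // patch, W // patch))
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat
```

A toy sanity check: if the "classifier" only looks at the top-left quadrant of a 16x16 image, occluding that quadrant produces the entire score drop and every other cell of the map stays at zero.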
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_25", "@cite_11" ], "mid": [ "2123045220", "2590082389", "1945616565", "2952186574" ], "abstract": [ "Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.", "This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input. When classifying images, the method highlights areas in a given input image that provide evidence for or against a certain class. It overcomes several shortcoming of previous methods and provides great additional insight into the decision making process of classifiers. Making neural network decisions interpretable through visualization is important both to improve models and to accelerate the adoption of black-box classifiers in application areas such as medicine. 
We illustrate the method in experiments on natural images (ImageNet data), as well as medical images (MRI brain scans).", "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets." ] }
1611.05418
2618481090
This paper proposes a new method, that we call VisualBackProp, for visualizing which sets of pixels of the input image contribute most to the predictions made by the convolutional neural network (CNN). The method heavily hinges on exploring the intuition that the feature maps contain less and less irrelevant information to the prediction decision when moving deeper into the network. The technique we propose was developed as a debugging tool for CNN-based systems for steering self-driving cars and is therefore required to run in real-time, i.e. it was designed to require less computations than a forward propagation. This makes the presented visualization method a valuable debugging tool which can be easily used during both training and inference. We furthermore justify our approach with theoretical arguments and theoretically confirm that the proposed method identifies sets of input pixels, rather than individual pixels, that collaboratively contribute to the prediction. Our theoretical findings stand in agreement with the experimental results. The empirical evaluation shows the plausibility of the proposed approach on the road video data as well as in other applications and reveals that it compares favorably to the layer-wise relevance propagation approach, i.e. it obtains similar visualization results and simultaneously achieves order of magnitude speed-ups.
Understanding CNNs can also be done by visualizing output units as distributions in the input space via output unit sampling @cite_20 . However, computing relevant statistics of the obtained distribution is often difficult. Unlike the subsequent work @cite_13 @cite_2 , this technique cannot be applied to deep architectures based on auto-encoders. In the subsequent work, the authors visualize what is activated by a unit in an arbitrary layer of a CNN in the input space (of images) via an activation maximization procedure that looks for input patterns of a bounded norm that maximize the activation of a given hidden unit using gradient ascent. This method extends previous approaches @cite_4 . The gradient-based visualization method @cite_2 can also be viewed as a generalization of the deconvolutional network reconstruction procedure @cite_11 , as shown in subsequent work @cite_16 . The requirement of careful initialization limits the method @cite_11 . The approach was applied to Stacked Denoising Auto-Encoders, Deep Belief Networks, and later to CNNs @cite_16 . Finally, sensitivity-based methods @cite_16 @cite_17 @cite_9 aim to understand how the classifier works in different parts of the input domain by computing scores based on partial derivatives at the given sample.
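The activation maximization procedure mentioned above (gradient ascent on a norm-bounded input) can be sketched generically. Here `grad_fn` abstracts away the network and returns the gradient of the chosen unit's activation with respect to the input; the linear-unit example is ours, chosen because its maximizer on the unit sphere is known in closed form.

```python
import numpy as np

def activation_maximization(grad_fn, dim, steps=200, lr=0.1, norm=1.0):
    """Gradient ascent on a norm-bounded input to maximize a hidden
    unit's activation: take a gradient step, then project back onto
    the sphere of radius `norm`."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(dim)
    x *= norm / np.linalg.norm(x)
    for _ in range(steps):
        x = x + lr * grad_fn(x)              # ascend the activation
        x *= norm / np.linalg.norm(x)        # project back to the norm ball
    return x

# For a linear unit a(x) = w @ x the gradient is the constant w, so
# the maximizer on the unit sphere is w / ||w||.
w = np.array([3.0, 4.0])
x_star = activation_maximization(lambda x: w, dim=2)
```

For a real CNN, `grad_fn` would be one backward pass from the chosen hidden unit, which is where the careful-initialization sensitivity mentioned above comes in.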
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_17", "@cite_2", "@cite_16", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "1493372406", "2083844448", "", "", "2962851944", "2949987032", "2136922672", "2952186574" ], "abstract": [ "When training deep networks it is common knowledge that an efficient and well generalizing representation of the problem is formed. In this paper we aim to elucidate what makes the emerging representation successful. We analyze the layer-wise evolution of the representation in a deep network by building a sequence of deeper and deeper kernels that subsume the mapping performed by more and more layers of the deep network and measuring how these increasingly complex kernels fit the learning problem. We observe that deep networks create increasingly better representations of the learning problem and that the structure of the deep network controls how fast the representation of the task is formed layer after layer.", "Abstract Convinced by the predictive quality of artificial neural network (ANN) models in ecology, we have turned our interests to their explanatory capacities. 
Seven methods which can give the relative contribution and or the contribution profile of the input factors were compared: (i) the ‘PaD’ (for Partial Derivatives) method consists in a calculation of the partial derivatives of the output according to the input variables; (ii) the ‘Weights’ method is a computation using the connection weights; (iii) the ‘Perturb’ method corresponds to a perturbation of the input variables; (iv) the ‘Profile’ method is a successive variation of one input variable while the others are kept constant at a fixed value; (v) the ‘classical stepwise’ method is an observation of the change in the error value when an adding (forward) or an elimination (backward) step of the input variables is operated; (vi) ‘Improved stepwise a’ uses the same principle as the classical stepwise, but the elimination of the input occurs when the network is trained, the connection weights corresponding to the input variable studied is also eliminated; (vii) ‘Improved stepwise b’ involves the network being trained and fixed step by step, one input variable at its mean value to note the consequences on the error. The data tested in this study concerns the prediction of the density of brown trout spawning redds using habitat characteristics. The PaD method was found to be the most useful as it gave the most complete results, followed by the Profile method that gave the contribution profile of the input variables. The Perturb method allowed a good classification of the input parameters as well as the Weights method that has been simplified but these two methods lack stability. Next came the two improved stepwise methods (a and b) that both gave exactly the same result but the contributions were not sufficiently expressed. Finally, the classical stepwise methods gave the poorest results.", "", "", "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). 
We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].", "Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.", "We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. 
The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets." ] }
1611.05418
2618481090
This paper proposes a new method, that we call VisualBackProp, for visualizing which sets of pixels of the input image contribute most to the predictions made by the convolutional neural network (CNN). The method heavily hinges on exploring the intuition that the feature maps contain less and less irrelevant information to the prediction decision when moving deeper into the network. The technique we propose was developed as a debugging tool for CNN-based systems for steering self-driving cars and is therefore required to run in real-time, i.e. it was designed to require less computations than a forward propagation. This makes the presented visualization method a valuable debugging tool which can be easily used during both training and inference. We furthermore justify our approach with theoretical arguments and theoretically confirm that the proposed method identifies sets of input pixels, rather than individual pixels, that collaboratively contribute to the prediction. Our theoretical findings stand in agreement with the experimental results. The empirical evaluation shows the plausibility of the proposed approach on the road video data as well as in other applications and reveals that it compares favorably to the layer-wise relevance propagation approach, i.e. it obtains similar visualization results and simultaneously achieves order of magnitude speed-ups.
Some more recent gradient-based visualization techniques for CNN-based models not mentioned before include Grad-CAM @cite_27 , which is an extension of the Class Activation Mapping (CAM) method @cite_10 . The approach relies on constructing a weighted sum of the feature maps, where the weights are the global-average-pooled gradients obtained through back-propagation. The approach lacks the ability to show fine-grained importance in the way that pixel-space gradient visualization methods @cite_0 @cite_11 do, and thus in practice it has to be fused with these techniques to create high-resolution class-discriminative visualizations.
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_10", "@cite_11" ], "mid": [ "2123045220", "2530010084", "2950328304", "2952186574" ], "abstract": [ "Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.", "", "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2 top-5 error achieved by a fully supervised CNN approach. 
We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets." ] }
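The Grad-CAM construction discussed in the related work above, a weighted sum of feature maps whose weights are global-average-pooled gradients, can be sketched in a few lines of plain Python. This is a minimal illustration under simplified assumptions, not the authors' implementation: the nested-list inputs stand in for a CNN's forward activations and back-propagated gradients for the class of interest.

```python
def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: lists of HxW maps, one per channel.

    Returns the class-activation map: ReLU of the weighted sum of
    feature maps, with each channel weighted by its globally
    average-pooled gradient.
    """
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, grad in zip(feature_maps, gradients):
        # global average pooling of the gradient gives the channel weight
        alpha = sum(sum(row) for row in grad) / (h * w)
        for i in range(h):
            for j in range(w):
                cam[i][j] += alpha * fmap[i][j]
    # ReLU: keep only features with positive influence on the class score
    return [[max(0.0, v) for v in row] for row in cam]
```

In practice the resulting coarse map is upsampled to input resolution and, as noted above, fused with a pixel-space gradient method to recover fine-grained detail.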
1611.05241
2951505362
We propose a combinatorial solution for the problem of non-rigidly matching a 3D shape to 3D image data. To this end, we model the shape as a triangular mesh and allow each triangle of this mesh to be rigidly transformed to achieve a suitable matching to the image. By penalising the distance and the relative rotation between neighbouring triangles our matching compromises between image and shape information. In this paper, we resolve two major challenges: Firstly, we address the resulting large and NP-hard combinatorial problem with a suitable graph-theoretic approach. Secondly, we propose an efficient discretisation of the unbounded 6-dimensional Lie group SE(3). To our knowledge this is the first combinatorial formulation for non-rigid 3D shape-to-image matching. In contrast to existing local (gradient descent) optimisation methods, we obtain solutions that do not require a good initialisation and that are within a bound of the optimal solution. We evaluate the proposed method on the two problems of non-rigid 3D shape-to-shape and non-rigid 3D shape-to-image registration and demonstrate that it provides promising results.
In many scenarios it is natural to assume that image or shape deformations are spatially continuous and smooth. Frequently, such problems are formulated in terms of optimisation problems over the space of diffeomorphisms @cite_41 @cite_4 @cite_37 @cite_31 . Commonly, gradient descent-like methods are used to obtain (local) optima of the (typically non-convex) problems. However, a major shortcoming of these methods is that a good initial estimate is crucial and in general there are no bounds on the optimality of the solution. To deal with the non-convexity of a 2D shape-to-image matching problem that is formulated in terms of optimal transport, the authors in @cite_35 propose to use a branch and bound scheme.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_4", "@cite_41", "@cite_31" ], "mid": [ "2019588894", "2170167891", "1985415289", "", "2049114803" ], "abstract": [ "A functional for joint variational object segmentation and shape matching is developed. The formulation is based on optimal transport w.r.t. geometric distance and local feature similarity. Geometric invariance and modelling of object-typical statistical variations is achieved by introducing degrees of freedom that describe transformations and deformations of the shape template. The shape model is mathematically equivalent to contour-based approaches but inference can be performed without conversion between the contour and region representations, allowing combination with other convex segmentation approaches and simplifying optimization. While the overall functional is non-convex, non-convexity is confined to a low-dimensional variable. We propose a locally optimal alternating optimization scheme and a globally optimal branch and bound scheme, based on adaptive convex relaxation. Combining both methods allows to eliminate the delicate initialization problem inherent to many contour based approaches while remaining computationally practical. The properties of the functional, its ability to adapt to a wide range of input data structures and the different optimization schemes are illustrated and compared by numerical experiments.", "This paper examines the Euler-Lagrange equations for the solution of the large deformation diffeomorphic metric mapping problem studied in (1998) and Trouve (1995) in which two images $I_0, I_1$ are given and connected via the diffeomorphic change of coordinates $I_0 \circ \phi^{-1} = I_1$, where $\phi = \phi_1$ is the end point at $t = 1$ of the curve $\phi_t$, $t \in [0,1]$, satisfying $\dot\phi_t = v_t(\phi_t)$, $t \in [0,1]$, with $\phi_0 = \mathrm{id}$. The variational problem takes the form @math where $\|v_t\|_V$ is an appropriate Sobolev norm on the velocity field $v_t(\cdot)$, and the second term enforces matching of the images with $\|\cdot\|_{L^2}$ representing the squared-error norm. In this paper we derive the Euler-Lagrange equations characterizing the minimizing vector fields $v_t$, $t \in [0,1]$, assuming sufficient smoothness of the norm to guarantee existence of solutions in the space of diffeomorphisms. We describe the implementation of the Euler equations using a semi-Lagrangian method of computing particle flows and show the solutions for various examples. As well, we compute the metric distance on several anatomical configurations as measured by $\int_0^1 \|v_t\|_V \, dt$ on the geodesic shortest paths.", "We study some Riemannian metrics on the space of regular smooth curves in the plane, viewed as the orbit space of maps from the circle to the plane modulo the group of diffeomorphisms of the circle, acting as reparameterizations. In particular we investigate the L^2 inner product with respect to 1 plus curvature squared times arclength as the measure along a curve, applied to the normal vector field to the curve. The curvature squared term acts as a sort of geometric Tikhonov regularization because, without it, the geodesic distance between any 2 distinct curves is 0, while in our case the distance is always positive. We give some lower bounds for the distance function, derive the geodesic equation and the sectional curvature, solve the geodesic equation with simple endpoints numerically, and pose some open questions. The space has an interesting split personality: among large smooth curves, all its sectional curvatures are positive or 0, while for curves with high curvature or perturbations of high frequency, the curvatures are negative.", "", "Studying large deformations with a Riemannian approach has been an efficient point of view to generate metrics between deformable objects, and to provide accurate, non ambiguous and smooth matchings between images. 
In this paper, we study the geodesics of such large deformation diffeomorphisms, and more precisely, introduce a fundamental property that they satisfy, namely the conservation of momentum. This property allows us to generate and store complex deformations with the help of one initial \"momentum\" which serves as the initial state of a differential equation in the group of diffeomorphisms. Moreover, it is shown that this momentum can be also used for describing a deformation of given visual structures, like points, contours or images, and that, it has the same dimension as the described object, as a consequence of the normal momentum constraint we introduce." ] }
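The related-work paragraph above notes that gradient-descent-like methods for these non-convex matching problems depend crucially on a good initial estimate. That sensitivity is easy to demonstrate on a toy non-convex energy: starting points in different basins converge to different local optima. A minimal sketch, where the 1-D energy $E(x) = (x^2 - 1)^2$ is a hypothetical stand-in for a non-convex matching functional:

```python
def gradient_descent(grad, x0, lr=0.01, steps=2000):
    """Plain gradient descent; converges only to a local optimum."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# E(x) = (x^2 - 1)^2 has two minima, at x = -1 and x = +1;
# which one we reach depends entirely on the initial estimate.
grad_E = lambda x: 4.0 * x * (x * x - 1.0)
```

Initialising at x0 = 0.5 converges to +1 while x0 = -0.5 converges to -1, which mirrors why combinatorial or branch-and-bound formulations with optimality bounds are attractive when no good initialisation is available.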
1611.05241
2951505362
We propose a combinatorial solution for the problem of non-rigidly matching a 3D shape to 3D image data. To this end, we model the shape as a triangular mesh and allow each triangle of this mesh to be rigidly transformed to achieve a suitable matching to the image. By penalising the distance and the relative rotation between neighbouring triangles our matching compromises between image and shape information. In this paper, we resolve two major challenges: Firstly, we address the resulting large and NP-hard combinatorial problem with a suitable graph-theoretic approach. Secondly, we propose an efficient discretisation of the unbounded 6-dimensional Lie group SE(3). To our knowledge this is the first combinatorial formulation for non-rigid 3D shape-to-image matching. In contrast to existing local (gradient descent) optimisation methods, we obtain solutions that do not require a good initialisation and that are within a bound of the optimal solution. We evaluate the proposed method on the two problems of non-rigid 3D shape-to-shape and non-rigid 3D shape-to-image registration and demonstrate that it provides promising results.
In contrast to the continuous local optimisation methods, many vision problems can be formulated in a discrete manner such that they are amenable to solutions based on graph algorithms and dynamic programming (DP) @cite_29 . Since curves are intrinsically one-dimensional, various curve matching formulations can also be reduced to finding a shortest-path in a particular graph. Moreover, based on a recursive formulation using easier-to-solve subproblems, matching problems with templates that have a tree structure can frequently be tackled by DP. For a deformable matching of an open contour to a 2D image, a global solution based on DP has been proposed in @cite_49 . Also based on DP, in @cite_3 the authors present a method for solving the problem of deformably matching a 2D polygon to a 2D image for chordal graph polygons. In @cite_60 , the authors propose a globally optimal approach for matching a closed contour to a 2D image based on cycles in a product graph of the contour and the image. A related formulation that is also based on a product graph has recently been introduced in @cite_12 for deformable contour to 3D shape matching.
{ "cite_N": [ "@cite_60", "@cite_29", "@cite_3", "@cite_49", "@cite_12" ], "mid": [ "2150149968", "2168311572", "2173414649", "2051800765", "" ], "abstract": [ "We propose a combinatorial solution to determine the optimal elastic matching of a deformable template to an image. The central idea is to cast the optimal matching of each template point to a corresponding image pixel as a problem of finding a minimum cost cyclic path in the three-dimensional product space spanned by the template and the input image. We introduce a cost functional associated with each cycle, which consists of three terms: a data fidelity term favoring strong intensity gradients, a shape consistency term favoring similarity of tangent angles of corresponding points, and an elastic penalty for stretching or shrinking. The functional is normalized with respect to the total length to avoid a bias toward shorter curves. Optimization is performed by Lawler's Minimum Ratio Cycle algorithm parallelized on state-of-the-art graphics cards. The algorithm provides the optimal segmentation and point correspondence between template and segmented curve in computation times that are essentially linear in the number of pixels. To the best of our knowledge, this is the only existing globally optimal algorithm for real-time tracking of deformable shapes.", "Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. 
We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.", "We describe some techniques that can be used to represent and detect deformable shapes in images. The main difficulty with deformable template models is the very large or infinite number of possible nonrigid transformations of the templates. This makes the problem of finding an optimal match of a deformable template to an image incredibly hard. Using a new representation for deformable shapes, we show how to efficiently find a global optimal solution to the nonrigid matching problem. The representation is based on the description of objects using triangulated polygons. Our matching algorithm can minimize a large class of energy functions, making it applicable to a wide range of problems. We present experimental results of detecting shapes in medical images and images of natural scenes. We also consider the problem of learning a nonrigid shape model for a class of objects from examples. We show how to learn good models while constraining them to be in the form required by the matching algorithm.", "A novel deformable template is presented which detects the boundary of an open hand in a grayscale image without initialization by the user. A dynamic programming algorithm enhanced by pruning techniques finds the hand contour in the image in as little as 19 s on a Pentium 150 MHz. The template is translation- and rotation-invariant and accommodates shape deformation, significant occlusion and background clutter, and the presence of multiple hands.", "" ] }
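The chain-structured dynamic programmes discussed in the related work above (e.g. open-contour-to-image matching) share the same Viterbi-style recursion: the best cost of assigning contour point t to location l depends only on the best predecessor assignment. A self-contained sketch under simplified assumptions (abstract unary and pairwise costs rather than real image data; the function names are illustrative, not from any cited paper):

```python
def match_open_contour(unary, pairwise):
    """unary[t][l]: cost of assigning contour point t to image location l.
    pairwise(p, l): deformation cost between consecutive assignments.
    Returns (minimal total cost, optimal label sequence) by dynamic
    programming in O(T * L^2) time for T contour points and L locations.
    """
    T, L = len(unary), len(unary[0])
    cost = list(unary[0])   # best cost ending at each location, for point 0
    back = []               # back-pointers for path recovery
    for t in range(1, T):
        new_cost, ptr = [], []
        for l in range(L):
            # best predecessor for location l
            p = min(range(L), key=lambda q: cost[q] + pairwise(q, l))
            new_cost.append(cost[p] + pairwise(p, l) + unary[t][l])
            ptr.append(p)
        cost = new_cost
        back.append(ptr)
    # backtrack from the cheapest final location
    l = min(range(L), key=lambda q: cost[q])
    labels = [l]
    for ptr in reversed(back):
        l = ptr[l]
        labels.append(l)
    labels.reverse()
    return min(cost), labels
```

Closed contours (as in the cyclic product-graph formulations cited above) additionally require optimising over the starting assignment, e.g. by running such a chain DP once per candidate start.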
1611.05241
2951505362
We propose a combinatorial solution for the problem of non-rigidly matching a 3D shape to 3D image data. To this end, we model the shape as a triangular mesh and allow each triangle of this mesh to be rigidly transformed to achieve a suitable matching to the image. By penalising the distance and the relative rotation between neighbouring triangles our matching compromises between image and shape information. In this paper, we resolve two major challenges: Firstly, we address the resulting large and NP-hard combinatorial problem with a suitable graph-theoretic approach. Secondly, we propose an efficient discretisation of the unbounded 6-dimensional Lie group SE(3). To our knowledge this is the first combinatorial formulation for non-rigid 3D shape-to-image matching. In contrast to existing local (gradient descent) optimisation methods, we obtain solutions that do not require a good initialisation and that are within a bound of the optimal solution. We evaluate the proposed method on the two problems of non-rigid 3D shape-to-shape and non-rigid 3D shape-to-image registration and demonstrate that it provides promising results.
Labelling problems are ubiquitous in computer vision and appear in both continuous and discrete settings @cite_17 . The popular Markov Random Field (MRF) framework offers a Bayesian treatment thereof @cite_18 . Also, linear programming relaxations of MRFs have been studied @cite_14 . The continuous approaches to multi-labelling include various convex relaxations @cite_38 @cite_5 @cite_56 @cite_20 , multi-labelling problems with total variation regularisation of functions with values on manifolds @cite_46 , as well as sublabel-accurate convex relaxations @cite_9 @cite_36 . Among the discrete multi-labelling methods are the previously mentioned graph cuts, which can be used to find global solutions for certain binary labelling problems, including problems with submodular pairwise costs @cite_44 . For a sub-class of multi-labelling problems a global solution can also be found @cite_44 ; this sub-class includes pairwise costs that are convex in terms of totally ordered labels @cite_25 . In addition, efficient algorithms for finding local optima of general multi-labelling problems have been proposed @cite_26 @cite_6 , some of which even come with theoretical optimality guarantees. A more detailed description of the energy functions that can be optimised using graph cuts is given in @cite_33 @cite_55 @cite_44 .
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_14", "@cite_26", "@cite_33", "@cite_36", "@cite_9", "@cite_55", "@cite_6", "@cite_56", "@cite_44", "@cite_5", "@cite_46", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2111957738", "", "2135414191", "2143516773", "2101309634", "", "2262594169", "2106569125", "2119867409", "2022767198", "2153396823", "2140843581", "2123513798", "", "", "2128121028" ], "abstract": [ "We propose a spatially continuous formulation of Ishikawa's discrete multi-label problem. We show that the resulting non-convex variational problem can be reformulated as a convex variational problem via embedding in a higher dimensional space. This variational problem can be interpreted as a minimal surface problem in an anisotropic Riemannian space. In several stereo experiments we show that the proposed continuous formulation is superior to its discrete counterpart in terms of computing time, memory efficiency and metrication errors.", "", "The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review 's upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. 
We present an example application for structural image analysis.", "Many tasks in computer vision involve assigning a label (such as disparity) to every pixel. A common constraint is that the labels should vary smoothly almost everywhere while preserving sharp discontinuities that may exist, e.g., at object boundaries. These tasks are naturally stated in terms of energy minimization. The authors consider a wide class of energies with various smoothness constraints. Global minimization of these energy functions is NP-hard even in the simplest discontinuity-preserving case. Therefore, our focus is on efficient approximation algorithms. We present two algorithms based on graph cuts that efficiently find a local minimum with respect to two types of large moves, namely expansion moves and swap moves. These moves can simultaneously change the labels of arbitrarily large sets of pixels. In contrast, many standard algorithms (including simulated annealing) use small moves where only one pixel changes its label at a time. Our expansion algorithm finds a labeling within a known factor of the global minimum, while our swap algorithm handles more general energy functions. Both of these algorithms allow important cases of discontinuity preserving energies. We experimentally demonstrate the effectiveness of our approach for image restoration, stereo and motion. On real data with ground truth, we achieve 98 percent accuracy.", "In the last few years, several new algorithms based on graph cuts have been developed to solve energy minimization problems in computer vision. Each of these techniques constructs a graph such that the minimum cut on the graph also minimizes the energy. Yet, because these graph constructions are complex and highly specific to a particular energy function, graph cuts have seen limited application to date. In this paper, we give a characterization of the energy functions that can be minimized by graph cuts. 
Our results are restricted to functions of binary variables. However, our work generalizes many previous constructions and is easily applicable to vision problems that involve large numbers of labels, such as stereo, motion, image restoration, and scene reconstruction. We give a precise characterization of what energy functions can be minimized using graph cuts, among the energy functions that can be written as a sum of terms containing three or fewer binary variables. We also provide a general-purpose construction to minimize such an energy function. Finally, we give a necessary condition for any energy function of binary variables to be minimized by graph cuts. Researchers who are considering the use of graph cuts to optimize a particular energy function can use our results to determine if this is possible and then follow our construction to create the appropriate graph. A software implementation is freely available.", "", "We propose a novel spatially continuous framework for convex relaxations based on functional lifting. Our method can be interpreted as a sublabel-accurate solution to multilabel problems. We show that previously proposed functional lifting methods optimize an energy which is linear between two labels and hence require (often infinitely) many labels for a faithful approximation. In contrast, the proposed formulation is based on a piecewise convex approximation and therefore needs far fewer labels. In comparison to recent MRF-based approaches, our method is formulated in a spatially continuous setting and shows less grid bias. Moreover, in a local sense, our formulation is the tightest possible convex relaxation. It is easy to implement and allows an efficient primal-dual optimization on GPUs. We show the effectiveness of our approach on several computer vision problems.", "In the work of the authors (2003), we showed that graph cuts can find hypersurfaces of globally minimal length (or area) under any Riemannian metric. 
Here we show that graph cuts on directed regular grids can approximate a significantly more general class of continuous non-symmetric metrics. Using the submodularity condition (Boros and Hammer, 2002 and Kolmogorov and Zabih, 2004), we obtain a tight characterization of graph-representable metrics. Such "submodular" metrics have an elegant geometric interpretation via hypersurface functionals combining length/area and flux. Practically speaking, we extend the 'geo-cuts' algorithm to a wider class of geometrically motivated hypersurface functionals and show how to globally optimize any combination of length/area and flux of a given vector field. The concept of flux was recently introduced into computer vision by Vasilevskiy and Siddiqi (2002) but it was mainly studied within the variational framework so far. We are the first to show that flux can be integrated into graph cuts as well. Combining the geometric concepts of flux and length/area within the global optimization framework of graph cuts allows principled discrete segmentation models and advances the state of the art for graph cuts methods in vision. In particular we address the "shrinking" problem of graph cuts, improve segmentation of long thin objects, and introduce useful shape constraints.", "A new framework is presented for both understanding and developing graph-cut-based combinatorial algorithms suitable for the approximate optimization of a very wide class of Markov random fields (MRFs) that are frequently encountered in computer vision. The proposed framework utilizes tools from the duality theory of linear programming in order to provide an alternative and more general view of state-of-the-art techniques like the alpha-expansion algorithm, which is included merely as a special case. Moreover, contrary to alpha-expansion, the derived algorithms generate solutions with guaranteed optimality properties for a much wider class of problems, for example, even for MRFs with nonmetric potentials. 
In addition, they are capable of providing per-instance suboptimality bounds in all occasions, including discrete MRFs with an arbitrary potential function. These bounds prove to be very tight in practice (that is, very close to 1), which means that the resulting solutions are almost optimal. Our algorithms' effectiveness is demonstrated by presenting experimental results on a variety of low-level vision tasks, such as stereo matching, image restoration, image completion, and optical flow estimation, as well as on synthetic problems.", "Multilabel problems are of fundamental importance in computer vision and image analysis. Yet, finding global minima of the associated energies is typically a hard computational challenge. Recently, progress has been made by reverting to spatially continuous formulations of respective problems and solving the arising convex relaxation globally. In practice this leads to solutions which are either optimal or within an a posteriori bound of the optimum. Unfortunately, in previous methods, both run time and memory requirements scale linearly in the total number of labels, making these methods very inefficient and often not applicable to problems with higher dimensional label spaces. In this paper, we propose a reduction technique for the case that the label space is a continuous product space and the regularizer is separable, i.e., a sum of regularizers for each dimension of the label space. In typical real-world labeling problems, the resulting convex relaxation requires orders of magnitude less memory and c...", "Optimization techniques based on graph cuts have become a standard tool for many vision applications. These techniques allow to minimize efficiently certain energy functions corresponding to pairwise Markov random fields (MRFs). Currently, there is an accepted view within the computer vision community that graph cuts can only be used for optimizing a limited class of MRF energies (e.g., submodular functions). 
In this survey, we review some results that show that graph cuts can be applied to a much larger class of energy functions (in particular, nonsubmodular functions). While these results are well-known in the optimization community, to our knowledge they were not used in the context of computer vision and MRF optimization. We demonstrate the relevance of these results to vision on the problem of binary texture restoration.", "We study convex relaxations of the image labeling problem on a continuous domain with regularizers based on metric interaction potentials. The generic framework ensures existence of minimizers and covers a wide range of relaxations of the original combinatorial problem. We focus on two specific relaxations that differ in flexibility and simplicity—one can be used to tightly relax any metric interaction potential, while the other covers only Euclidean metrics but requires less computational effort. For solving the nonsmooth discretized problem, we propose a globally convergent Douglas-Rachford scheme and show that a sequence of dual iterates can be recovered in order to provide a posteriori optimality bounds. In a quantitative comparison to two other first-order methods, the approach shows competitive performance on synthetic and real-world images. By combining the method with an improved rounding technique for nonstandard potentials, we were able to routinely recover integral solutions within @math - @math of the global optimum for the combinatorial image labeling problem.", "While total variation is among the most popular regularizers for variational problems, its extension to functions with values in a manifold is an open problem. In this paper, we propose the first algorithm to solve such problems which applies to arbitrary Riemannian manifolds. The key idea is to reformulate the variational problem as a multilabel optimization problem with an infinite number of labels. 
This leads to a hard optimization problem which can be approximately solved using convex relaxation techniques. The framework can be easily adapted to different manifolds including spheres and three-dimensional rotations, and allows to obtain accurate solutions even with a relatively coarse discretization. With numerous examples we demonstrate that the proposed framework can be applied to variational models that incorporate chromaticity values, normal fields, or camera trajectories.", "", "", "In this work, we present a unified view on Markov random fields (MRFs) and recently proposed continuous tight convex relaxations for multilabel assignment in the image plane. These relaxations are far less biased toward the grid geometry than Markov random fields on grids. It turns out that the continuous methods are nonlinear extensions of the well-established local polytope MRF relaxation. In view of this result, a better understanding of these tight convex relaxations in the discrete setting is obtained. Further, a wider range of optimization methods is now applicable to find a minimizer of the tight formulation. We propose two methods to improve the efficiency of minimization. One uses a weaker, but more efficient continuously inspired approach as initialization and gradually refines the energy where it is necessary. The other one reformulates the dual energy enabling smooth approximations to be used for efficient optimization. We demonstrate the utility of our proposed minimization schemes in numerical experiments. Finally, we generalize the underlying energy formulation from isotropic metric smoothness costs to arbitrary nonmetric and orientation dependent smoothness terms." ] }
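The claim in the related work above, that binary labelling problems with submodular pairwise costs can be solved globally by graph cuts, can be illustrated end to end with a small max-flow/min-cut computation. The sketch below is a plain-Python Edmonds-Karp solver with Potts pairwise terms, not any of the cited implementations: terminal edges carry the unary costs, pairwise edges carry the smoothness weight, and a node takes label 0 exactly when it remains on the source side of the minimum cut.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a dense capacity matrix (modified in place).
    Returns (flow value, source-side reachability in the residual graph)."""
    n = len(cap)
    total = 0.0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        # bottleneck capacity along the path, then push the flow
        aug, v = float("inf"), t
        while v != s:
            aug = min(aug, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= aug
            cap[v][parent[v]] += aug
            v = parent[v]
        total += aug
    # nodes still reachable from s form the source side of a minimum cut
    reach = [False] * n
    reach[s] = True
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if not reach[v] and cap[u][v] > 0:
                reach[v] = True
                q.append(v)
    return total, reach

def binary_labelling(unary, edges, lam):
    """Globally optimal binary MRF labelling with Potts terms lam*[x_i != x_j].
    unary[i] = (cost of label 0, cost of label 1); edges = list of (i, j)."""
    n = len(unary)
    s, t = n, n + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i, (c0, c1) in enumerate(unary):
        cap[s][i] = c1  # this edge is cut when i ends up with label 1
        cap[i][t] = c0  # this edge is cut when i ends up with label 0
    for i, j in edges:
        cap[i][j] += lam
        cap[j][i] += lam
    _, reach = max_flow(cap, s, t)
    # source side of the cut -> label 0, sink side -> label 1
    return [0 if reach[i] else 1 for i in range(n)]
```

The cut value equals the labelling energy by construction, which is why the minimum cut yields the exact global optimum; the multi-label methods cited above (alpha-expansion, Ishikawa's construction) reduce their problems to a sequence of, or one larger, such binary cut.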
1611.05503
2949200298
Despite recent advances in multi-scale deep representations, their limitations are attributed to expensive parameters and weak fusion modules. Hence, we propose an efficient approach to fuse multi-scale deep representations, called convolutional fusion networks (CFN). Owing to using 1 @math 1 convolution and global average pooling, CFN can efficiently generate the side branches while adding few parameters. In addition, we present a locally-connected fusion module, which can learn adaptive weights for the side branches and form a discriminatively fused feature. CFN models trained on the CIFAR and ImageNet datasets demonstrate remarkable improvements over the plain CNNs. Furthermore, we generalize CFN to three new tasks, including scene recognition, fine-grained recognition and image retrieval. Our experiments show that it can obtain consistent improvements towards the transferring tasks.
In CNNs, intermediate layers can capture complementary information to the top-most layers. For example, Ng et al. @cite_25 employed features from different intermediate layers and encoded them with the VLAD scheme. Similarly, Cimpoi et al. @cite_18 and Wei et al. @cite_23 made use of Fisher Vectors to encode intermediate activations. Moreover, Liu et al. @cite_24 and Babenko et al. @cite_16 aggregated several intermediate activations to generate a more discriminative and expressive image descriptor. Based on intermediate layers, these methods are able to achieve promising performance on their tasks, as compared to using only the fully-connected layers.
{ "cite_N": [ "@cite_18", "@cite_24", "@cite_23", "@cite_16", "@cite_25" ], "mid": [ "", "1960777822", "2244122599", "1833123814", "1922773808" ], "abstract": [ "", "A number of recent studies have shown that a Deep Convolutional Neural Network (DCNN) pretrained on a large dataset can be adopted as a universal image descriptor, and that doing so leads to impressive performance at a range of image classification tasks. Most of these studies, if not all, adopt activations of the fully-connected layer of a DCNN as the image or region representation and it is believed that convolutional layer activations are less discriminative. This paper, however, advocates that if used appropriately, convolutional layer activations constitute a powerful image representation. This is achieved by adopting a new technique proposed in this paper called cross-convolutional-layer pooling. More specifically, it extracts subarrays of feature maps of one convolutional layer as local features, and pools the extracted features with the guidance of the feature maps of the successive convolutional layer. Compared with existing methods that apply DCNNs in the similar local feature setting, the proposed method avoids the input image style mismatching issue which is usually encountered when applying fully connected layer activations to describe local regions. Also, the proposed method is easier to implement since it is codebook free and does not have any tuning parameters. By applying our method to four popular visual classification tasks, it is demonstrated that the proposed method can achieve comparable or in some cases significantly better performance than existing fully-connected layer based image representations.", "Semantic event recognition based only on image-based cues is a challenging problem in computer vision. 
In order to capture rich information and exploit important cues like human poses, human garments and scene categories, we propose the Deep Spatial Pyramid Ensemble framework, which is mainly based on our previous work, i.e., Deep Spatial Pyramid (DSP). DSP could build universal and powerful image representations from CNN models. Specifically, we employ five deep networks trained on different data sources to extract five corresponding DSP representations for event recognition images. For combining the complementary information from different DSP representations, we ensemble these features by both \"early fusion\" and \"late fusion\". Finally, based on the proposed framework, we come up with a solution for the track of the Cultural Event Recognition competition at the ChaLearn Looking at People (LAP) challenge in association with ICCV 2015. Our framework achieved one of the best cultural event recognition performance in this challenge.", "Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It has also been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregation approaches developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptors. In this paper we investigate possible ways to aggregate local deep features to produce compact global descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides arguably the best performance for deep convolutional features. 
This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably.", "Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks." ] }
1611.05503
2949200298
Despite recent advances in multi-scale deep representations, their limitations are attributed to expensive parameters and weak fusion modules. Hence, we propose an efficient approach to fuse multi-scale deep representations, called convolutional fusion networks (CFN). Owing to using 1 @math 1 convolution and global average pooling, CFN can efficiently generate the side branches while adding few parameters. In addition, we present a locally-connected fusion module, which can learn adaptive weights for the side branches and form a discriminatively fused feature. CFN models trained on the CIFAR and ImageNet datasets demonstrate remarkable improvements over the plain CNNs. Furthermore, we generalize CFN to three new tasks, including scene recognition, fine-grained recognition and image retrieval. Our experiments show that it can obtain consistent improvements on these transfer tasks.
To incorporate intermediate outputs explicitly during training, multi-scale fusion has been proposed to train multi-scale deep neural networks @cite_9 @cite_30 @cite_0 . A related work, @cite_9 , built a DAG-CNN model that sums up the multi-scale predictions from intermediate layers. However, DAG-CNNs require a large number of additional parameters, and their fusion module (i.e., sum-pooling) fails to consider the relative importance of the side branches. In contrast, our CFN can learn adaptive weights for fusing the side branches while adding few parameters.
{ "cite_N": [ "@cite_30", "@cite_9", "@cite_0" ], "mid": [ "1903029394", "300523764", "" ], "abstract": [ "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "We explore multi-scale convolutional neural nets (CNNs) for image classification. Contemporary approaches extract features from a single output layer. By extracting features from multiple layers, one can simultaneously reason about high, mid, and low-level features during classification. The resulting multi-scale architecture can itself be seen as a feed-forward model that is structured as a directed acyclic graph (DAG-CNNs). We use DAG-CNNs to learn a set of multi-scale features that can be effectively shared between coarse and fine-grained classification tasks. 
While fine-tuning such models helps performance, we show that even \"off-the-shelf\" multi-scale features perform quite well. We present extensive analysis and demonstrate state-of-the-art classification performance on three standard scene benchmarks (SUN397, MIT67, and Scene15). In terms of the heavily benchmarked MIT67 and Scene15 datasets, our results reduce the lowest previously-reported error by 23.9% and 9.5%, respectively.", "" ] }
1611.05250
2951570301
Convolutional neural networks have enabled accurate image super-resolution in real-time. However, recent attempts to benefit from temporal correlations in video super-resolution have been limited to naive or inefficient architectures. In this paper, we introduce spatio-temporal sub-pixel convolution networks that effectively exploit temporal redundancies and improve reconstruction accuracy while maintaining real-time speed. Specifically, we discuss the use of early fusion, slow fusion and 3D convolutions for the joint processing of multiple consecutive video frames. We also propose a novel joint motion compensation and video super-resolution algorithm that is orders of magnitude more efficient than competing methods, relying on a fast multi-resolution spatial transformer module that is end-to-end trainable. These contributions provide both higher accuracy and temporally more consistent videos, which we confirm qualitatively and quantitatively. Relative to single-frame models, spatio-temporal networks can either reduce the computational cost by 30% whilst maintaining the same quality or provide a 0.2dB gain for a similar computational cost. Results on publicly available datasets demonstrate that the proposed algorithms surpass current state-of-the-art performance in both accuracy and efficiency.
Video SR methods have mainly emerged as adaptations of image SR techniques. Kernel regression methods @cite_3 have been shown to be applicable to videos using 3D kernels instead of 2D ones @cite_4 . Dictionary learning approaches, which define LR images as a sparse linear combination of dictionary atoms coupled to an HR dictionary, have also been adapted from images @cite_1 to videos @cite_43 . Another approach is example-based patch recurrence, which assumes patches in a single image or video obey multi-scale relationships, and therefore missing high-frequency content at a given scale can be inferred from coarser-scale patches. This was successfully presented by @cite_15 for image SR and was later extended to videos @cite_31 .
{ "cite_N": [ "@cite_31", "@cite_4", "@cite_1", "@cite_3", "@cite_43", "@cite_15" ], "mid": [ "", "2275385910", "2088254198", "2006262236", "2295865687", "2534320940" ], "abstract": [ "", "Traditional methods for motion estimation estimate the motion field F between a pair of images as the one that minimizes a predesigned cost function. In this paper, we propose a direct method and train a Convolutional Neural Network (CNN) that when, at test time, is given a pair of images as input it produces a dense motion field F at its output layer. In the absence of large datasets with ground truth motion that would allow classical supervised training, we propose to train the network in an unsupervised manner. The proposed cost function that is optimized during training, is based on the classical optical flow constraint. The latter is differentiable with respect to the motion field and, therefore, allows backpropagation of the error to previous layers of the network. Our method is tested on both synthetic and real image sequences and performs similarly to the state-of-the-art methods.", "In this paper, we propose a novel coupled dictionary training method for single-image super-resolution (SR) based on patchwise sparse recovery, where the learned couple dictionaries relate the low- and high-resolution (HR) image patch spaces via sparse representation. The learning process enforces that the sparse representation of a low-resolution (LR) image patch in terms of the LR dictionary can well reconstruct its underlying HR image patch with the dictionary in the high-resolution image patch space. We model the learning problem as a bilevel optimization problem, where the optimization includes an l1-norm minimization problem in its constraints. Implicit differentiation is employed to calculate the desired gradient for stochastic gradient descent. 
We demonstrate that our coupled dictionary learning method can outperform the existing joint dictionary training method both quantitatively and qualitatively. Furthermore, for real applications, we speed up the algorithm approximately 10 times by learning a neural network model for fast sparse inference and selectively processing only those visually salient regions. Extensive experimental comparisons with state-of-the-art SR algorithms validate the effectiveness of our proposed approach.", "In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples", "", "Methods for super-resolution can be broadly classified into two families of methods: (i) The classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) Example-Based super-resolution (learning correspondence between low and high resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods. We further show how this combined approach can be applied to obtain super resolution from as little as a single image (with no database or prior examples). Our approach is based on the observation that patches in a natural image tend to redundantly recur many times inside the image, both within the same scale, as well as across different scales. 
Recurrence of patches within the same image scale (at subpixel misalignments) gives rise to the classical super-resolution, whereas recurrence of patches across different scales of the same image gives rise to example-based super-resolution. Our approach attempts to recover at each pixel its best possible resolution increase based on its patch redundancy within and across scales." ] }
1611.05250
2951570301
Convolutional neural networks have enabled accurate image super-resolution in real-time. However, recent attempts to benefit from temporal correlations in video super-resolution have been limited to naive or inefficient architectures. In this paper, we introduce spatio-temporal sub-pixel convolution networks that effectively exploit temporal redundancies and improve reconstruction accuracy while maintaining real-time speed. Specifically, we discuss the use of early fusion, slow fusion and 3D convolutions for the joint processing of multiple consecutive video frames. We also propose a novel joint motion compensation and video super-resolution algorithm that is orders of magnitude more efficient than competing methods, relying on a fast multi-resolution spatial transformer module that is end-to-end trainable. These contributions provide both higher accuracy and temporally more consistent videos, which we confirm qualitatively and quantitatively. Relative to single-frame models, spatio-temporal networks can either reduce the computational cost by 30% whilst maintaining the same quality or provide a 0.2dB gain for a similar computational cost. Results on publicly available datasets demonstrate that the proposed algorithms surpass current state-of-the-art performance in both accuracy and efficiency.
When adapting a method from images to videos it is usually beneficial to incorporate the prior knowledge that frames of the same scene of a video can be approximated by a single image and a motion pattern. Estimating and compensating motion is a powerful mechanism to further constrain the problem and expose temporal correlations. It is therefore very common to find video SR methods that explicitly model motion through frames. A natural choice has been to preprocess input frames by compensating inter-frame motion using displacement fields obtained from off-the-shelf optical flow algorithms @cite_4 . This nevertheless requires frame preprocessing and is usually expensive. Alternatively, motion compensation can also be performed jointly with the SR task, as done in the Bayesian approach of @cite_39 by iteratively estimating motion as part of its wider modeling of the downscaling process.
{ "cite_N": [ "@cite_4", "@cite_39" ], "mid": [ "2275385910", "1981990039" ], "abstract": [ "Traditional methods for motion estimation estimate the motion field F between a pair of images as the one that minimizes a predesigned cost function. In this paper, we propose a direct method and train a Convolutional Neural Network (CNN) that when, at test time, is given a pair of images as input it produces a dense motion field F at its output layer. In the absence of large datasets with ground truth motion that would allow classical supervised training, we propose to train the network in an unsupervised manner. The proposed cost function that is optimized during training, is based on the classical optical flow constraint. The latter is differentiable with respect to the motion field and, therefore, allows backpropagation of the error to previous layers of the network. Our method is tested on both synthetic and real image sequences and performs similarly to the state-of-the-art methods.", "Although multi-frame super resolution has been extensively studied in past decades, super resolving real-world video sequences still remains challenging. In existing systems, either the motion models are oversimplified, or important factors such as blur kernel and noise level are assumed to be known. Such models cannot deal with the scene and imaging conditions that vary from one sequence to another. In this paper, we propose a Bayesian approach to adaptive video super resolution via simultaneously estimating underlying motion, blur kernel and noise level while reconstructing the original high-res frames. As a result, our system not only produces very promising super resolution results that outperform the state of the art, but also adapts to a variety of noise levels and blur kernels. Theoretical analysis of the relationship between blur kernel, noise level and frequency-wise reconstruction rate is also provided, consistent with our experimental results." ] }
1611.05250
2951570301
Convolutional neural networks have enabled accurate image super-resolution in real-time. However, recent attempts to benefit from temporal correlations in video super-resolution have been limited to naive or inefficient architectures. In this paper, we introduce spatio-temporal sub-pixel convolution networks that effectively exploit temporal redundancies and improve reconstruction accuracy while maintaining real-time speed. Specifically, we discuss the use of early fusion, slow fusion and 3D convolutions for the joint processing of multiple consecutive video frames. We also propose a novel joint motion compensation and video super-resolution algorithm that is orders of magnitude more efficient than competing methods, relying on a fast multi-resolution spatial transformer module that is end-to-end trainable. These contributions provide both higher accuracy and temporally more consistent videos, which we confirm qualitatively and quantitatively. Relative to single-frame models, spatio-temporal networks can either reduce the computational cost by 30% whilst maintaining the same quality or provide a 0.2dB gain for a similar computational cost. Results on publicly available datasets demonstrate that the proposed algorithms surpass current state-of-the-art performance in both accuracy and efficiency.
The advent of neural network techniques that can be trained from data to approximate complex nonlinear functions has set new performance standards in many applications, including SR. @cite_13 proposed to use a CNN architecture for single-image SR that was later extended by @cite_8 into a video SR network (VSRnet) which jointly processes multiple input frames. Additionally, compensating the motion of the input images with a TV-based optical flow algorithm showed improved accuracy. Joint motion compensation for SR with neural networks has also been studied through recurrent bidirectional networks @cite_34 .
{ "cite_N": [ "@cite_34", "@cite_13", "@cite_8" ], "mid": [ "2184360182", "1885185971", "2320725294" ], "abstract": [ "Super resolving a low-resolution video is usually handled by either single-image super-resolution (SR) or multi-frame SR. Single-Image SR deals with each video frame independently, and ignores intrinsic temporal dependency of video frames which actually plays a very important role in video super-resolution. Multi-Frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, which often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term contextual information of temporal sequences well, we propose a bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly-used recurrent full connections are replaced with weight-sharing convolutional connections and 2) conditional convolutional connections from previous input layers to the current hidden layer are added for enhancing visual-temporal dependency modelling. With the powerful temporal dependency modelling, our model can super resolve videos with complex motions and achieve state-of-the-art performance. Due to the cheap convolution operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame methods.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. 
Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.", "Convolutional neural networks (CNN) are a special type of deep neural networks (DNN). They have so far been successfully applied to image super-resolution (SR) as well as other image restoration tasks. In this paper, we consider the problem of video super-resolution. We propose a CNN that is trained on both the spatial and the temporal dimensions of videos to enhance their spatial resolution. Consecutive frames are motion compensated and used as input to a CNN that provides super-resolved video frames as output. We investigate different options of combining the video frames within one CNN architecture. While large image databases are available to train deep neural networks, it is more challenging to create a large video database of sufficient quality to train neural nets for video restoration. We show that by using images to pretrain our model, a relatively small video database is sufficient for the training of our model to achieve and even improve upon the current state-of-the-art. We compare our proposed approach to current video as well as image SR algorithms." ] }
1611.05088
2950652153
Zero-shot learning (ZSL) models rely on learning a joint embedding space where both textual semantic descriptions of object classes and visual representations of object images can be projected for nearest neighbour search. Despite the success of deep neural networks that learn an end-to-end model between text and images in other vision problems such as image captioning, very few deep ZSL models exist and they show little advantage over ZSL models that utilise deep feature representations but do not learn an end-to-end embedding. In this paper we argue that the key to making deep ZSL models succeed is to choose the right embedding space. Instead of embedding into a semantic space or an intermediate space, we propose to use the visual space as the embedding space. This is because, in this space, the subsequent nearest neighbour search suffers much less from the hubness problem and thus becomes more effective. This model design also provides a natural mechanism for multiple semantic modalities (e.g., attributes and sentence descriptions) to be fused and optimised jointly in an end-to-end manner. Extensive experiments on four benchmarks show that our model significantly outperforms the existing models.
Multiple semantic spaces are often complementary to each other; fusing them can thus potentially lead to improvements in recognition performance. Score-level fusion is perhaps the simplest strategy @cite_59 . More sophisticated multi-view embedding models have been proposed. @cite_24 learn a joint semantic embedding space between attributes, text, and hierarchical relationships, which relies heavily on hyperparameter search. Multi-view canonical correlation analysis (CCA) has also been employed @cite_55 to explore different modalities of the test data in a transductive way. Differing from these models, our neural-network-based model has an embedding layer to fuse different semantic spaces and connects the fused representation with the rest of the visual-semantic embedding network for end-to-end learning. Unlike @cite_55 , it is inductive and does not require access to the whole test set at once.
{ "cite_N": [ "@cite_24", "@cite_55", "@cite_59" ], "mid": [ "2044913453", "43954826", "1960364170" ], "abstract": [ "Image classification has advanced significantly in recent years with the availability of large-scale image sets. However, fine-grained classification remains a major challenge due to the annotation cost of large numbers of fine-grained categories. This project shows that compelling classification performance can be achieved on such categories even without labeled training data. Given image and class embeddings, we learn a compatibility function such that matching embeddings are assigned a higher score than mismatching ones; zero-shot classification of an image proceeds by finding the label yielding the highest joint compatibility score. We use state-of-the-art image features and focus on different supervised attributes and unsupervised output embeddings either derived from hierarchies or learned from unlabeled text corpora. We establish a substantially improved state-of-the-art on the Animals with Attributes and Caltech-UCSD Birds datasets. Most encouragingly, we demonstrate that purely unsupervised output embeddings (learned from Wikipedia and improved with finegrained text) achieve compelling results, even outperforming the previous supervised state-of-the-art. By combining different output embeddings, we further improve results.", "Most existing zero-shot learning approaches exploit transfer learning via an intermediate-level semantic representation such as visual attributes or semantic word vectors. Such a semantic representation is shared between an annotated auxiliary dataset and a target dataset with no annotation. A projection from a low-level feature space to the semantic space is learned from the auxiliary dataset and is applied without adaptation to the target dataset. In this paper we identify an inherent limitation with this approach. 
That is, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset domain are biased when applied directly to the target dataset domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding, to solve it. It is ‘transductive’ in that unlabelled target data points are explored for projection adaptation, and ‘multi-view’ in that both low-level feature (view) and multiple semantic representations (views) are embedded to rectify the projection shift. We demonstrate through extensive experiments that our framework (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) achieves state-of-the-art recognition results on image and video benchmark datasets, and (4) enables novel cross-view annotation tasks.", "Object recognition by zero-shot learning (ZSL) aims to recognise objects without seeing any visual examples by learning knowledge transfer between seen and unseen object classes. This is typically achieved by exploring a semantic embedding space such as attribute space or semantic word vector space. In such a space, both seen and unseen class labels, as well as image features can be embedded (projected), and the similarity between them can thus be measured directly. Existing works differ in what embedding space is used and how to project the visual data into the semantic embedding space. Yet, they all measure the similarity in the space using a conventional distance metric (e.g. cosine) that does not consider the rich intrinsic structure, i.e. semantic manifold, of the semantic categories in the embedding space. In this paper we propose to model the semantic manifold in an embedding space using a semantic class label graph. The semantic manifold structure is used to redefine the distance metric in the semantic embedding space for more effective ZSL. 
The proposed semantic manifold distance is computed using a novel absorbing Markov chain process (AMP), which has a very efficient closed-form solution. The proposed new model improves upon and seamlessly unifies various existing ZSL algorithms. Extensive experiments on both the large scale ImageNet dataset and the widely used Animal with Attribute (AwA) dataset show that our model outperforms significantly the state-of-the-arts." ] }
1611.05088
2950652153
Zero-shot learning (ZSL) models rely on learning a joint embedding space where both textual semantic descriptions of object classes and visual representations of object images can be projected for nearest neighbour search. Despite the success of deep neural networks that learn an end-to-end model between text and images in other vision problems such as image captioning, very few deep ZSL models exist, and they show little advantage over ZSL models that utilise deep feature representations but do not learn an end-to-end embedding. In this paper we argue that the key to making deep ZSL models succeed is to choose the right embedding space. Instead of embedding into a semantic space or an intermediate space, we propose to use the visual space as the embedding space. This is because in this space the subsequent nearest neighbour search suffers much less from the hubness problem and thus becomes more effective. This model design also provides a natural mechanism for multiple semantic modalities (e.g., attributes and sentence descriptions) to be fused and optimised jointly in an end-to-end manner. Extensive experiments on four benchmarks show that our model significantly outperforms the existing models.
The hubness problem The phenomenon of the presence of 'universal' neighbours, or hubs, in a high-dimensional space for nearest neighbour search was first studied by @cite_20 . They show that hubness is an inherent property of data distributions in a high-dimensional vector space, and a specific aspect of the curse of dimensionality. A couple of recent studies @cite_19 @cite_52 noted that regression-based zero-shot learning methods suffer from the hubness problem and proposed solutions to mitigate it. Among them, the method in @cite_19 relies on modelling the global distribution of the ranks of unseen test data w.r.t. each class prototype to ease the hubness problem. It is thus transductive. In contrast, the method in @cite_52 is inductive: it argued that least-squares regularised projection functions make the hubness problem worse and proposed to perform reverse regression, i.e., embedding class prototypes into the visual feature space. Our model also uses the visual feature space as the embedding space but achieves this by using an end-to-end deep neural network, which yields far superior performance on ZSL.
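The reverse-regression idea can be sketched as follows; this is a toy illustration with synthetic data, not the implementation from @cite_52: ridge-regress the class prototypes into the visual space and run nearest-neighbour search there, where hubness is milder.

```python
import numpy as np

# Toy sketch of reverse regression (synthetic data): instead of mapping
# visual features to the semantic space, regress the class prototypes INTO
# the visual space and do nearest-neighbour search there.
rng = np.random.default_rng(0)
n_classes, d_sem, d_vis = 10, 20, 50
S = rng.normal(size=(n_classes, d_sem))                     # semantic prototypes
W_true = rng.normal(size=(d_sem, d_vis))
X = S @ W_true + 0.1 * rng.normal(size=(n_classes, d_vis))  # visual class means

lam = 1.0
# Ridge solution of min ||S W - X||^2 + lam ||W||^2.
W = np.linalg.solve(S.T @ S + lam * np.eye(d_sem), S.T @ X)
proto_vis = S @ W                           # prototypes embedded in visual space

# Classify a query visual feature by its nearest prototype in the visual space.
x_query = X[3] + 0.05 * rng.normal(size=d_vis)
pred = int(np.argmin(np.linalg.norm(proto_vis - x_query, axis=1)))
```

The end-to-end model discussed above replaces the ridge mapping with a deep network, but the embedding direction (semantic into visual) is the same.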
{ "cite_N": [ "@cite_19", "@cite_52", "@cite_20" ], "mid": [ "1542713999", "1492420801", "2250646737" ], "abstract": [ "The zero-shot paradigm exploits vector-based word representations extracted from text corpora with unsupervised methods to learn general mapping functions from other feature spaces onto word space, where the words associated to the nearest neighbours of the mapped vectors are used as their linguistic labels. We show that the neighbourhoods of the mapped elements are strongly polluted by hubs, vectors that tend to be near a high proportion of items, pushing their correct labels down the neighbour list. After illustrating the problem empirically, we propose a simple method to correct it by taking the proximity distribution of potential neighbours across many mapped vectors into account. We show that this correction leads to consistent improvements in realistic zero-shot experiments in the cross-lingual, image labeling and image retrieval domains.", "This paper discusses the effect of hubness in zero-shot learning, when ridge regression is used to find a mapping between the example space to the label space. Contrary to the existing approach, which attempts to find a mapping from the example space to the label space, we show that mapping labels into the example space is desirable to suppress the emergence of hubs in the subsequent nearest neighbor search step. Assuming a simple data model, we prove that the proposed approach indeed reduces hubness. 
This was verified empirically on the tasks of bilingual lexicon extraction and image labeling: hubness was reduced with both of these tasks and the accuracy was improved accordingly.", "Zero-shot methods in language, vision and other domains rely on a cross-space mapping function that projects vectors from the relevant feature space (e.g., visualfeature-based image representations) to a large semantic word space (induced in an unsupervised way from corpus data), where the entities of interest (e.g., objects images depict) are labeled with the words associated to the nearest neighbours of the mapped vectors. Zero-shot cross-space mapping methods hold great promise as a way to scale up annotation tasks well beyond the labels in the training data (e.g., recognizing objects that were never seen in training). However, the current performance of cross-space mapping functions is still quite low, so that the strategy is not yet usable in practical applications. In this paper, we explore some general properties, both theoretical and empirical, of the cross-space mapping function, and we build on them to propose better methods to estimate it. In this way, we attain large improvements over the state of the art, both in cross-linguistic (word translation) and cross-modal (image labeling) zero-shot experiments." ] }
1611.05012
2951985495
The problem of multi-area interchange scheduling under system uncertainty is considered. A new scheduling technique is proposed for a multi-proxy bus system based on stochastic optimization that captures uncertainty in renewable generation and stochastic load. In particular, the proposed algorithm iteratively optimizes the interface flows using multidimensional demand and supply functions. Optimality and convergence are guaranteed for both synchronous and asynchronous scheduling under nominal assumptions.
The second category includes the current industrial practices based on the so-called proxy bus approximation @cite_8 @cite_14 @cite_2 . The proxy bus is a trading location at which market participants can buy and sell electricity. In @cite_14 , a coordinated interchange scheduling scheme is proposed for the co-optimization of energy and ancillary services. The proposal of coordinated transaction scheduling (CTS) in @cite_8 is a state-of-the-art scheduling technique based on an economic argument using supply and demand functions exchanged by the neighboring operators. When there is only a single interface in a two-area system, such functions can be succinctly characterized, and the exchange is made only once; the need for iterations among operators is eliminated. Built upon the idea of CTS, a stochastic CTS for the two-area single-interface scheduling problem is proposed in @cite_2 .
{ "cite_N": [ "@cite_14", "@cite_2", "@cite_8" ], "mid": [ "1986109627", "2962920885", "" ], "abstract": [ "This paper presents a joint market structure for energy, spinning reserves and VAR support in a multiarea setting. It is based on a cooptimization that can simultaneously optimize all three commodities across the \"seams\". An auxiliary problem principle based decomposition scheme is applied to the overall optimization for coordinating interchanges of energy and ancillary services between control areas. The proposed decomposition approach preserves independent dispatching for neighboring areas while achieves overall optimum. Nodal prices for energy and opportunity cost payments to forgone energy profit due to providing reserves and VAR support are also addressed. We believe the algorithm is of particular interest in the restructuring electricity industry for resolving seams issues. An illustrative example of a modified IEEE 30-bus system is used to demonstrate the validity of proposed algorithm.", "The problem of inter-regional interchange scheduling in the presence of stochastic generation and load is considered. An interchange scheduling technique based on a two-stage stochastic minimization of expected operating cost is proposed. Because directly solving the stochastic optimization is intractable, an equivalent problem that maximizes the expected social welfare is formulated. The proposed technique leverages the operator's capability of forecasting locational marginal prices and obtains the optimal interchange schedule without iterations among operators. Several extensions of the proposed technique are also discussed.", "" ] }
1611.05012
2951985495
The problem of multi-area interchange scheduling under system uncertainty is considered. A new scheduling technique is proposed for a multi-proxy bus system based on stochastic optimization that captures uncertainty in renewable generation and stochastic load. In particular, the proposed algorithm iteratively optimizes the interface flows using multidimensional demand and supply functions. Optimality and convergence are guaranteed for both synchronous and asynchronous scheduling under nominal assumptions.
A shortcoming of existing techniques based on proxy bus approximations is the difficulty of generalizing them to multi-area interconnected systems where multiple scheduling interfaces have to be optimized simultaneously. The challenge arises from the fact that the interfaces cannot be succinctly characterized by a pair of expected demand and supply functions --- an essential property underlying the approach in @cite_2 for single-interface scheduling. When multiple interfaces are involved, the simple idea of equating expected demand and supply functions is not applicable, and there is no simple notion that the intersection of the demand and supply curves gives the social-welfare-optimizing interchange.
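The single-interface case that does admit the simple curve-intersection solution can be illustrated with synthetic linear curves (the numbers below are made up, not taken from the cited papers):

```python
import numpy as np

# Illustrative sketch with synthetic linear curves: in the two-area,
# single-interface case, the optimal interchange is where the exporter's
# expected supply curve meets the importer's expected demand curve.
q = np.linspace(0.0, 100.0, 1001)    # candidate interface flows (MW)
supply = 20.0 + 0.5 * q              # exporter's marginal cost ($/MWh)
demand = 70.0 - 0.3 * q              # importer's marginal benefit ($/MWh)

# The intersection maximizes the surplus, i.e. the area between the demand
# and supply curves up to the scheduled flow q*.
q_star = q[np.argmin(np.abs(supply - demand))]   # analytically 62.5 MW here
```

With multiple coupled interfaces, no such one-dimensional pair of curves exists, which is precisely the difficulty noted above.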
{ "cite_N": [ "@cite_2" ], "mid": [ "2962920885" ], "abstract": [ "The problem of inter-regional interchange scheduling in the presence of stochastic generation and load is considered. An interchange scheduling technique based on a two-stage stochastic minimization of expected operating cost is proposed. Because directly solving the stochastic optimization is intractable, an equivalent problem that maximizes the expected social welfare is formulated. The proposed technique leverages the operator's capability of forecasting locational marginal prices and obtains the optimal interchange schedule without iterations among operators. Several extensions of the proposed technique are also discussed." ] }
1611.04967
2563486500
Predictive models are increasingly deployed for the purpose of determining access to services such as credit, insurance, and employment. Despite potential gains in productivity and efficiency, several potential problems have yet to be addressed, particularly the potential for unintentional discrimination. We present an iterative procedure, based on orthogonal projection of input attributes, for enabling interpretability of black-box predictive models. Through our iterative procedure, one can quantify the relative dependence of a black-box model on its input attributes. The relative significance of the inputs to a predictive model can then be used to assess the fairness (or discriminatory extent) of such a model.
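The projection idea can be sketched as follows; this is our own simplification with a synthetic stand-in model, not the paper's exact iterative procedure: project one attribute's direction out of the inputs and measure how much the black-box output changes.

```python
import numpy as np

# Hedged sketch (synthetic data, stand-in black box): remove one attribute's
# direction from the inputs via orthogonal projection and measure the mean
# squared change in the model's output. A larger score means the model
# depends more on that attribute.
rng = np.random.default_rng(1)
n, d = 500, 4
X = rng.normal(size=(n, d))
black_box = lambda Z: 2.0 * Z[:, 0] + 0.1 * Z[:, 3]   # stand-in model

def dependence_on(X, j, model):
    v = np.zeros(X.shape[1]); v[j] = 1.0   # direction of attribute j
    X_proj = X - np.outer(X @ v, v)        # orthogonal projection: zero out j
    return np.mean((model(X) - model(X_proj)) ** 2)

scores = [dependence_on(X, j, black_box) for j in range(d)]
```

In this toy setup the score for attribute 0 dominates that of attribute 3, while the unused attributes score exactly zero; correlated attributes would need the iterative re-projection the abstract alludes to.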
As automated decision-making systems have gained widespread use in rendering decisions, researchers have begun to look at the issue of fairness and discrimination in data mining. The emerging subfield around this topic is increasingly known as discrimination-aware, or fairness-aware, data mining @cite_10 . The literature on fairness is broad, including work from social choice theory, game theory, economics, and law @cite_0 . In the computer science literature, work on identifying and studying bias in predictive models has only begun to emerge in the past few years.
{ "cite_N": [ "@cite_0", "@cite_10" ], "mid": [ "2166454173", "2026019770" ], "abstract": [ "Most of the decisions in the today's knowledge society are taken on the basis of historical data by extracting models, patterns, profiles, and rules of human behavior in support of (automated) decision making. There is then the need of developing models, methods and technologies for modelling the processes of discrimination analysis in order to discover and prevent discrimination phenomena. In this respect, discrimination analysis from data should build over the large body of existing legal and economic studies. This paper intends to provide a multi-disciplinary survey of the literature on discrimination data analysis, including methods for data collection, empirical studies, controlled experiments, statistical evidence, and their legal requirements and grounds. We cover the following mainstream research lines: labour economic models, (quasi-)experimental approaches such as auditing and controlled experiments, profiling-based approaches such as racial profiling and credit markets, and the recently blooming research on knowledge discovery approaches.", "In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Rules extracted from databases by data mining techniques, such as classification or association rules, when used for decision tasks such as benefit or credit approval, can be discriminatory in the above sense. In this paper, the notion of discriminatory classification rules is introduced and studied. Providing a guarantee of non-discrimination is shown to be a non trivial task. A naive approach, like taking away all discriminatory attributes, is shown to be not enough when other background knowledge is available. 
Our approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge. An empirical assessment of the results on the German credit dataset is also provided." ] }
1611.04967
2563486500
Predictive models are increasingly deployed for the purpose of determining access to services such as credit, insurance, and employment. Despite potential gains in productivity and efficiency, several potential problems have yet to be addressed, particularly the potential for unintentional discrimination. We present an iterative procedure, based on orthogonal projection of input attributes, for enabling interpretability of black-box predictive models. Through our iterative procedure, one can quantify the relative dependence of a black-box model on its input attributes. The relative significance of the inputs to a predictive model can then be used to assess the fairness (or discriminatory extent) of such a model.
As another class of methodologies, algorithm manipulation methods seek to augment the underlying algorithm in order to reduce discrimination. Algorithm augmentation is usually done via a penalty that adds a cost of discrimination to a model's cost function. These algorithms typically add regularizers that quantify the degree of bias. A seminal work in this area is the study by Kamishima et al. in @cite_4 , where they quantify prejudice by adding a mutual-information-based regularizer to the cost function of a logistic regression model. Since the work of Kamishima et al., more approaches that seek to change underlying cost functions with regularizers for statistical parity have emerged for other kinds of algorithms, such as decision trees and support vector machines. Techniques presented in this area typically work only for one particular method, such as logistic regression or Naive Bayes, so the overall impact can be limited. Algorithm manipulation methods also assume that the underlying predictive models are known and completely specified, with well-behaved cost functions. Usually, this is not the case, as a variety of models are typically combined in a number of ways, where it becomes difficult to untangle the cost function of the combined model.
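A minimal sketch of the regularizer-based approach follows; for simplicity it penalizes the squared gap in mean predicted score between groups (a statistical-parity surrogate), not Kamishima et al.'s exact mutual-information term, and all data is synthetic.

```python
import numpy as np

# Hedged sketch: logistic regression trained by gradient descent with an
# added fairness penalty lam * gap**2, where gap is the difference in mean
# predicted score between the two groups of the sensitive attribute s.
rng = np.random.default_rng(2)
n = 400
s = (rng.random(n) < 0.5).astype(float)            # sensitive attribute
X = np.column_stack([rng.normal(size=n), rng.normal(size=n), s])
y = (X[:, 0] + 1.5 * s + rng.normal(scale=0.5, size=n) > 0).astype(float)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def fit(lam, iters=2000, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n                   # logistic-loss gradient
        gap = p[s == 1].mean() - p[s == 0].mean()  # parity gap in scores
        dp = p * (1 - p)                           # sigmoid derivative
        dgap = (X[s == 1].T @ dp[s == 1] / (s == 1).sum()
                - X[s == 0].T @ dp[s == 0] / (s == 0).sum())
        w -= lr * (grad + lam * 2.0 * gap * dgap)  # descend loss + lam*gap**2
    return w

def parity_gap(w):
    p = sigmoid(X @ w)
    return abs(p[s == 1].mean() - p[s == 0].mean())
```

Raising `lam` trades predictive fit for a smaller gap between groups, which is exactly the accuracy/fairness trade-off these regularizers expose.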
{ "cite_N": [ "@cite_4" ], "mid": [ "2040825624" ], "abstract": [ "With the spread of data mining technologies and the accumulation of social data, such technologies and data are being used for determinations that seriously affect people's lives. For example, credit scoring is frequently determined based on the records of past credit data together with statistical prediction techniques. Needless to say, such determinations must be socially and legally fair from a viewpoint of social responsibility, namely, it must be unbiased and nondiscriminatory in sensitive features, such as race, gender, religion, and so on. Several researchers have recently begun to attempt the development of analysis techniques that are aware of social fairness or discrimination. They have shown that simply avoiding the use of sensitive features is insufficient for eliminating biases in determinations, due to the indirect influence of sensitive information. From a privacy-preserving viewpoint, this can be interpreted as hiding sensitive information when classification results are observed. In this paper, we first discuss three causes of unfairness in machine learning. We then propose a regularization approach that is applicable to any prediction algorithm with probabilistic discriminative models. We further apply this approach to logistic regression and empirically show its effectiveness and efficiency." ] }
1611.04967
2563486500
Predictive models are increasingly deployed for the purpose of determining access to services such as credit, insurance, and employment. Despite potential gains in productivity and efficiency, several potential problems have yet to be addressed, particularly the potential for unintentional discrimination. We present an iterative procedure, based on orthogonal projection of input attributes, for enabling interpretability of black-box predictive models. Through our iterative procedure, one can quantify the relative dependence of a black-box model on its input attributes. The relative significance of the inputs to a predictive model can then be used to assess the fairness (or discriminatory extent) of such a model.
In the third approach, other studies have presented work that manipulates the outcomes of predictive models towards achieving statistical parity across groups. In these cases, the algorithms presented typically change the labels produced by data mining algorithms, seeking to balance the outcomes across multiple groups. In @cite_10 , Pedreschi et al. alter the confidence of the inferred classification rules.
{ "cite_N": [ "@cite_10" ], "mid": [ "2026019770" ], "abstract": [ "In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Rules extracted from databases by data mining techniques, such as classification or association rules, when used for decision tasks such as benefit or credit approval, can be discriminatory in the above sense. In this paper, the notion of discriminatory classification rules is introduced and studied. Providing a guarantee of non-discrimination is shown to be a non trivial task. A naive approach, like taking away all discriminatory attributes, is shown to be not enough when other background knowledge is available. Our approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge. An empirical assessment of the results on the German credit dataset is also provided." ] }
1611.05170
2953051901
Over the last few years, the number of smart objects connected to the Internet has grown exponentially in comparison to the number of services and applications. The integration between Cloud Computing and the Internet of Things, named Cloud of Things, plays a key role in managing the connected things, their data, and services. One of the main challenges in Cloud of Things is the resource discovery of the smart objects and their reuse in different contexts. Most of the existing work uses some kind of multi-criteria decision analysis algorithm to perform resource discovery, but does not evaluate the impact that user constraints have on the final solution. In this paper, we analyse the behaviour of the SAW, TOPSIS, and VIKOR multi-objective decision analysis algorithms and the impact of user constraints on them. We evaluate the quality of the proposed solutions using the Pareto-optimality concept.
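Of the three MCDA methods named above, SAW (Simple Additive Weighting) is the simplest; the following sketch ranks candidate sensors with it using made-up criteria values and weights:

```python
import numpy as np

# Hedged sketch of SAW (Simple Additive Weighting); the criteria matrix and
# weights are invented for illustration.
# Rows: candidate sensors; columns: benefit criteria (higher is better).
M = np.array([[0.9, 10.0, 3.0],
              [0.7, 40.0, 8.0],
              [0.8, 25.0, 5.0]])
w = np.array([0.5, 0.3, 0.2])        # user-supplied criteria weights

# Max-normalize each criterion to [0, 1], then rank by weighted sum.
scores = (M / M.max(axis=0)) @ w
best = int(np.argmax(scores))        # index of the best candidate
```

TOPSIS and VIKOR instead rank by distance to ideal/anti-ideal solutions, which is why the three methods can disagree once user constraints prune the candidate set.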
Nowadays there are several approaches that enable smart object management. The surveys in @cite_3 and @cite_29 describe several architectures, techniques, methods, models, features, systems, applications, and middleware solutions related to the CoT context. In this section, we first present some architectures that enable the resource discovery of smart objects and then some works related to sensor discovery techniques.
{ "cite_N": [ "@cite_29", "@cite_3" ], "mid": [ "2119058983", "1968164389" ], "abstract": [ "We are observing an increasing trend of connecting embedded sensors and sensor networks to the Internet and publishing their output on the Web. We believe that this development is a precursor of a Web of Things, which gives real-world objects and places a Web presence that not only contains a static description of these entities, but also their real-time state. Just as document searches have become one of the most popular services on the Web, we argue that the search for real-world entities (i.e., people, places, and things) will become equally important. However, in contrast to the mostly static documents on the current Web, the state of real-world entities as captured by sensors is highly dynamic. Thus, searching for real-world entities with a certain state is a challenging problem. In this paper, we define the underlying problem, outline the design space of possible solutions, and survey relevant existing approaches by classifying them according to their design space. We also present a case study of a real-world search engine called Dyser designed by the authors.", "The Internet of Things (IoT) is part of the Internet of the future and will comprise billions of intelligent communicating “things” or Internet Connected Objects (ICOs) that will have sensing, actuating, and data processing capabilities. Each ICO will have one or more embedded sensors that will capture potentially enormous amounts of data. The sensors and related data streams can be clustered physically or virtually, which raises the challenge of searching and selecting the right sensors for a query in an efficient and effective way. This paper proposes a context-aware sensor search, selection, and ranking model, called CASSARAM, to address the challenge of efficiently selecting a subset of relevant sensors out of a large set of sensors with similar functionality and capabilities. 
CASSARAM considers user preferences and a broad range of sensor characteristics such as reliability, accuracy, location, battery life, and many more. This paper highlights the importance of sensor search, selection and ranking for the IoT, identifies important characteristics of both sensors and data capture processes, and discusses how semantic and quantitative reasoning can be combined together. This paper also addresses challenges such as efficient distributed sensor search and relational-expression based filtering. CASSARAM testing and performance evaluation results are presented and discussed." ] }
1611.05170
2953051901
Over the last few years, the number of smart objects connected to the Internet has grown exponentially in comparison to the number of services and applications. The integration between Cloud Computing and the Internet of Things, named Cloud of Things, plays a key role in managing the connected things, their data, and services. One of the main challenges in Cloud of Things is the resource discovery of the smart objects and their reuse in different contexts. Most of the existing work uses some kind of multi-criteria decision analysis algorithm to perform resource discovery, but does not evaluate the impact that user constraints have on the final solution. In this paper, we analyse the behaviour of the SAW, TOPSIS, and VIKOR multi-objective decision analysis algorithms and the impact of user constraints on them. We evaluate the quality of the proposed solutions using the Pareto-optimality concept.
Bovet and Hennebert @cite_30 propose a P2P architecture for sensor discovery aiming at robustness, reliability, and energy efficiency. The authors present an ontology to describe the properties and functionalities of the subscribed devices and how to access them. The SPARQL language is used to search for specific devices in the ontologies, which are stored in a distributed manner over the nodes of the architecture.
{ "cite_N": [ "@cite_30" ], "mid": [ "2082670328" ], "abstract": [ "Nowadays, our surrounding environment is more and more scattered with various types of sensors. Due to their intrinsic properties and representation formats, they form small islands isolated from each other. In order to increase interoperability and release their full capabilities, we propose to represent devices descriptions including data and service invocation with a common model allowing to compose mashups of heterogeneous sensors. Pushing this paradigm further, we also propose to augment service descriptions with a discovery protocol easing automatic assimilation of knowledge. In this work, we describe the architecture supporting what can be called a Semantic Sensor Web-of-Things. As proof of concept, we apply our proposal to the domain of smart buildings, composing a novel ontology covering heterogeneous sensing, actuation and service invocation. Our architecture also emphasizes on the energetic aspect and is optimized for constrained environments." ] }
1611.05170
2953051901
Over the last few years, the number of smart objects connected to the Internet has grown exponentially in comparison to the number of services and applications. The integration between Cloud Computing and the Internet of Things, named Cloud of Things, plays a key role in managing the connected things, their data, and services. One of the main challenges in Cloud of Things is the resource discovery of the smart objects and their reuse in different contexts. Most of the existing work uses some kind of multi-criteria decision analysis algorithm to perform resource discovery, but does not evaluate the impact that user constraints have on the final solution. In this paper, we analyse the behaviour of the SAW, TOPSIS, and VIKOR multi-objective decision analysis algorithms and the impact of user constraints on them. We evaluate the quality of the proposed solutions using the Pareto-optimality concept.
@cite_27 uses the Domain Name System (DNS) as a scalable metadata repository to support entity discovery based on location. The authors propose the creation of a new domain, such as , which represents the entities of the real world. Thus, when a smart object becomes available, it must register its characteristics and services in the DNS repository.
{ "cite_N": [ "@cite_27" ], "mid": [ "2139405088" ], "abstract": [ "Sensor technology is becoming pervasive in our everyday lives, measuring the real world around us. The Internet of Things enables sensor devices to become active citizens of the Internet, while the Web of Things envisions interoperability between these devices and their services. An important problem remains the need for discovering these devices and services globally, ad hoc in real-time, within acceptable time delays. Attempting to solve this problem using the existing Internet infrastructure, we explore the exploitation of the Domain Name System (DNS) as a scalable and ubiquitous directory mechanism for embedded devices. We examine the feasibility of this approach by performing a simulation involving up to one million embedded devices, to test system performance and scalability. Finally, we discuss practical issues and the overall potential of this approach." ] }
1611.05170
2953051901
Over the last few years, the number of smart objects connected to the Internet has grown exponentially in comparison to the number of services and applications. The integration between Cloud Computing and the Internet of Things, named Cloud of Things, plays a key role in managing the connected things, their data, and services. One of the main challenges in Cloud of Things is the resource discovery of the smart objects and their reuse in different contexts. Most of the existing work uses some kind of multi-criteria decision analysis algorithm to perform resource discovery, but does not evaluate the impact that user constraints have on the final solution. In this paper, we analyse the behaviour of the SAW, TOPSIS, and VIKOR multi-objective decision analysis algorithms and the impact of user constraints on them. We evaluate the quality of the proposed solutions using the Pareto-optimality concept.
@cite_13 proposes an architecture aiming to provide smart object interoperability. Ontologies are used to describe these devices, which are accessible using the SPARQL language and semantic agents. It uses unique identifiers, named , to access and identify the devices of a specific network. The are stored inside distributed brokers, which are organized according to their location, owners, or data.
{ "cite_N": [ "@cite_13" ], "mid": [ "2064471871" ], "abstract": [ "Pervasive computing and Internet of Things (IoTs) paradigms have created a huge potential for new business. To fully realize this potential, there is a need for a common way to abstract the heterogeneity of devices so that their functionality can be represented as a virtual computing platform. To this end, we present novel semantic level interoperability architecture for pervasive computing and IoTs. There are two main principles in the proposed architecture. First, information and capabilities of devices are represented with semantic web knowledge representation technologies and interaction with devices and the physical world is achieved by accessing and modifying their virtual representations. Second, global IoT is divided into numerous local smart spaces managed by a semantic information broker (SIB) that provides a means to monitor and update the virtual representation of the physical world. An integral part of the architecture is a resolution infrastructure that provides a means to resolve the network address of a SIB either using a physical object identifier as a pointer to information or by searching SIBs matching a specification represented with SPARQL. We present several reference implementations and applications that we have developed to evaluate the architecture in practice. The evaluation also includes performance studies that, together with the applications, demonstrate the suitability of the architecture to real-life IoT scenarios. In addition, to validate that the proposed architecture conforms to the common IoT-A architecture reference model (ARM), we map the central components of the architecture to the IoT-ARM." ] }
1611.05170
2953051901
Over the last few years, the number of smart objects connected to the Internet has grown exponentially in comparison to the number of services and applications. The integration between Cloud Computing and the Internet of Things, named Cloud of Things, plays a key role in managing the connected things, their data, and services. One of the main challenges in Cloud of Things is the resource discovery of the smart objects and their reuse in different contexts. Most of the existing work uses some kind of multi-criteria decision analysis algorithm to perform resource discovery, but does not evaluate the impact that user constraints have on the final solution. In this paper, we analyse the behaviour of the SAW, TOPSIS, and VIKOR multi-objective decision analysis algorithms and the impact of user constraints on them. We evaluate the quality of the proposed solutions using the Pareto-optimality concept.
Carlson and Schrader @cite_28 present a search engine named Ambient Ocean to discover and select sensors using context information. This search engine uses locally stored metadata to define the device context and perform the search more efficiently. The search engine uses multi-task similarity models based on the Weighted Slope One algorithm. In scenarios where it is hard to model the device features, Ambient Ocean applies collaborative filtering techniques to compute the similarity between users or sensors using previous information.
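The Weighted Slope One scheme mentioned above can be sketched as follows, with made-up rating data: a prediction for an unrated item is the usage-weighted average of (the user's rating of each co-rated item plus the average pairwise deviation between the two items).

```python
# Hedged sketch of Weighted Slope One; the rating data is synthetic.
ratings = {                          # user -> {item: rating}
    "u1": {"a": 5.0, "b": 3.0, "c": 2.0},
    "u2": {"a": 3.0, "b": 4.0},
    "u3": {"b": 2.0, "c": 5.0},
}

def predict(user, target):
    num = den = 0.0
    for other, r_other in ratings[user].items():
        # Average deviation of `target` over `other` across users who rated
        # both items, weighted by how many such users exist.
        pairs = [(r[target], r[other]) for r in ratings.values()
                 if target in r and other in r]
        if not pairs:
            continue
        dev = sum(t - o for t, o in pairs) / len(pairs)
        num += (r_other + dev) * len(pairs)
        den += len(pairs)
    return num / den if den else None

pred = predict("u2", "c")            # = 10/3 for this toy data
```

The weighting by co-rating counts is what distinguishes the weighted variant from plain Slope One, making well-supported item pairs count more.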
{ "cite_N": [ "@cite_28" ], "mid": [ "2018589856" ], "abstract": [ "Context-awareness is becoming an important foundation of adaptive mobile systems, however, techniques for discovering contextually relevant Web content and Smart Devices (i.e., Smart Resources) remain consigned to small-scale deployments. To address this limitation, this paper introduces Ambient Ocean, a Web search engine for context-aware Smart Resource discovery. Ocean provides scalable mechanisms for supplementing Resources with expressive contextual metadata as a means of facilitating in-situ discovery and composition. Ocean supports queries based on arbitrary contextual data, such as location, biometric details, telemetry data, situational cues, sensor information, etc. Ocean utilizes a combination of crowd-sourcing, context-enhanced query expansion and personalization techniques to continually optimize query results over time. This paper presents Ocean's conceptual foundations, its reference implementation, and a preliminary evaluation that demonstrates significantly improved Smart Resource discovery results in real-world environments." ] }
1611.05170
2953051901
Over the last few years, the number of smart objects connected to the Internet has grown exponentially in comparison to the number of services and applications. The integration between Cloud Computing and the Internet of Things, named Cloud of Things, plays a key role in managing the connected things, their data, and their services. One of the main challenges in the Cloud of Things is the resource discovery of smart objects and their reuse in different contexts. Most of the existing work uses some kind of multi-criteria decision analysis algorithm to perform the resource discovery, but does not evaluate the impact that user constraints have on the final solution. In this paper, we analyse the behaviour of the SAW, TOPSIS and VIKOR multi-objective decision analysis algorithms and the impact of user constraints on them. We evaluate the quality of the proposed solutions using the Pareto-optimality concept.
@cite_23 presents an architecture named DQS-Cloud that optimizes sensor search, provides an autonomous fault-tolerance mechanism, and avoids SLA violations. The search is based on keywords and on the QoS attributes desired by the users. DQS-Cloud aims to minimize communication overhead by reusing data flows with similar QoS levels. The results show that DQS-Cloud was able to reduce bandwidth usage and processing load on the providers.
{ "cite_N": [ "@cite_23" ], "mid": [ "2050954217" ], "abstract": [ "With the advent of Internet of Things, the field of domain sensing is increasingly being servitized. In order to effectively support this servitization, there is a growing need for a powerful and easy-to-use infrastructure that enables seamless sharing of sensor data in real-time. In this paper, we present the design and evaluation of Data Quality-Aware Sensor Cloud (DQS-Cloud), a cloud-based sensor data services infrastructure. DQS-Cloud is characterized by three novel features. First, data-quality is pervasive throughout the infrastructure ranging from feed discovery to failure resilience. Second, it incorporates autonomic-computing-based techniques for dealing with sensor failures as well as data quality dynamics. Third, DQS-Cloud also features a unique sensor stream management engine that optimizes the system performance by dynamically placing stream management operators. This paper reports several experiments to study the effectiveness and the efficiency of the framework." ] }
1611.05170
2953051901
Over the last few years, the number of smart objects connected to the Internet has grown exponentially in comparison to the number of services and applications. The integration between Cloud Computing and the Internet of Things, named Cloud of Things, plays a key role in managing the connected things, their data, and their services. One of the main challenges in the Cloud of Things is the resource discovery of smart objects and their reuse in different contexts. Most of the existing work uses some kind of multi-criteria decision analysis algorithm to perform the resource discovery, but does not evaluate the impact that user constraints have on the final solution. In this paper, we analyse the behaviour of the SAW, TOPSIS and VIKOR multi-objective decision analysis algorithms and the impact of user constraints on them. We evaluate the quality of the proposed solutions using the Pareto-optimality concept.
@cite_20 proposes ACEIS, a middleware to integrate data flows at runtime. The sensors and their flows are described according to the SSN ontology and are stored in a repository together with their QoS and QoI attributes. ACEIS is able to search and select the registered sensors with regard to the specified QoS and QoI levels using the Simple-Additive-Weighting algorithm.
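The Simple-Additive-Weighting step can be sketched as follows; the QoS attribute names, weights, and sensor values are illustrative assumptions (normalization conventions for SAW vary, and this uses the common max/min ratio scheme):

```python
# SAW: normalize each criterion, then rank alternatives by weighted sum.
def saw_rank(alternatives, weights, benefit):
    # alternatives: {name: {criterion: value}}
    # benefit: set of criteria to maximize; the rest are cost criteria
    crits = list(weights)
    best = {c: max(a[c] for a in alternatives.values()) for c in crits}
    worst = {c: min(a[c] for a in alternatives.values()) for c in crits}
    scores = {}
    for name, a in alternatives.items():
        s = 0.0
        for c in crits:
            # benefit: value / column max; cost: column min / value
            norm = a[c] / best[c] if c in benefit else worst[c] / a[c]
            s += weights[c] * norm
        scores[name] = s
    return sorted(scores, key=scores.get, reverse=True)

sensors = {
    "sensorA": {"accuracy": 0.9, "latency": 120.0},
    "sensorB": {"accuracy": 0.7, "latency": 40.0},
}
ranking = saw_rank(sensors, {"accuracy": 0.6, "latency": 0.4}, benefit={"accuracy"})
print(ranking)  # → ['sensorB', 'sensorA']
```

Here the latency weight pushes the faster but less accurate sensor to the top; changing the weights shifts the trade-off, which is exactly the user-constraint sensitivity the surrounding papers analyse.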
{ "cite_N": [ "@cite_20" ], "mid": [ "2406692100" ], "abstract": [ "With the growing popularity of Internet of Things (IoT) technologies and sensors deployment, more and more cities are leaning towards the initiative of smart cities. Smart city applications are mostly developed with aims to solve domain-specific problems. Hence, lacking the ability to automatically discover and integrate heterogeneous sensor data streams on the fly. To provide a domain-independent platform and take full benefits from semantic technologies, in this paper we present an Automated Complex Event I mplementation System (ACEIS), which serves as a middleware between sensor data streams and smart city applications. ACEIS discovers and integrates IoT streams in urban infrastructures for users' requirements expressed as complex event requests, based on semantic IoT stream descriptions. It also processes complex event patterns on the fly using semantic data streams." ] }
1611.05170
2953051901
Over the last few years, the number of smart objects connected to the Internet has grown exponentially in comparison to the number of services and applications. The integration between Cloud Computing and the Internet of Things, named Cloud of Things, plays a key role in managing the connected things, their data, and their services. One of the main challenges in the Cloud of Things is the resource discovery of smart objects and their reuse in different contexts. Most of the existing work uses some kind of multi-criteria decision analysis algorithm to perform the resource discovery, but does not evaluate the impact that user constraints have on the final solution. In this paper, we analyse the behaviour of the SAW, TOPSIS and VIKOR multi-objective decision analysis algorithms and the impact of user constraints on them. We evaluate the quality of the proposed solutions using the Pareto-optimality concept.
@cite_3 presents the CASSARAM framework to perform sensor search and selection with regard to user context properties. It uses the Semantic Sensor Network (SSN) ontology to retrieve and model user context properties. CASSARAM users can specify semi-negotiable context properties, which allow context property values to be defined within a range. The proposed Relational-Expression-based Filtering can thus be applied to discard irrelevant sensors during semantic querying. In addition, Comparative-Priority-based Heuristic Filtering removes the sensors that are far from the ideal point, prioritizing the top-k selection.
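A minimal illustration of ideal-point-based top-k filtering in the spirit of CASSARAM's heuristic; the property names, user weights, and the use of a weighted Euclidean distance over normalized properties are assumptions for illustration, not CASSARAM's exact formulation:

```python
# Rank sensors by weighted distance to the user's ideal point in
# normalized property space, then keep only the top-k closest.
import math

def top_k_by_ideal_point(sensors, ideal, weights, k):
    # sensors: {name: {property: value in [0, 1]}}
    # ideal: target value per property; weights: user priorities
    def dist(props):
        return math.sqrt(sum(weights[p] * (props[p] - ideal[p]) ** 2
                             for p in ideal))
    return sorted(sensors, key=lambda n: dist(sensors[n]))[:k]

sensors = {
    "s1": {"reliability": 0.9, "battery": 0.2},
    "s2": {"reliability": 0.8, "battery": 0.9},
    "s3": {"reliability": 0.3, "battery": 0.8},
}
ideal = {"reliability": 1.0, "battery": 1.0}
selected = top_k_by_ideal_point(sensors, ideal,
                                {"reliability": 0.7, "battery": 0.3}, k=2)
print(selected)  # → ['s2', 's1']
```

Sensors far from the ideal point are pruned early, so the more expensive semantic querying only has to consider the top-k candidates.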
{ "cite_N": [ "@cite_3" ], "mid": [ "1968164389" ], "abstract": [ "The Internet of Things (IoT) is part of the Internet of the future and will comprise billions of intelligent communicating “things” or Internet Connected Objects (ICOs) that will have sensing, actuating, and data processing capabilities. Each ICO will have one or more embedded sensors that will capture potentially enormous amounts of data. The sensors and related data streams can be clustered physically or virtually, which raises the challenge of searching and selecting the right sensors for a query in an efficient and effective way. This paper proposes a context-aware sensor search, selection, and ranking model, called CASSARAM, to address the challenge of efficiently selecting a subset of relevant sensors out of a large set of sensors with similar functionality and capabilities. CASSARAM considers user preferences and a broad range of sensor characteristics such as reliability, accuracy, location, battery life, and many more. This paper highlights the importance of sensor search, selection and ranking for the IoT, identifies important characteristics of both sensors and data capture processes, and discusses how semantic and quantitative reasoning can be combined together. This paper also addresses challenges such as efficient distributed sensor search and relational-expression based filtering. CASSARAM testing and performance evaluation results are presented and discussed." ] }
1611.05109
2950374204
Pooling second-order local feature statistics to form a high-dimensional bilinear feature has been shown to achieve state-of-the-art performance on a variety of fine-grained classification tasks. To address the computational demands of high feature dimensionality, we propose to represent the covariance features as a matrix and apply a low-rank bilinear classifier. The resulting classifier can be evaluated without explicitly computing the bilinear feature map which allows for a large reduction in the compute time as well as decreasing the effective number of parameters to be learned. To further compress the model, we propose classifier co-decomposition that factorizes the collection of bilinear classifiers into a common factor and compact per-class terms. The co-decomposition idea can be deployed through two convolutional layers and trained in an end-to-end architecture. We suggest a simple yet effective initialization that avoids explicitly first training and factorizing the larger bilinear classifiers. Through extensive experiments, we show that our model achieves state-of-the-art performance on several public datasets for fine-grained classification trained with only category labels. Importantly, our final model is an order of magnitude smaller than the recently proposed compact bilinear model, and three orders smaller than the standard bilinear CNN model.
One approach to dealing with such nuisance parameters has been to exploit strong supervision, such as detailed part-level, keypoint-level and attribute annotations @cite_33 @cite_8 @cite_26 . These methods learn to localize semantic parts or keypoints and extract corresponding features which are used as a holistic representation for final classification. Strong supervision with part annotations has been shown to significantly improve the fine-grained recognition accuracy. However, such supervised annotations are costly to obtain.
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_8" ], "mid": [ "1898560071", "2275770195", "2949334740" ], "abstract": [ "Scaling up fine-grained recognition to all domains of fine-grained objects is a challenge the computer vision community will need to face in order to realize its goal of recognizing all object categories. Current state-of-the-art techniques rely heavily upon the use of keypoint or part annotations, but scaling up to hundreds or thousands of domains renders this annotation cost-prohibitive for all but the most important categories. In this work we propose a method for fine-grained recognition that uses no part annotations. Our method is based on generating parts using co-segmentation and alignment, which we combine in a discriminative mixture. Experimental results show its efficacy, demonstrating state-of-the-art results even when compared to methods that use part annotations during training.", "Pose variation and subtle differences in appearance are key challenges to fine-grained classification. While deep networks have markedly improved general recognition, many approaches to fine-grained recognition rely on anchoring networks to parts for better accuracy. Identifying parts to find correspondence discounts pose variation so that features can be tuned to appearance. To this end previous methods have examined how to find parts and extract pose-normalized features. These methods have generally separated fine-grained recognition into stages which first localize parts using hand-engineered and coarsely-localized proposal features, and then separately learn deep descriptors centered on inferred part positions. We unify these steps in an end-to-end trainable network supervised by keypoint locations and class labels that localizes parts by a fully convolutional network to focus the learning of feature representations for the fine-grained classification task. 
Experiments on the popular CUB200 dataset show that our method is state-of-the-art and suggest a continuing role for strong supervision.", "In the context of fine-grained visual categorization, the ability to interpret models as human-understandable visual manuals is sometimes as important as achieving high classification accuracy. In this paper, we propose a novel Part-Stacked CNN architecture that explicitly explains the fine-grained recognition process by modeling subtle differences from object parts. Based on manually-labeled strong part annotations, the proposed architecture consists of a fully convolutional network to locate multiple object parts and a two-stream classification network that en- codes object-level and part-level cues simultaneously. By adopting a set of sharing strategies between the computation of multiple object parts, the proposed architecture is very efficient running at 20 frames sec during inference. Experimental results on the CUB-200-2011 dataset reveal the effectiveness of the proposed architecture, from both the perspective of classification accuracy and model interpretability." ] }
1611.05109
2950374204
Pooling second-order local feature statistics to form a high-dimensional bilinear feature has been shown to achieve state-of-the-art performance on a variety of fine-grained classification tasks. To address the computational demands of high feature dimensionality, we propose to represent the covariance features as a matrix and apply a low-rank bilinear classifier. The resulting classifier can be evaluated without explicitly computing the bilinear feature map which allows for a large reduction in the compute time as well as decreasing the effective number of parameters to be learned. To further compress the model, we propose classifier co-decomposition that factorizes the collection of bilinear classifiers into a common factor and compact per-class terms. The co-decomposition idea can be deployed through two convolutional layers and trained in an end-to-end architecture. We suggest a simple yet effective initialization that avoids explicitly first training and factorizing the larger bilinear classifiers. Through extensive experiments, we show that our model achieves state-of-the-art performance on several public datasets for fine-grained classification trained with only category labels. Importantly, our final model is an order of magnitude smaller than the recently proposed compact bilinear model, and three orders smaller than the standard bilinear CNN model.
To alleviate the costly collection of part annotations, some have proposed to utilize interactive learning @cite_36 . Partially supervised discovery of discriminative parts from category labels is also a compelling approach @cite_24 , especially given the effectiveness of training with web-scale datasets @cite_16 . One approach to unsupervised part discovery @cite_12 @cite_22 uses saliency maps, leveraging the observation that sparse deep CNN feature activations often correspond to semantically meaningful regions @cite_6 @cite_1 . Another recent approach @cite_19 selects parts from a pool of patch candidates by searching over patch triplets, but relies heavily on the training images being aligned w.r.t. the object pose. Spatial transformer networks @cite_31 are a very general formulation that explicitly models latent transformations to align feature maps prior to classification. They can be trained end-to-end using only a classification loss and have achieved state-of-the-art performance on the very challenging CUB bird dataset @cite_32 , but the resulting models are large and stable optimization is non-trivial.
{ "cite_N": [ "@cite_22", "@cite_36", "@cite_1", "@cite_32", "@cite_6", "@cite_24", "@cite_19", "@cite_31", "@cite_16", "@cite_12" ], "mid": [ "2949194058", "2293277011", "", "", "2952186574", "2346933327", "", "2951005624", "2951558862", "2962851944" ], "abstract": [ "Current fine-grained classification approaches often rely on a robust localization of object parts to extract localized feature representations suitable for discrimination. However, part localization is a challenging task due to the large variation of appearance and pose. In this paper, we show how pre-trained convolutional neural networks can be used for robust and efficient object part discovery and localization without the necessity to actually train the network on the current dataset. Our approach called \"part detector discovery\" (PDD) is based on analyzing the gradient maps of the network outputs and finding activation centers spatially related to annotated semantic parts or bounding boxes. This allows us not just to obtain excellent performance on the CUB200-2011 dataset, but in contrast to previous approaches also to perform detection and bird classification jointly without requiring a given bounding box annotation during testing and ground-truth parts during training. The code is available at this http URL and this https URL", "Existing fine-grained visual categorization methods often suffer from three challenges: lack of training data, large number of fine-grained categories, and high intraclass vs. low inter-class variance. In this work we propose a generic iterative framework for fine-grained categorization and dataset bootstrapping that handles these three challenges. Using deep metric learning with humans in the loop, we learn a low dimensional feature embedding with anchor points on manifolds for each category. These anchor points capture intra-class variances and remain discriminative between classes. 
In each round, images with high confidence scores from our model are sent to humans for labeling. By comparing with exemplar images, labelers mark each candidate image as either a \"true positive\" or a \"false positive\". True positives are added into our current dataset and false positives are regarded as \"hard negatives\" for our metric learning model. Then the model is retrained with an expanded dataset and hard negatives for the next round. To demonstrate the effectiveness of the proposed framework, we bootstrap a fine-grained flower dataset with 620 categories from Instagram images. The proposed deep metric learning scheme is evaluated on both our dataset and the CUB-200-2001 Birds dataset. Experimental evaluations show significant performance gain using dataset bootstrapping and demonstrate state-of-the-art results achieved by the proposed deep metric learning methods.", "", "", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "We propose a robust approach for performing automatic species-level recognition of fossil pollen grains in microscopy images that exploits both global shape and local texture characteristics in a patch-based matching methodology. 
We introduce a novel criteria for selecting meaningful and discriminative exemplar patches. We optimize this function during training using a greedy submodular function optimization framework that gives a near-optimal solution with bounded approximation error. We use these selected exemplars as a dictionary basis and propose a spatially-aware sparse coding method to match testing images for identification while maintaining global shape correspondence. To accelerate the coding process for fast matching, we introduce a relaxed form that uses spatially-aware soft-thresholding during coding. Finally, we carry out an experimental study that demonstrates the effectiveness and efficiency of our exemplar selection and classification mechanisms, achieving @math accuracy on a difficult fine-grained species classification task distinguishing three types of fossil spruce pollen.", "", "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. 
We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.", "Current approaches for fine-grained recognition do the following: First, recruit experts to annotate a dataset of images, optionally also collecting more structured data in the form of part annotations and bounding boxes. Second, train a model utilizing this data. Toward the goal of solving fine-grained recognition, we introduce an alternative approach, leveraging free, noisy data from the web and simple, generic methods of recognition. This approach has benefits in both performance and scalability. We demonstrate its efficacy on four fine-grained datasets, greatly exceeding existing state of the art without the manual collection of even a single label, and furthermore show first results at scaling to more than 10,000 fine-grained categories. Quantitatively, we achieve top-1 accuracies of 92.3 on CUB-200-2011, 85.4 on Birdsnap, 93.4 on FGVC-Aircraft, and 80.8 on Stanford Dogs without using their annotated training sets. We compare our approach to an active learning approach for expanding fine-grained datasets.", "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. 
Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13]." ] }
1611.05109
2950374204
Pooling second-order local feature statistics to form a high-dimensional bilinear feature has been shown to achieve state-of-the-art performance on a variety of fine-grained classification tasks. To address the computational demands of high feature dimensionality, we propose to represent the covariance features as a matrix and apply a low-rank bilinear classifier. The resulting classifier can be evaluated without explicitly computing the bilinear feature map which allows for a large reduction in the compute time as well as decreasing the effective number of parameters to be learned. To further compress the model, we propose classifier co-decomposition that factorizes the collection of bilinear classifiers into a common factor and compact per-class terms. The co-decomposition idea can be deployed through two convolutional layers and trained in an end-to-end architecture. We suggest a simple yet effective initialization that avoids explicitly first training and factorizing the larger bilinear classifiers. Through extensive experiments, we show that our model achieves state-of-the-art performance on several public datasets for fine-grained classification trained with only category labels. Importantly, our final model is an order of magnitude smaller than the recently proposed compact bilinear model, and three orders smaller than the standard bilinear CNN model.
Recently, a surprisingly simple method called bilinear pooling @cite_30 has achieved state-of-the-art performance on a variety of fine-grained classification problems. Bilinear pooling collects second-order statistics of local features over a whole image to form a holistic representation for classification. Second-order or higher-order statistics have been explored in a number of vision tasks (see e.g. @cite_27 @cite_28 ). In the context of fine-grained recognition, spatial pooling introduces invariance to deformations while second-order statistics maintain selectivity.
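The pooling operation itself can be sketched in a few lines; the feature-map shape is illustrative, and the signed square-root and L2 normalization steps follow common practice for second-order features rather than being specific to @cite_30 :

```python
# Bilinear pooling: average the outer products of local CNN features over
# all spatial locations to get a C*C-dimensional holistic descriptor.
import numpy as np

def bilinear_pool(feat):
    # feat: (H, W, C) feature map -> (C * C,) pooled descriptor
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)
    pooled = x.T @ x / (h * w)           # mean of outer products x_i x_i^T
    y = pooled.reshape(-1)
    y = np.sign(y) * np.sqrt(np.abs(y))  # signed square-root normalization
    return y / (np.linalg.norm(y) + 1e-12)  # L2 normalization

rng = np.random.default_rng(0)
f = rng.normal(size=(7, 7, 16))
v = bilinear_pool(f)
print(v.shape)  # → (256,): dimensionality grows as C^2
```

The quadratic growth in output dimension (C^2 for C channels) is precisely the cost that motivates the compact and low-rank variants discussed next in this section.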
{ "cite_N": [ "@cite_30", "@cite_27", "@cite_28" ], "mid": [ "", "78159342", "2443864910" ], "abstract": [ "", "Feature extraction, coding and pooling, are important components on many contemporary object recognition paradigms. In this paper we explore novel pooling techniques that encode the second-order statistics of local descriptors inside a region. To achieve this effect, we introduce multiplicative second-order analogues of average and max-pooling that together with appropriate non-linearities lead to state-of-the-art performance on free-form region recognition, without any type of feature coding. Instead of coding, we found that enriching local descriptors with additional image information leads to large performance gains, especially in conjunction with the proposed pooling methodology. We show that second-order pooling over free-form regions produces results superior to those of the winning systems in the Pascal VOC 2011 semantic segmentation challenge, with models that are 20,000 times faster.", "Super-symmetric tensors – a higher-order extension of scatter matrices – are becoming increasingly popular in machine learning and computer vision for modeling data statistics, co-occurrences, or even as visual descriptors. They were shown recently to outperform second-order approaches [18], however, the size of these tensors are exponential in the data dimensionality, which is a significant concern. In this paper, we study third-order supersymmetric tensor descriptors in the context of dictionary learning and sparse coding. For this purpose, we propose a novel non-linear third-order texture descriptor. Our goal is to approximate these tensors as sparse conic combinations of atoms from a learned dictionary. 
Apart from the significant benefits to tensor compression that this framework offers, our experiments demonstrate that the sparse coefficients produced by this scheme lead to better aggregation of high-dimensional data and showcase superior performance on two common computer vision tasks compared to the state of the art." ] }
1611.05109
2950374204
Pooling second-order local feature statistics to form a high-dimensional bilinear feature has been shown to achieve state-of-the-art performance on a variety of fine-grained classification tasks. To address the computational demands of high feature dimensionality, we propose to represent the covariance features as a matrix and apply a low-rank bilinear classifier. The resulting classifier can be evaluated without explicitly computing the bilinear feature map which allows for a large reduction in the compute time as well as decreasing the effective number of parameters to be learned. To further compress the model, we propose classifier co-decomposition that factorizes the collection of bilinear classifiers into a common factor and compact per-class terms. The co-decomposition idea can be deployed through two convolutional layers and trained in an end-to-end architecture. We suggest a simple yet effective initialization that avoids explicitly first training and factorizing the larger bilinear classifiers. Through extensive experiments, we show that our model achieves state-of-the-art performance on several public datasets for fine-grained classification trained with only category labels. Importantly, our final model is an order of magnitude smaller than the recently proposed compact bilinear model, and three orders smaller than the standard bilinear CNN model.
However, the representational power of bilinear features comes at the cost of very high-dimensional feature representations (see Figure (b)), which induce substantial computational burdens and require large quantities of training data to fit. To reduce the model size, Gao et al. @cite_18 proposed compact models based on either random Maclaurin @cite_17 or tensor sketch @cite_25 projections. These methods approximate the classifier applied to the bilinear pooled feature by the Hadamard product of projected local features with a large random matrix (Figure (c)). These compact models maintain similar performance to the full bilinear feature with a 90% reduction in the number of learned parameters.
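A hedged sketch of the Tensor Sketch projection @cite_25 that compact bilinear pooling @cite_18 builds on: two count sketches of a local feature are circularly convolved (via FFT), so that inner products of sketches approximate the degree-2 polynomial kernel ⟨x, y⟩² without ever forming the C²-dimensional bilinear feature. Dimensions and random seeds here are illustrative:

```python
# Tensor Sketch for the degree-2 polynomial kernel.
import numpy as np

def make_sketch(c, d, rng):
    # random hash h: [c] -> [d] and random signs s: [c] -> {-1, +1}
    return rng.integers(0, d, size=c), rng.choice([-1.0, 1.0], size=c)

def count_sketch(x, h, s, d):
    out = np.zeros(d)
    np.add.at(out, h, s * x)  # scatter-add signed entries into d buckets
    return out

def tensor_sketch(x, sk1, sk2, d):
    # circular convolution of the two count sketches, done in Fourier space
    c1 = np.fft.fft(count_sketch(x, *sk1, d))
    c2 = np.fft.fft(count_sketch(x, *sk2, d))
    return np.real(np.fft.ifft(c1 * c2))

rng = np.random.default_rng(0)
c, d = 64, 4096
sk1, sk2 = make_sketch(c, d, rng), make_sketch(c, d, rng)
x, y = rng.normal(size=c), rng.normal(size=c)
approx = tensor_sketch(x, sk1, sk2, d) @ tensor_sketch(y, sk1, sk2, d)
exact = (x @ y) ** 2
print(approx, exact)  # the sketch inner product estimates <x, y>^2
```

Because the sketch dimension d can be far smaller than C², the downstream classifier operates on a vector of length d instead of C², which is where the parameter savings come from.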
{ "cite_N": [ "@cite_18", "@cite_25", "@cite_17" ], "mid": [ "2963066927", "2146897752", "2105527258" ], "abstract": [ "Bilinear models has been shown to achieve impressive performance on a wide range of visual tasks, such as semantic segmentation, fine grained recognition and face recognition. However, bilinear features are high dimensional, typically on the order of hundreds of thousands to a few million, which makes them impractical for subsequent analysis. We propose two compact bilinear representations with the same discriminative power as the full bilinear representation but with only a few thousand dimensions. Our compact representations allow back-propagation of classification errors enabling an end-to-end optimization of the visual recognition system. The compact bilinear representations are derived through a novel kernelized analysis of bilinear pooling which provide insights into the discriminative power of bilinear pooling, and a platform for further research in compact pooling methods. Experimentation illustrate the utility of the proposed representations for image classification and few-shot learning across several datasets.", "Approximation of non-linear kernels using random feature mapping has been successfully employed in large-scale data analysis applications, accelerating the training of kernel machines. While previous random feature mappings run in O(ndD) time for @math training samples in d-dimensional space and D random feature maps, we propose a novel randomized tensor product technique, called Tensor Sketching, for approximating any polynomial kernel in O(n(d+D D )) time. Also, we introduce both absolute and relative error bounds for our approximation to guarantee the reliability of our estimation algorithm. 
Empirically, Tensor Sketching achieves higher accuracy and often runs orders of magnitude faster than the state-of-the-art approach for large-scale real-world datasets.", "Approximating non-linear kernels using feature maps has gained a lot of interest in recent years due to applications in reducing training and testing times of SVM classifiers and other kernel based learning algorithms. We extend this line of work and present low distortion embeddings for dot product kernels into linear Euclidean spaces. We base our results on a classical result in harmonic analysis characterizing all dot product kernels and use it to define randomized feature maps into explicit low dimensional Euclidean spaces in which the native dot product provides an approximation to the dot product kernel with high confidence." ] }
1611.05053
2951152493
Reconstructing the detailed geometric structure of a face from a given image is a key to many computer vision and graphics applications, such as motion capture and reenactment. The reconstruction task is challenging as human faces vary extensively when considering expressions, poses, textures, and intrinsic geometries. While many approaches tackle this complexity by using additional data to reconstruct the face of a single subject, extracting facial surface from a single image remains a difficult problem. As a result, single-image based methods can usually provide only a rough estimate of the facial geometry. In contrast, we propose to leverage the power of convolutional neural networks to produce a highly detailed face reconstruction from a single image. For this purpose, we introduce an end-to-end CNN framework which derives the shape in a coarse-to-fine fashion. The proposed architecture is composed of two main blocks, a network that recovers the coarse facial geometry (CoarseNet), followed by a CNN that refines the facial features of that geometry (FineNet). The proposed networks are connected by a novel layer which renders a depth image given a mesh in 3D. Unlike object recognition and detection problems, there are no suitable datasets for training CNNs to perform face geometry reconstruction. Therefore, our training regime begins with a supervised phase, based on synthetic images, followed by an unsupervised phase that uses only unconstrained facial images. The accuracy and robustness of the proposed model is demonstrated by both qualitative and quantitative evaluation tests.
In @cite_26 , Blanz and Vetter introduced the 3D Morphable Model (3DMM), a principal components analysis (PCA) basis for representing faces. One of the advantages of using the 3DMM is that the solution space is constrained to represent only likely solutions, thereby simplifying the problem. While the original paper assumes manual initialization, more recent efforts propose an automatic reconstruction process @cite_21 @cite_30 . Still, the automated initialization pipelines usually do not produce the same quality of reconstructions when only one image is used, as noted in @cite_47 . In addition, the 3DMM solutions cannot extract fine details since they are not spanned by the principal components.
{ "cite_N": [ "@cite_30", "@cite_47", "@cite_26", "@cite_21" ], "mid": [ "", "2464650832", "2237250383", "2156119076" ], "abstract": [ "", "Automated 3D reconstruction of faces from images is challenging if the image material is difficult in terms of pose, lighting, occlusions and facial expressions, and if the initial 2D feature positions are inaccurate or unreliable. We propose a method that reconstructs individual 3D shapes from multiple single images of one person, judges their quality and then combines the best of all results. This is done separately for different regions of the face. The core element of this algorithm and the focus of our paper is a quality measure that judges a reconstruction without information about the true shape. We evaluate different quality measures, develop a method for combining results, and present a complete processing pipeline for automated reconstruction.", "In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. 
We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness.", "This paper presents a fully automated algorithm for reconstructing a textured 3D model of a face from a single photograph or a raw video stream. The algorithm is based on a combination of Support Vector Machines (SVMs) and a Morphable Model of 3D faces. After SVM face detection, individual facial features are detected using a novel regression- and classification-based approach, and probabilistically plausible configurations of features are selected to produce a list of candidates for several facial feature positions. In the next step, the configurations of feature points are evaluated using a novel criterion that is based on a Morphable Model and a combination of linear projections. To make the algorithm robust with respect to head orientation, this process is iterated while the estimate of pose is refined. Finally, the feature points initialize a model-fitting procedure of the Morphable Model. The result is a high resolution 3D surface model." ] }
1611.05053
2951152493
Reconstructing the detailed geometric structure of a face from a given image is a key to many computer vision and graphics applications, such as motion capture and reenactment. The reconstruction task is challenging as human faces vary extensively when considering expressions, poses, textures, and intrinsic geometries. While many approaches tackle this complexity by using additional data to reconstruct the face of a single subject, extracting facial surface from a single image remains a difficult problem. As a result, single-image based methods can usually provide only a rough estimate of the facial geometry. In contrast, we propose to leverage the power of convolutional neural networks to produce a highly detailed face reconstruction from a single image. For this purpose, we introduce an end-to-end CNN framework which derives the shape in a coarse-to-fine fashion. The proposed architecture is composed of two main blocks, a network that recovers the coarse facial geometry (CoarseNet), followed by a CNN that refines the facial features of that geometry (FineNet). The proposed networks are connected by a novel layer which renders a depth image given a mesh in 3D. Unlike object recognition and detection problems, there are no suitable datasets for training CNNs to perform face geometry reconstruction. Therefore, our training regime begins with a supervised phase, based on synthetic images, followed by an unsupervised phase that uses only unconstrained facial images. The accuracy and robustness of the proposed model is demonstrated by both qualitative and quantitative evaluation tests.
An alternative approach is to solve the problem by deforming a template to match the input image. One notable paper is that of Kemelmacher-Shlizerman and Basri @cite_5 . There, a reference model is aligned with the face image and a shape-from-shading (SfS) process is applied to mold the reference model to better match the image. Similarly, Hassner @cite_16 proposed to jointly maximize the appearance and depth similarities between the input image and a template face using SIFTflow @cite_38 . While these methods do a better job in recovering the fine facial features, their capability to capture the global face structure is limited by the provided template initialization.
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_38" ], "mid": [ "2146566773", "2136863438", "" ], "abstract": [ "Human faces are remarkably similar in global properties, including size, aspect ratio, and location of main features, but can vary considerably in details across individuals, gender, race, or due to facial expression. We propose a novel method for 3D shape recovery of faces that exploits the similarity of faces. Our method obtains as input a single image and uses a mere single 3D reference model of a different person's face. Classical reconstruction methods from single images, i.e., shape-from-shading, require knowledge of the reflectance properties and lighting as well as depth values for boundary conditions. Recent methods circumvent these requirements by representing input faces as combinations (of hundreds) of stored 3D models. We propose instead to use the input image as a guide to \"mold” a single reference model to reach a reconstruction of the sought 3D shape. Our method assumes Lambertian reflectance and uses harmonic representations of lighting. It has been tested on images taken under controlled viewing conditions as well as on uncontrolled images downloaded from the Internet, demonstrating its accuracy and robustness under a variety of imaging conditions and overcoming significant differences in shape between the input and reference individuals including differences in facial expressions, gender, and race.", "We present a data-driven method for estimating the 3D shapes of faces viewed in single, unconstrained photos (aka \"in-the-wild\"). Our method was designed with an emphasis on robustness and efficiency - with the explicit goal of deployment in real-world applications which reconstruct and display faces in 3D. Our key observation is that for many practical applications, warping the shape of a reference face to match the appearance of a query, is enough to produce realistic impressions of the query's 3D shape. 
Doing so, however, requires matching visual features between the (possibly very different) query and reference images, while ensuring that a plausible face shape is produced. To this end, we describe an optimization process which seeks to maximize the similarity of appearances and depths, jointly, to those of a reference model. We describe our system for monocular face shape reconstruction and present both qualitative and quantitative experiments, comparing our method against alternative systems, and demonstrating its capabilities. Finally, as a testament to its suitability for real-world applications, we offer an open, on-line implementation of our system, providing unique means of instant 3D viewing of faces appearing in web photos.", "" ] }
1611.05053
2951152493
Reconstructing the detailed geometric structure of a face from a given image is a key to many computer vision and graphics applications, such as motion capture and reenactment. The reconstruction task is challenging as human faces vary extensively when considering expressions, poses, textures, and intrinsic geometries. While many approaches tackle this complexity by using additional data to reconstruct the face of a single subject, extracting facial surface from a single image remains a difficult problem. As a result, single-image based methods can usually provide only a rough estimate of the facial geometry. In contrast, we propose to leverage the power of convolutional neural networks to produce a highly detailed face reconstruction from a single image. For this purpose, we introduce an end-to-end CNN framework which derives the shape in a coarse-to-fine fashion. The proposed architecture is composed of two main blocks, a network that recovers the coarse facial geometry (CoarseNet), followed by a CNN that refines the facial features of that geometry (FineNet). The proposed networks are connected by a novel layer which renders a depth image given a mesh in 3D. Unlike object recognition and detection problems, there are no suitable datasets for training CNNs to perform face geometry reconstruction. Therefore, our training regime begins with a supervised phase, based on synthetic images, followed by an unsupervised phase that uses only unconstrained facial images. The accuracy and robustness of the proposed model is demonstrated by both qualitative and quantitative evaluation tests.
A different approach to the problem uses some form of regression to connect the input image with the reconstruction representation. Some methods apply a regression model from a set of sparse landmarks @cite_11 @cite_48 @cite_8 , while others apply a regression on features derived from the image @cite_15 @cite_7 . @cite_19 applies a joint optimization process that ties the sparse landmarks to the face geometry, recovering both. Recently, a network was proposed to directly reconstruct the geometry from the image @cite_1 , without using sparse information or explicit features. That paper demonstrated the potential of using a network for face reconstruction. Still, it required external procedures for fine detail extraction, as well as an initial guess of the face location, size, and pose.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_48", "@cite_1", "@cite_19", "@cite_15", "@cite_11" ], "mid": [ "2162267177", "2276844532", "", "2519131448", "2520331172", "2117113028", "2115807037" ], "abstract": [ "In this paper, we apply partial least squares (PLS) regression to predict 3D face shape from a single image. PLS describes the relationship between independent (intensity images) and dependent (3D shape) variables by seeking directions in the space of the independent variables that are associated with high variations in the dependent variables. We exploit this idea to construct statistical models of intensity and 3D shape that express strongly linked variations in both spaces. The outcome of this decomposition is the construction of two different models which express coupled variations in 3D shape and intensity. Using the intensity model, a set of parameters is obtained from out-of-training intensity examples. These intensity parameters can then be used directly in the 3D shape model to approximate facial shape. Experiments show that prediction is achieved with reasonable accuracy.", "State-of-the-art methods reconstruct three-dimensional (3D) face shapes from a single image by fitting 3D face models to input images or by directly learning mapping functions between two-dimensional (2D) images and 3D faces. However, they are often difficult to use in real-world applications due to expensive online optimization or to the requirement of frontal face images. This paper approaches the 3D face reconstruction problem as a regression problem rather than a model fitting problem. Given an input face image along with some pre-defined facial landmarks on it, a series of shape adjustments to the initial 3D face shape are computed through cascaded regressors based on the deviations between the input landmarks and the landmarks obtained from the reconstructed 3D faces. 
The cascaded regressors are offline learned from a set of 3D faces and their corresponding 2D face images in various views. By treating the landmarks that are invisible in large view angles as missing data, the proposed method can handle arbitrary view face images in a unified way with the same regressors. Experiments on the BFM and Bosphorus databases demonstrate that the proposed method can reconstruct 3D faces from arbitrary view images more efficiently and more accurately than existing methods.", "", "Fast and robust three-dimensional reconstruction of facial geometric structure from a single image is a challenging task with numerous applications. Here, we introduce a learning-based approach for reconstructing a three-dimensional face from a single image. Recent face recovery methods rely on accurate localization of key characteristic points. In contrast, the proposed approach is based on a Convolutional-Neural-Network (CNN) which extracts the face geometry directly from its image. Although such deep architectures outperform other models in complex computer vision problems, training them properly requires a large dataset of annotated examples. In the case of three-dimensional faces, currently, there are no large volume data sets, while acquiring such big-data is a tedious task. As an alternative, we propose to generate random, yet nearly photo-realistic, facial images for which the geometric form is known. The suggested model successfully recovers facial shapes from real images, even for faces with extreme expressions and under various lighting conditions.", "We present an approach to simultaneously solve the two problems of face alignment and 3D face reconstruction from an input 2D face image of arbitrary poses and expressions. The proposed method iteratively and alternately applies two sets of cascaded regressors, one for updating 2D landmarks and the other for updating reconstructed pose-expression-normalized (PEN) 3D face shape. 
The 3D face shape and the landmarks are correlated via a 3D-to-2D mapping matrix. In each iteration, adjustment to the landmarks is firstly estimated via a landmark regressor, and this landmark adjustment is also used to estimate 3D face shape adjustment via a shape regressor. The 3D-to-2D mapping is then computed based on the adjusted 3D face shape and 2D landmarks, and it further refines the 2D landmarks. An effective algorithm is devised to learn these regressors based on a training dataset of pairing annotated 3D face shapes and 2D face images. Compared with existing methods, the proposed method can fully automatically generate PEN 3D face shapes in real time from a single 2D face image and locate both visible and invisible 2D landmarks. Extensive experiments show that the proposed method can achieve the state-of-the-art accuracy in both face alignment and 3D face reconstruction, and benefit face recognition owing to its reconstructed PEN 3D face shapes.", "In this paper, we propose a new approach for face shape recovery from a single image. A single near infrared (NIR) image is used as the input, and a mapping from the NIR tensor space to 3D tensor space, learned by using statistical learning, is used for the shape recovery. In the learning phase, the two tensor models are constructed for NIR and 3D images respectively, and a canonical correlation analysis (CCA) based multi-variate mapping from NIR to 3D faces is learned from a given training set of NIR-3D face pairs. In the reconstruction phase, given an NIR face image, the depth map is computed directly using the learned mapping with the help of tensor models. Experimental results are provided to evaluate the accuracy and speed of the method. The work provides a practical solution for reliable and fast shape recovery and modeling of 3D objects.", "In this paper, we present a robust and efficient method to statistically recover the full 3D shape and texture of faces from single 2D images. 
We separate shape and texture recovery into two linear problems. For shape recovery, we learn empirically the generalization error of a 3D morphable model using out-of-sample data. We use this to predict the 2D variance associated with a sparse set of 2D feature points. This knowledge is incorporated into a parameter-free probabilistic framework which allows 3D shape recovery of a face in an arbitrary pose in a single step. Under the assumption of diffuseonly reflectance, we also show how photometric invariants can be used to recover texture parameters in an illumination insensitive manner. We present empirical results with comparison to the state-of-the-art analysis-by-synthesis methods and show an application of our approach to adjusting the pose of subjects in oil paintings." ] }
1611.04660
2574301230
Our aging population increasingly suffers from multiple chronic diseases simultaneously, necessitating the comprehensive treatment of these conditions. Finding the optimal set of drugs for a combinatorial set of diseases is a combinatorial pattern exploration problem. Association rule mining is a popular tool for such problems, but the requirement of health care for finding causal, rather than associative, patterns renders association rule mining unsuitable. To address this issue, we propose a novel framework based on the Rubin-Neyman causal model for extracting causal rules from observational data, correcting for a number of common biases. Specifically, given a set of interventions and a set of items that define subpopulations (e.g., diseases), we wish to find all subpopulations in which effective intervention combinations exist and in each such subpopulation, we wish to find all intervention combinations such that dropping any intervention from this combination will reduce the efficacy of the treatment. A key aspect of our framework is the concept of closed intervention sets which extend the concept of quantifying the effect of a single intervention to a set of concurrent interventions. We also evaluated our causal rule mining framework on the Electronic Health Records (EHR) data of a large cohort of patients from Mayo Clinic and showed that the patterns we extracted are sufficiently rich to explain the controversial findings in the medical literature regarding the effect of a class of cholesterol drugs on Type-II Diabetes Mellitus (T2DM).
Knowing the correct graph structure is important, because substructures in the graph are suggestive of sources of bias. To correct for biases, we are looking for specific substructures. For example, causal chains can be sources of overcorrection bias and "V"-shaped structures can be indicative of confounding or endogenous selection bias @cite_13 . Many other interesting substructures have been studied @cite_5 @cite_16 @cite_11 . In our work, we consider three fundamental such structures: direct causal effect, indirect causal effect and confounding. Of these, confounding is the most severe and received the most research interest.
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "1596022446", "1961009203", "2009187570", "1845768777" ], "abstract": [ "This paper presents a simple, efficient computer-based method for discovering causal relationships from databases that contain observational data. Observational data is passively observed, as contrasted with experimental data. Most of the databases available for data mining are observational. There is great potential for mining such databases to discover causal relationships. We illustrate how observational data can constrain the causal relationships among measured variables, sometimes to the point that we can conclude that one variable is causing another variable. The presentation here is based on a constraint-based approach to causal discovery. A primary purpose of this paper is to present the constraint-based causal discovery method in the simplest possible fashion in order to (1) readily convey the basic ideas that underlie more complex constraint-based causal discovery techniques, and (2) permit interested readers to rapidly program and apply the method to their own databases, as a start toward using more elaborate causal discovery algorithms.", "Mining for association rules in market basket data has proved a fruitful area of research. Measures such as conditional probability (confidence) and correlation have been used to infer rules of the form “the existence of item A implies the existence of item B.” However, such rules indicate only a statistical relationship between A and B. They do not specify the nature of the relationship: whether the presence of A causes the presence of B, or the converse, or some other attribute or phenomenon causes both to appear together. In applications, knowing such causal relationships is extremely useful for enhancing understanding and effecting change. 
While distinguishing causality from correlation is a truly difficult problem, recent work in statistics and Bayesian learning provides some avenues of attack. In these fields, the goal has generally been to learn complete causal models, which are essentially impossible to learn in large-scale data mining applications with a large number of variables. In this paper, we consider the problem of determining causal relationships, instead of mere associations, when mining market basket data. We identify some problems with the direct application of Bayesian learning ideas to mining large databases, concerning both the scalability of algorithms and the appropriateness of the statistical techniques, and introduce some initial ideas for dealing with these problems. We present experimental results from applying our algorithms on several large, real-world data sets. The results indicate that the approach proposed here is both computationally feasible and successful in identifying interesting causal structures. An interesting outcome is that it is perhaps easier to infer the lack of causality than to infer causality, information that is useful in preventing erroneous decision making.", "In observational studies with exposures or treatments that vary over time, standard approaches for adjustment of confounding are biased when there exist time-dependent confounders that are also affected by previous treatment. This paper introduces marginal structural models, a new class of causal models", "There are several existing algorithms that under appropriate assumptions can reliably identify a subset of the underlying causal relationships from observational data. This paper introduces the first computationally feasible score-based algorithm that can reliably identify causal relationships in the large sample limit for discrete models, while allowing for the possibility that there are unobserved common causes. 
In doing so, the algorithm does not ever need to assign scores to causal structures with unobserved common causes. The algorithm is based on the identification of so called Y substructures within Bayesian network structures that can be learned from observational data. An example of a Y substructure is A -> C, B -> C, C -> D. After providing background on causal discovery, the paper proves the conditions under which the algorithm is reliable in the large sample limit." ] }
1611.04660
2574301230
Our aging population increasingly suffers from multiple chronic diseases simultaneously, necessitating the comprehensive treatment of these conditions. Finding the optimal set of drugs for a combinatorial set of diseases is a combinatorial pattern exploration problem. Association rule mining is a popular tool for such problems, but the requirement of health care for finding causal, rather than associative, patterns renders association rule mining unsuitable. To address this issue, we propose a novel framework based on the Rubin-Neyman causal model for extracting causal rules from observational data, correcting for a number of common biases. Specifically, given a set of interventions and a set of items that define subpopulations (e.g., diseases), we wish to find all subpopulations in which effective intervention combinations exist and in each such subpopulation, we wish to find all intervention combinations such that dropping any intervention from this combination will reduce the efficacy of the treatment. A key aspect of our framework is the concept of closed intervention sets which extend the concept of quantifying the effect of a single intervention to a set of concurrent interventions. We also evaluated our causal rule mining framework on the Electronic Health Records (EHR) data of a large cohort of patients from Mayo Clinic and showed that the patterns we extracted are sufficiently rich to explain the controversial findings in the medical literature regarding the effect of a class of cholesterol drugs on Type-II Diabetes Mellitus (T2DM).
Applications of causal modeling are not exclusive to the social and life sciences. In data mining, @cite_22 investigated the causal effect of new features on click-through rates and @cite_8 used doubly robust estimation techniques to determine the efficacy of display advertisements.
{ "cite_N": [ "@cite_22", "@cite_8" ], "mid": [ "1990966354", "2121878111" ], "abstract": [ "Online search systems that display ads continually offer new features that advertisers can use to fine-tune and enhance their ad campaigns. An important question is whether a new feature actually helps advertisers. In an ideal world for statisticians, we would answer this question by running a statistically designed experiment. But that would require randomly choosing a set of advertisers and forcing them to use the feature, which is not realistic. Accordingly, in the real world, new features for advertisers are seldom evaluated with a traditional experimental protocol. Instead, customer service representatives select advertisers who are invited to be among the first to test a new feature (i.e., white-listed), and then each white-listed advertiser chooses whether or not to use the new feature. Neither the customer service representative nor the advertiser chooses at random. This paper addresses the problem of drawing valid inferences from whitelist trials about the effects of new features on advertiser happiness. We are guided by three principles. First, statistical procedures for whitelist trials are likely to be applied in an automated way, so they should be robust to violations of modeling assumptions. Second, standard analysis tools should be preferred over custom-built ones, both for clarity and for robustness. Standard tools have withstood the test of time and have been thoroughly debugged. Finally, it should be easy to compute reliable confidence intervals for the estimator. We review an estimator that has all these attributes, allowing us to make valid inferences about the effects of a new feature on advertiser happiness.", "Display ads proliferate on the web, but are they effective? Or are they irrelevant in light of all the other advertising that people see? 
We describe a way to answer these questions, quickly and accurately, without randomized experiments, surveys, focus groups or expert data analysts. Doubly robust estimation protects against the selection bias that is inherent in observational data, and a nonparametric test that is based on irrelevant outcomes provides further defense. Simulations based on realistic scenarios show that the resulting estimates are more robust to selection bias than traditional alternatives, such as regression modeling or propensity scoring. Moreover, computations are fast enough that all processing, from data retrieval through estimation, testing, validation and report generation, proceeds in an automated pipeline, without anyone needing to see the raw data." ] }
1611.04660
2574301230
Our aging population increasingly suffers from multiple chronic diseases simultaneously, necessitating the comprehensive treatment of these conditions. Finding the optimal set of drugs for a combinatorial set of diseases is a combinatorial pattern exploration problem. Association rule mining is a popular tool for such problems, but the requirement of health care for finding causal, rather than associative, patterns renders association rule mining unsuitable. To address this issue, we propose a novel framework based on the Rubin-Neyman causal model for extracting causal rules from observational data, correcting for a number of common biases. Specifically, given a set of interventions and a set of items that define subpopulations (e.g., diseases), we wish to find all subpopulations in which effective intervention combinations exist and in each such subpopulation, we wish to find all intervention combinations such that dropping any intervention from this combination will reduce the efficacy of the treatment. A key aspect of our framework is the concept of closed intervention sets which extend the concept of quantifying the effect of a single intervention to a set of concurrent interventions. We also evaluated our causal rule mining framework on the Electronic Health Records (EHR) data of a large cohort of patients from Mayo Clinic and showed that the patterns we extracted are sufficiently rich to explain the controversial findings in the medical literature regarding the effect of a class of cholesterol drugs on Type-II Diabetes Mellitus (T2DM).
Extending association rule mining to causal rule mining has also been attempted before @cite_24 @cite_25 @cite_30 . @cite_24 used the odds ratio to identify causal patterns and later extended their technique @cite_30 to handle large data sets. Their technique, however, is not rooted in a causal model and hence offers no protection against computing systematically biased estimates. In their proposed causal decision trees @cite_23 , they used the potential outcome framework, but still did not address correction for various biases, including confounding.
{ "cite_N": [ "@cite_24", "@cite_30", "@cite_25", "@cite_23" ], "mid": [ "1992862710", "1957379630", "", "1901355268" ], "abstract": [ "Discovering causal relationships is the ultimate goal of many scientific explorations. Causal relationships can be identified with controlled experiments, but such experiments are often very expensive and sometimes impossible to conduct. On the other hand, the collection of observational data has increased dramatically in recent decades. Therefore it is desirable to find causal relationships from the data directly. Significant progress has been made in the field of discovering causal relationships using the Causal Bayesian Network (CBN) theory. The applications of CBNs, however, are greatly limited due to the high computational complexity. In another direction, association rule mining has been shown to be an efficient data mining means for relationship discovery. However, although causal relationships imply associations, the reverse does not always hold. In this paper we study how to use an efficient association mining approach to discover potential causal rules in observational data. We make use of the idea of retrospective cohort studies, a widely used approach in medical and social research, to detect causal association rules. In comparison with the constraint-based methods within the CBN paradigm, the proposed approach is faster and is capable of finding a cause consisting of combined variables.", "Randomised controlled trials (RCTs) are the most effective approach to causal discovery, but in many circumstances it is impossible to conduct RCTs. Therefore, observational studies based on passively observed data are widely accepted as an alternative to RCTs. However, in observational studies, prior knowledge is required to generate the hypotheses about the cause-effect relationships to be tested, and hence they can only be applied to problems with available domain knowledge and a handful of variables. 
In practice, many datasets are of high dimensionality, which leaves observational studies out of the opportunities for causal discovery from such a wealth of data sources. In another direction, many efficient data mining methods have been developed to identify associations among variables in large datasets. The problem is that causal relationships imply associations, but the reverse is not always true. However, we can see the synergy between the two paradigms here. Specifically, association rule mining can be used to deal with the high-dimensionality problem, whereas observational studies can be utilised to eliminate noncausal associations. In this article, we propose the concept of causal rules (CRs) and develop an algorithm for mining CRs in large datasets. We use the idea of retrospective cohort studies to detect CRs based on the results of association rule mining. Experiments with both synthetic and real-world datasets have demonstrated the effectiveness and efficiency of CR mining. In comparison with the commonly used causal discovery methods, the proposed approach generally is faster and has better or competitive performance in finding correct or sensible causes. It is also capable of finding a cause consisting of multiple variables—a feature that other causal discovery methods do not possess.", "", "Uncovering causal relationships in data is a major objective of data analytics. Currently, there is a need for scalable and automated methods for causal relationship exploration in data. Classification methods are fast and they could be practical substitutes for finding causal signals in data. However, classification methods are not designed for causal discovery and a classification method may find false causal signals and miss the true ones. In this paper, we develop a causal decision tree (CDT) where nodes have causal interpretations. 
Our method follows a well-established causal inference framework and makes use of a classic statistical test to establish the causal relationship between a predictor variable and the outcome variable. At the same time, by taking the advantages of normal decision trees, a CDT provides a compact graphical representation of the causal relationships, and the construction of a CDT is fast as a result of the divide and conquer strategy employed, making CDTs practical for representing and finding causal signals in large data sets. Experiment results demonstrate that CDTs can identify meaningful causal relationships and the CDT algorithm is scalable." ] }
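The notion of a closed intervention set in the record above, a drug combination that loses efficacy when any single drug is dropped, reduces to a subset check once per-combination effect estimates are available. A toy sketch (the effect table and the `min_gain` margin are illustrative assumptions, not the paper's estimator):

```python
def closed_intervention_sets(effects, min_gain=0.0):
    """Find intervention combinations whose every single-drop subset is worse.

    effects  : dict mapping frozenset of interventions -> estimated causal effect
    min_gain : margin by which the full combination must beat each subset

    A combination is kept only if removing any one intervention lowers the
    estimated effect; subsets with no estimate are treated as ineffective,
    which is a simplification for this sketch.
    """
    closed = []
    for combo, eff in effects.items():
        if not combo:
            continue
        if all(effects.get(combo - {d}, float("-inf")) < eff - min_gain
               for d in combo):
            closed.append(combo)
    return closed
```

For example, a pair whose effect merely matches one of its single drugs is pruned, while a pair that strictly beats both of its singletons is reported.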
1611.04748
2575875755
The millimeter wave (mmWave) bands offer the possibility of orders of magnitude greater throughput for fifth-generation (5G) cellular systems. However, since mmWave signals are highly susceptible to blockage, channel quality on any one mmWave link can be extremely intermittent. This paper implements a novel dual connectivity protocol that enables mobile user equipment devices to maintain physical layer connections to 4G and 5G cells simultaneously. A novel uplink control signaling system combined with a local coordinator enables rapid path switching in the event of failures on any one link. This paper provides the first comprehensive end-to-end evaluation of handover mechanisms in mmWave cellular systems. The simulation framework includes detailed measurement-based channel models to realistically capture spatial dynamics of blocking events, as well as the full details of Medium Access Control, Radio Link Control, and transport protocols. Compared with conventional handover mechanisms, this paper reveals significant benefits of the proposed method under several metrics.
Dual connectivity to different types of cells (e.g., macro and pico cells) has been proposed in Release @math of Long Term Evolution-Advanced (LTE-A) @cite_1 and in @cite_32 . However, these systems were designed for conventional sub-6 GHz frequencies, and the directionality and variability of the channels typical of mmWave frequencies were not addressed. Some other previous works, such as @cite_3 , consider only the bands under @math GHz for the control channel of 5G networks, to provide robustness against blockage and a wider coverage range, but this solution could not provide the high capacities that can be obtained when exploiting mmWave frequencies. The potential of combining legacy and mmWave technologies in outdoor scenarios has also been investigated in @cite_13 , highlighting the significant benefits that a mmWave network achieves with flexible, dynamic support from LTE technologies. Articles @cite_11 @cite_16 propose a multi-connectivity framework as a solution for mobility-related link failures and throughput degradation of cell-edge users, enabling increased reliability with different levels of mobility.
{ "cite_N": [ "@cite_1", "@cite_32", "@cite_3", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "", "2044453454", "", "2289510574", "2510937514", "2499470142" ], "abstract": [ "", "Recently, a new network architecture with split control-plane and user-plane has been proposed and gained a lot of momentum in the standardisation of Long Term Evolution (LTE) Release 12. In this new network architecture, the control-plane, which transmits system information and handles user connectivity, and the user-plane, which manages user data, are split and no longer transmitted necessarily by the same network node. This dual connectivity confers a large flexibility to the system, and allows for a more energy efficient operation and enhanced mobility management. In this paper, we present a detailed description of our dual connectivity framework based on the latest LTE-Advanced enhancements, in which macrocell-assisted (MA) small cells use different channel state information-reference signals (CSI-RS) to differentiate among each other and allow User Equipment (UE) to take adequate measurements for cell (re)selection. Taking into account the limited number of available CSI-RSs, we study the assignment problem of CSI-RSs to MA small cells, analyse CSI-RS collision and confusion issues and present simulation results to demonstrate the flexibility of the proposed network architecture.", "", "Ultra-high reliable communication and improved capacity are some of the major requirements of the 5th generation (5G) mobile and wireless networks. Achieving the aforementioned requirements necessitates avoiding radio link failures and the service interruption that occurs during the failures and their re-establishment procedures. Moreover, the latency associated with packet forwarding in classical handover procedures should be resolved. 
This paper proposes a multi-connectivity concept for a cloud radio access network as a solution for mobility related link failures and throughput degradation of cell-edge users. The concept relies on the fact that the transmissions from co-operating cells are co-ordinated for both data and control signals. Latency incurred due to classical handover procedures will be inherently resolved in the proposed multi-connectivity scheme. Simulation results are shown for a stand alone ultra dense small cells that use the same carrier frequency. It is shown that the number of mobility failures can considerably be decreased without a loss in the throughput performance gain of cell-edge users.", "5G is expected to provide a unified platform where a number of different frequency bands and technologies are strategically integrated and combined. In this paper we investigate the potential of combining micro-wave (microWave) and millimeter-wave (mmWave) technologies in outdoor scenarios. We first envision the design and architecture of a novel 5G microWave mmWave Heterogeneous Network (HetNet) where the mmWave backhaul is integrated. Next, we discuss a Service-Driven Dynamic Resource Radio Management system for the proposed architecture and propose a Multi-Layer Dynamic Transmission Scheme, which enables cooperation between different network slices, increasing the degrees of freedom of the overall system. Finally, we present a preliminary analytical and experimental study of the performance of the proposed 5G microWave mmWave HetNet, highlighting the significant benefits that a mmWave network achieves with flexible, dynamic support from microWave technologies.", "The 5th generation of mobile networks is envisioned to unify different access types under one system in order to enable efficient and performant operations. 
We propose a radio network architecture for tight integration of multiple radio access technologies supporting traffic steering, link selection, and aggregation of traffic flows from and to different sources. This enables the radio network architecture to support better throughput, and increased reliability with different levels of mobility. Specifically, we propose a common user plane and control plane across different radio technologies utilizing similar principles, such that the joint operation of radio technologies can be optimized." ] }
1611.04748
2575875755
The millimeter wave (mmWave) bands offer the possibility of orders of magnitude greater throughput for fifth-generation (5G) cellular systems. However, since mmWave signals are highly susceptible to blockage, channel quality on any one mmWave link can be extremely intermittent. This paper implements a novel dual connectivity protocol that enables mobile user equipment devices to maintain physical layer connections to 4G and 5G cells simultaneously. A novel uplink control signaling system combined with a local coordinator enables rapid path switching in the event of failures on any one link. This paper provides the first comprehensive end-to-end evaluation of handover mechanisms in mmWave cellular systems. The simulation framework includes detailed measurement-based channel models to realistically capture spatial dynamics of blocking events, as well as the full details of Medium Access Control, Radio Link Control, and transport protocols. Compared with conventional handover mechanisms, this paper reveals significant benefits of the proposed method under several metrics.
Although the literature on handover in more traditional sub-6 GHz heterogeneous networks is quite mature, papers on handover management for mmWave 5G cellular are very recent, and research in this field has just started. The survey in @cite_9 presents multiple vertical handover decision algorithms that are essential for heterogeneous wireless networks, while article @cite_28 investigates the management of the handover process between macro, femto and pico cells, proposing a theoretical model to characterize the performance of a mobile user in heterogeneous scenarios as a function of various handover parameters. However, these works focus on low-frequency legacy cellular systems. When dealing with mmWaves, frequent handover, even for fixed UEs, is a potential drawback that needs to be addressed. In @cite_5 , the handover rate in 5G systems is investigated, and in @cite_36 a scheme for handover management in high-speed railways is proposed that employs the received signal quality from measurement reports. In @cite_19 @cite_41 , the impact of user mobility in multi-tier heterogeneous networks is analyzed and a framework is proposed to solve the dynamic admission and mobile association problem in a wireless system with mobility. Finally, the authors of @cite_44 present an architecture for mobility, handover and routing management.
{ "cite_N": [ "@cite_28", "@cite_36", "@cite_9", "@cite_41", "@cite_44", "@cite_19", "@cite_5" ], "mid": [ "2295761692", "1967508151", "2138856636", "2121958329", "2066940939", "1973938593", "1986220933" ], "abstract": [ "Next generation cellular systems are expected to entail a wide variety of wireless coverage zones, with cells of different sizes and capacities that can overlap in space and share the transmission resources. In this scenario, which is referred to as Heterogeneous Networks (HetNets), a fundamental challenge is the management of the handover process between macro, femto and pico cells. To limit the number of handovers and the signaling between the cells, it will hence be crucial to manage the user’s mobility considering the context parameters, such as cells size, traffic loads, and user velocity. In this paper, we propose a theoretical model to characterize the performance of a mobile user in a HetNet scenario as a function of the user’s mobility, the power profile of the neighboring cells, the handover parameters, and the traffic load of the different cells. We propose a Markov-based framework to model the handover process for the mobile user, and derive an optimal context-dependent handover criterion. The mathematical model is validated by means of simulations, comparing the performance of our strategy with conventional handover optimization techniques in different scenarios. Finally, we show the impact of the handover regulation on the users performance and how it is possible to improve the users capacity exploiting context information.", "Being a promising technology for fifth-generation (5G) communication systems, a novel railway communication system based on control user (C U) plane split heterogeneous networks can provide a high-quality broadband wireless service for passengers in high-speed railways with higher system capacity, better transmission reliability, and less cochannel interference. 
However, due to its special architecture where the C-plane and the U-plane must be split and supported by a macro Evolved Node B (eNB) and a phantom eNB, respectively, it would suffer more serious handover problem, particularly in intermacrocell handover, which directly degrades its applicability and availability in high-speed railways. Moreover, no technical specification has been released about this network architecture. Therefore, this paper focuses on redesigning and analyzing technical details and handover procedures based on Long-Term Evolution (LTE) specifications to guarantee the proposed system's practicability and generality and its analytical tractability. To resolve the handover problem, this paper proposes a handover trigger decision scheme based on GM(1, @math ) model of the grey system theory. By this scheme, the received signal quality from the @math th measurement report can be predicted from the @math th measurement period, and the predicted values can be then utilized to make the handover trigger decision. The simulation results show that the proposed scheme is capable of triggering handover in advance effectively and of enhancing handover success probability remarkably.", "Vertical handover decision (VHD) algorithms are essential components of the architecture of the forthcoming Fourth Generation (4G) heterogeneous wireless networks. These algorithms need to be designed to provide the required Quality of Service (QoS) to a wide range of applications while allowing seamless roaming among a multitude of access network technologies. In this paper, we present a comprehensive survey of the VHD algorithms designed to satisfy these requirements. To offer a systematic comparison, we categorize the algorithms into four groups based on the main handover decision criterion used. 
Also, to evaluate tradeoffs between their complexity of implementation and efficiency, we discuss three representative VHD algorithms in each group.", "In this paper, we deal with a dynamic and stochastic admission control and mobile association problem in an heterogeneous wireless network. We extend the usual problem by adding mobility features described by a Markov Modulated Poisson Process. The aim is to optimize the average performance of the system. This dynamic control problem is modeled and solved using a Semi Markov Decision Process (SMDP) framework. We then assess the impact of the mobility and show that (i) our network centric approach outperforms a simple user centric algorithm and (ii) mobility improves the performance of the system when optimal policy of the problem is used.", "The tremendous growth in wireless Internet use is showing no signs of slowing down. Existing cellular networks are starting to be insufficient in meeting this demand, in part due to their inflexible and expensive equipment as well as complex and non-agile control plane. Software-defined networking is emerging as a natural solution for next generation cellular networks as it enables further network function virtualization opportunities and network programmability. In this article, we advocate an all-SDN network architecture with hierarchical network control capabilities to allow for different grades of performance and complexity in offering core network services and provide service differentiation for 5G systems. As a showcase of this architecture, we introduce a unified approach to mobility, handoff, and routing management and offer connectivity management as a service (CMaaS). CMaaS is offered to application developers and over-the-top service providers to provide a range of options in protecting their flows against subscriber mobility at different price levels.", "This paper analyzes the impact of user mobility in multi-tier heterogeneous networks. 
We begin by obtaining the handoff rate for a mobile user in an irregular cellular network with the access point locations modeled as a homogeneous Poisson point process. The received signal-to-interference-ratio (SIR) distribution along with a chosen SIR threshold is then used to obtain the probability of coverage. To capture potential connection failures due to mobility, we assume that a fraction of handoffs result in such failures. Considering a multi-tier network with orthogonal spectrum allocation among tiers and the maximum biased average received power as the tier association metric, we derive the probability of coverage for two cases: 1) the user is stationary (i.e., handoffs do not occur, or the system is not sensitive to handoffs); 2) the user is mobile, and the system is sensitive to handoffs. We derive the optimal bias factors to maximize the coverage. We show that when the user is mobile, and the network is sensitive to handoffs, both the optimum tier association and the probability of coverage depend on the user's speed; a speed-dependent bias factor can then adjust the tier association to effectively improve the coverage, and hence system performance, in a fully-loaded network.", "Millimeterwave band is a promising candidate for 5th generation wireless access technology to deliver peak and cell-edge data rates of the order of 10 Gbps and 100 Mbps, respectively, and to meet the future capacity demands. The main advantages of the millimeterwave band are availability of large blocks of contiguous bandwidth and the opportunity of using large antenna arrays composed of very small antenna elements to provide large antenna gains. The line-of-sight operation requirement in this band, due to its unique propagation characteristics, makes it necessary to build the network with enough redundancy of access points and the users may have to frequently handoff from one access point to another whenever its radio link is disrupted by obstacles. 
In this paper we investigate the handoff rate in such an access network. Based on analysis of various deployment scenarios, we observe that, typical average handoff interval is several seconds, although for certain types of user actions the average handoff interval can be as low as 0.75 sec." ] }
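The rapid path switching evaluated in the two records above amounts to comparing per-link quality reports against an outage threshold and a hysteresis margin, so that the UE neither clings to a blocked mmWave link nor ping-pongs between cells. A toy decision rule (the dB thresholds are illustrative, not the paper's parameters):

```python
def select_path(current, sinr_db, hysteresis_db=3.0, outage_db=-5.0):
    """Pick the serving link given per-link SINR reports (in dB).

    current       : name of the currently serving link
    sinr_db       : dict mapping link name -> latest SINR report
    hysteresis_db : a candidate must beat the serving link by this margin
    outage_db     : below this the serving link is declared failed

    Returns the (possibly unchanged) serving link.
    """
    best = max(sinr_db, key=sinr_db.get)
    if sinr_db.get(current, float("-inf")) <= outage_db:
        return best                      # serving link failed: switch at once
    if sinr_db[best] > sinr_db[current] + hysteresis_db:
        return best                      # candidate clearly better: hand over
    return current                       # otherwise stay to avoid ping-pong
```

Real systems (e.g., LTE's A3 event) add a time-to-trigger on top of the margin; this sketch keeps only the two conditions the prose describes.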
1611.04666
2949420106
In recent years, interest in recommender research has shifted from explicit feedback towards implicit feedback data. A diversity of complex models has been proposed for a wide variety of applications. Despite this, learning from implicit feedback is still computationally challenging. So far, most work relies on stochastic gradient descent (SGD) solvers which are easy to derive, but in practice challenging to apply, especially for tasks with many items. For the simple matrix factorization model, an efficient coordinate descent (CD) solver has been previously proposed. However, efficient CD approaches have not been derived for more complex models. In this paper, we provide a new framework for deriving efficient CD algorithms for complex recommender models. We identify and introduce the property of k-separable models. We show that k-separability is a sufficient property to allow efficient optimization of implicit recommender problems with CD. We illustrate this framework on a variety of state-of-the-art models including factorization machines and Tucker decomposition. To summarize, our work provides the theory and building blocks to derive efficient implicit CD algorithms for complex recommender models.
Our discussion so far has focused on learning matrix factorization models from implicit data. Shifting from simple matrix factorization to more complex factorization models has shown large success in many implicit recommendation problems @cite_12 @cite_3 @cite_18 @cite_6 @cite_7 @cite_25 . However, work on complex factorization models relies almost exclusively on SGD optimization using the generic BPR framework. Our work provides the theory as well as a practical framework for deriving CD learners for such complex models. Like CD for MF, our generic algorithm is able to optimize over all non-consumed items without explicitly iterating over them. To summarize, our paper enables researchers and practitioners to apply CD in their work and gives them a choice between the advantages of BPR and CD.

{ "cite_N": [ "@cite_18", "@cite_7", "@cite_3", "@cite_6", "@cite_25", "@cite_12" ], "mid": [ "2040107208", "153313452", "2268318962", "1546409232", "2010187764", "2102982709" ], "abstract": [ "Many websites provide commenting facilities for users to express their opinions or sentiments with regards to content items, such as, videos, news stories, blog posts, etc. Previous studies have shown that user comments contain valuable information that can provide insight on Web documents and may be utilized for various tasks. This work presents a model that predicts, for a given user, suitable news stories for commenting. The model achieves encouraging results regarding the ability to connect users with stories they are likely to comment on. This provides grounds for personalized recommendations of stories to users who may want to take part in their discussion. We combine a content-based approach with a collaborative-filtering approach (utilizing users' co-commenting patterns) in a latent factor modeling framework. We experiment with several variations of the model's loss function in order to adjust it to the problem domain. We evaluate the results on two datasets and show that employing co-commenting patterns improves upon using content features alone, even with as few as two available comments per story. Finally, we try to incorporate available social network data into the model. Interestingly, the social data does not lead to substantial performance gains, suggesting that the value of social data for this task is quite negligible.", "One-class collaborative filtering or collaborative ranking with implicit feedback has been steadily receiving more attention, mostly due to the \"one-class\" characteristics of data in various services, e.g., \"like\" in Facebook and \"bought\" in Amazon. 
Previous works for solving this problem include pointwise regression methods based on absolute rating assumptions and pairwise ranking methods with relative score assumptions, where the latter was empirically found performing much better because it models users' ranking-related preferences more directly. However, the two fundamental assumptions made in the pairwise ranking methods, (1) individual pairwise preference over two items and (2) independence between two users, may not always hold. As a response, we propose a new and improved assumption, group Bayesian personalized ranking (GBPR), via introducing richer interactions among users. In particular, we introduce group preference, to relax the aforementioned individual and independence assumptions. We then design a novel algorithm correspondingly, which can recommend items more accurately as shown by various ranking-oriented evaluation metrics on four real-world datasets in our experiments.", "Modern recommender systems model people and items by discovering or teasing apart' the underlying dimensions that encode the properties of items and users' preferences toward them. Critically, such dimensions are uncovered based on user feedback, often in implicit form (such as purchase histories, browsing logs, etc.); in addition, some recommender systems make use of side information, such as product attributes, temporal information, or review text. However one important feature that is typically ignored by existing personalized recommendation and ranking methods is the visual appearance of the items being considered. In this paper we propose a scalable factorization model to incorporate visual signals into predictors of people's opinions, which we apply to a selection of large, real-world datasets. 
We make use of visual features extracted from product images using (pre-trained) deep networks, on top of which we learn an additional layer that uncovers the visual dimensions that best explain the variation in people's feedback. This not only leads to significantly more accurate personalized ranking methods, but also helps to alleviate cold start issues, and qualitatively to analyze the visual dimensions that influence people's opinions.", "Personalized point-of-interest (POI) recommendation is a significant task in location-based social networks (LBSNs) as it can help provide better user experience as well as enable third-party services, e.g., launching advertisements. To provide a good recommendation, various research has been conducted in the literature. However, pervious efforts mainly consider the \"check-ins\" in a whole and omit their temporal relation. They can only recommend POI globally and cannot know where a user would like to go tomorrow or in the next few days. In this paper, we consider the task of successive personalized POI recommendation in LBSNs, which is a much harder task than standard personalized POI recommendation or prediction. To solve this task, we observe two prominent properties in the check-in sequence: personalized Markov chain and region localization. Hence, we propose a novel matrix factorization method, namely FPMC-LR, to embed the personalized Markov chains and the localized regions. Our proposed FPMC-LR not only exploits the personalized Markov chain in the check-in sequence, but also takes into account users' movement constraint, i.e., moving around a localized region. More importantly, utilizing the information of localized regions, we not only reduce the computation cost largely, but also discard the noisy information to boost recommendation. 
Results on two real-world LBSNs datasets demonstrate the merits of our proposed FPMC-LR.", "Among different hybrid recommendation techniques, network-based entity recommendation methods, which utilize user or item relationship information, are beginning to attract increasing attention recently. Most of the previous studies in this category only consider a single relationship type, such as friendships in a social network. In many scenarios, the entity recommendation problem exists in a heterogeneous information network environment. Different types of relationships can be potentially used to improve the recommendation quality. In this paper, we study the entity recommendation problem in heterogeneous information networks. Specifically, we propose to combine heterogeneous relationship information for each user differently and aim to provide high-quality personalized recommendation results using user implicit feedback data and personalized recommendation models. In order to take full advantage of the relationship heterogeneity in information networks, we first introduce meta-path-based latent features to represent the connectivity between users and items along different types of paths. We then define recommendation models at both global and personalized levels and use Bayesian ranking optimization techniques to estimate the proposed models. Empirical studies show that our approaches outperform several widely employed or the state-of-the-art entity recommendation techniques.", "Cold-start scenarios in recommender systems are situations in which no prior events, like ratings or clicks, are known for certain users or items. To compute predictions in such cases, additional information about users (user attributes, e.g. gender, age, geographical location, occupation) and items (item attributes, e.g. genres, product categories, keywords) must be used. We describe a method that maps such entity (e.g. 
user or item) attributes to the latent features of a matrix (or higher-dimensional) factorization model. With such mappings, the factors of a MF model trained by standard techniques can be applied to the new-user and the new-item problem, while retaining its advantages, in particular speed and predictive accuracy. We use the mapping concept to construct an attribute-aware matrix factorization model for item recommendation from implicit, positive-only feedback. Experiments on the new-item problem show that this approach provides good predictive accuracy, while the prediction time only grows by a constant factor." ] }
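The key to efficient implicit-feedback optimization, as the record above notes, is covering all non-consumed items without iterating over them. The classic device (from implicit ALS in the style of Hu et al., which the k-separable CD framework generalizes) is a precomputed item Gramian shared across users; a NumPy sketch with illustrative hyperparameters:

```python
import numpy as np

def update_user_factor(V, consumed, VtV, alpha=40.0, lam=0.1):
    """Solve one user's factor for implicit-feedback matrix factorization.

    V        : (n_items, k) item factor matrix
    consumed : indices of items the user interacted with
    VtV      : precomputed Gramian V.T @ V, shared by every user

    The Gramian already accounts for ALL items with baseline weight 1,
    so the loop below touches only the consumed items: the per-user cost
    is O(|consumed| * k^2) instead of O(n_items * k^2).
    """
    k = V.shape[1]
    A = VtV + lam * np.eye(k)
    b = np.zeros(k)
    for i in consumed:
        v = V[i]
        A = A + alpha * np.outer(v, v)   # extra confidence on observed entries
        b = b + (1.0 + alpha) * v        # target preference p_ui = 1
    return np.linalg.solve(A, b)
```

The resulting factor scores consumed items well above the rest, since unobserved entries are pulled toward zero by the baseline weight.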
1611.04230
2952138241
We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents and show that it achieves performance better than or comparable to state-of-the-art. Our model has the additional advantage of being very interpretable, since it allows visualization of its predictions broken up by abstract features such as information content, salience and novelty. Another novel contribution of our work is abstractive training of our extractive model that can train on human generated reference summaries alone, eliminating the need for sentence-level extractive labels.
The work of @cite_4 also uses an encoder-decoder approach, but is fully abstractive in the sense that it generates its own summaries at test time. Our abstractive trainer comes closest to their work, but generates only sentence-extraction probabilities at test time. We also include comparison numbers with this work in the following section.
{ "cite_N": [ "@cite_4" ], "mid": [ "2963929190" ], "abstract": [ "In this work, we model abstractive text summarization using Attentional EncoderDecoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling key-words, capturing the hierarchy of sentence-toword structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research." ] }
1611.03942
2562391438
The problem of anomaly detection has been studied for a long time, and many Network Analysis techniques have been proposed as solutions. Although some results appear to be quite promising, no method is clearly superior to the rest. In this paper, we particularly consider anomaly detection in the Bitcoin transaction network. Our goal is to detect which users and transactions are the most suspicious; in this case, anomalous behavior is a proxy for suspicious behavior. To this end, we use the laws of power degree and densification and the local outlier factor (LOF) method (which is preceded by the k-means clustering method) on two graphs generated by the Bitcoin transaction network: one graph has users as nodes, and the other has transactions as nodes. We remark that the methods used here can be applied to any type of setting with an inherent graph structure, including, but not limited to, computer networks, telecommunications networks, auction networks, security networks, social networks, Web networks, or any financial networks. We use the Bitcoin transaction network in this paper due to the availability, size, and attractiveness of the data set.
@cite_6 make use of clustering techniques to detect anomalies. The main idea is that such methods should group normal user activities together and separate them from abnormal ones. They use k-means clustering, self-organizing maps, and the expectation-maximization algorithm to develop methods for the detection process. Motivated by this, we expect the k-means clustering method to be useful on the Bitcoin dataset. However, we do not use k-means as a detection method in its own right, because clustering is fundamentally a grouping technique. Instead, we use it as a baseline model: since we expect outliers (i.e., anomalies) to lie far from the centroids found by @math -means, @math -means can be used to assess our main method, and it is also helpful for visualization purposes. Most importantly, without @math -means to find the centroids, we cannot calculate the LOF indices in the next part, which define our notion of anomalies; the connection between the @math -means clustering method and the LOF method is discussed in more detail in the Methods section.
{ "cite_N": [ "@cite_6" ], "mid": [ "2020362899" ], "abstract": [ "K-means is a widely used partitional clustering method. While there are considerable research efforts to characterize the key features of K-means clustering, further investigation is needed to reveal whether and how the data distributions can have the impact on the performance of K-means clustering. Indeed, in this paper, we revisit the K-means clustering problem by answering three questions. First, how the \"true\" cluster sizes can make impact on the performance of K-means clustering? Second, is the entropy an algorithm-independent validation measure for K-means clustering? Finally, what is the distribution of the clustering results by K-means? To that end, we first illustrate that K-means tends to generate the clusters with the relatively uniform distribution on the cluster sizes. In addition, we show that the entropy measure, an external clustering validation measure, has the favorite on the clustering algorithms which tend to reduce high variation on the cluster sizes. Finally, our experimental results indicate that K-means tends to produce the clusters in which the variation of the cluster sizes, as measured by the Coefficient of Variation(CV), is in a specific range, approximately from 0.3 to 1.0." ] }
1611.03942
2562391438
The problem of anomaly detection has been studied for a long time, and many Network Analysis techniques have been proposed as solutions. Although some results appear to be quite promising, no method is clearly superior to the rest. In this paper, we particularly consider anomaly detection in the Bitcoin transaction network. Our goal is to detect which users and transactions are the most suspicious; in this case, anomalous behavior is a proxy for suspicious behavior. To this end, we use the laws of power degree and densification and the local outlier factor (LOF) method (which is preceded by the k-means clustering method) on two graphs generated by the Bitcoin transaction network: one graph has users as nodes, and the other has transactions as nodes. We remark that the methods used here can be applied to any type of setting with an inherent graph structure, including, but not limited to, computer networks, telecommunications networks, auction networks, security networks, social networks, Web networks, or any financial networks. We use the Bitcoin transaction network in this paper due to the availability, size, and attractiveness of the data set.
@cite_2 propose the Local Outlier Factor (LOF) method to detect outliers in a dataset. This method relies on the concept of local density, with locality defined by the @math nearest neighbors and density estimated from distances. Essentially, they compare the local density of a point (a node) to that of its neighbors, identifying regions of similar density as well as points whose density is substantially lower than that of their neighbors; these points are then labeled as outliers. We find this method suitable for our study because outliers can be interpreted as anomalies and the method requires no labeled data. Thus, we use it as our main methodology for detecting anomalies in the Bitcoin network, and we use the k-means clustering results discussed above to verify our findings.
{ "cite_N": [ "@cite_2" ], "mid": [ "2144182447" ], "abstract": [ "For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms we show that our approach of finding local outliers can be practical." ] }
1611.03895
2951371871
Internet eXchange Points (IXP) are critical components of the Internet infrastructure that affect its performance, evolution, security and economics. In this work, we introduce techniques to augment the well-known traceroute tool with the capability of identifying if and where exactly IXPs are crossed in end-to-end paths. Knowing this information can help end-users have more transparency over how their traffic flows in the Internet. Our tool, called traIXroute, exploits data from the PeeringDB (PDB) and the Packet Clearing House (PCH) about IXP IP addresses of BGP routers, IXP members, and IXP prefixes. We show that the used data are both rich, i.e., we find 12,716 IP addresses of BGP routers in 460 IXPs, and mostly accurate, i.e., our validation shows 92-93% accuracy. In addition, 78.2% of the detected IXPs in our data are based on multiple diverse evidence and therefore help have higher confidence on the detected IXPs than when relying solely on IXP prefixes. To demonstrate the utility of our tool, we use it to show that one out of five paths in our data cross an IXP and that paths do not normally cross more than a single IXP, as it is expected based on the valley-free model about Internet policies. Furthermore, although the top IXPs both in terms of paths and members are located in Europe, US IXPs attract many more paths than their number of members indicates.
Previous studies have examined the problem of mapping traceroute paths to AS-level paths @cite_5 , @cite_4 . Mapping IP addresses to ASes is not straightforward because routers can reply with source IP addresses numbered from a third-party AS. These studies ignore hops with IXP IP addresses, which are used to number BGP router interfaces connected to the IXP subnet and are hard to attribute to a specific AS.
{ "cite_N": [ "@cite_5", "@cite_4" ], "mid": [ "2164511281", "2108673686" ], "abstract": [ "An accurate Internet topology graph is important in many areas of networking, from deciding ISP business relationships to diagnosing network anomalies. Most Internet mapping efforts have derived the network structure, at the level of interconnected autonomous systems (ASes), from a limited number of either BGP- or traceroute-based data sources. While techniques for charting the topology continue to improve, the growth of the number of vantage points is significantly outpaced by the rapid growth of the Internet. In this paper, we argue that a promising approach to revealing the hidden areas of the Internet topology is through active measurement from an observation platform that scales with the growing Internet. By leveraging measurements performed by an extension to a popular P2P system, we show that this approach indeed exposes significant new topological information. Based on traceroute measurements from more than 992,000 IPs in over 3,700 ASes distributed across the Internet hierarchy, our proposed heuristics identify 23,914 new AS links not visible in the publicly-available BGP data - 12.86% more customer-provider links and 40.99% more peering links than previously reported. We validate our heuristics using data from a tier-1 ISP and show that they correctly filter out all false links introduced by public IP-to-AS mapping. We have made the identified set of links and their inferred relationships publicly available", "Traceroute is widely used to detect routing problems, characterize end-to-end paths, and discover the Internet topology. Providing an accurate list of the Autonomous Systems (ASes) along the forwarding path would make traceroute even more valuable to researchers and network operators. However, conventional approaches to mapping traceroute hops to AS numbers are not accurate enough. Address registries are often incomplete and out-of-date. 
BGP routing tables provide a better IP-to-AS mapping, though this approach has significant limitations as well. Based on our extensive measurements, about 10% of the traceroute paths have one or more hops that do not map to a unique AS number, and around 15% of the traceroute AS paths have an AS loop. In addition, some traceroute AS paths have extra or missing AS hops due to Internet eXchange Points, sibling ASes managed by the same institution, and ASes that do not advertise routes to their infrastructure. Using the BGP tables as a starting point, we propose techniques for improving the IP-to-AS mapping as an important step toward an AS-level traceroute tool. Our algorithms draw on analysis of traceroute probes, reverse DNS lookups, BGP routing tables, and BGP update messages collected from multiple locations. We also discuss how the improved IP-to-AS mapping allows us to home in on cases where the BGP and traceroute AS paths differ for legitimate reasons." ] }
1611.03895
2951371871
Internet eXchange Points (IXP) are critical components of the Internet infrastructure that affect its performance, evolution, security and economics. In this work, we introduce techniques to augment the well-known traceroute tool with the capability of identifying if and where exactly IXPs are crossed in end-to-end paths. Knowing this information can help end-users have more transparency over how their traffic flows in the Internet. Our tool, called traIXroute, exploits data from the PeeringDB (PDB) and the Packet Clearing House (PCH) about IXP IP addresses of BGP routers, IXP members, and IXP prefixes. We show that the used data are both rich, i.e., we find 12,716 IP addresses of BGP routers in 460 IXPs, and mostly accurate, i.e., our validation shows 92-93% accuracy. In addition, 78.2% of the detected IXPs in our data are based on multiple diverse evidence and therefore help have higher confidence on the detected IXPs than when relying solely on IXP prefixes. To demonstrate the utility of our tool, we use it to show that one out of five paths in our data cross an IXP and that paths do not normally cross more than a single IXP, as it is expected based on the valley-free model about Internet policies. Furthermore, although the top IXPs both in terms of paths and members are located in Europe, US IXPs attract many more paths than their number of members indicates.
Besides, a group of previous studies, starting with Xu et al. @cite_14 and then followed by He et al. @cite_12 and Augustin et al. @cite_19 , focus on inferring participating ASes and peerings at IXPs from targeted traceroute measurements. Compared to these studies, our goal is different: we build a general-purpose traceroute tool, while they aim at discovering as many peering links as possible. The basic methodology, developed in @cite_14 and then significantly extended in @cite_12 and @cite_19 , detects IXPs based on assigned IP address prefixes and uses various heuristics to infer peering ASes. The seminal work of Augustin et al. @cite_19 also exploited data for BGP routers at IXPs, but by querying 1.1K BGP Looking Glass servers, which had significant processing cost. In contrast, we extract the corresponding data from PDB and PCH, with low processing cost, and show that they are both rich and mostly accurate.
{ "cite_N": [ "@cite_19", "@cite_14", "@cite_12" ], "mid": [ "2295430786", "", "2123649205" ], "abstract": [ "Internet exchange points (IXPs) are an important ingredient of the Internet AS-level ecosystem - a logical fabric of the Internet made up of about 30,000 ASes and their mutual business relationships whose primary purpose is to control and manage the flow of traffic. Despite the IXPs' critical role in this fabric, little is known about them in terms of their peering matrices (i.e., who peers with whom at which IXP) and corresponding traffic matrices (i.e., how much traffic do the different ASes that peer at an IXP exchange with one another). In this paper, we report on an Internet-wide traceroute study that was specifically designed to shed light on the unknown IXP-specific peering matrices and involves targeted traceroutes from publicly available and geographically dispersed vantage points. Based on our method, we were able to discover and validate the existence of about 44K IXP-specific peering links - nearly 18K more links than were previously known. In the process, we also classified all known IXPs depending on the type of information required to detect them. Moreover, in view of the currently used inferred AS-level maps of the Internet that are known to miss a significant portion of the actual AS relationships of the peer-to-peer type, our study provides a new method for augmenting these maps with IXP-related peering links in a systematic and informed manner.", "", "The topology of the Internet at the autonomous system (AS) level is not yet fully discovered despite significant research activity. The community still does not know how many links are missing, where these links are and finally, whether the missing links will change our conceptual model of the Internet topology. An accurate and complete model of the topology would be important for protocol design, performance evaluation and analyses. 
The goal of our work is to develop methodologies and tools to identify and validate such missing links between ASes. In this work, we develop several methods and identify a significant number of missing links, particularly of the peer-to-peer type. Interestingly, most of the missing AS links that we find exist as peer-to-peer links at the Internet exchange points (IXPs). First, in more detail, we provide a large-scale comprehensive synthesis of the available sources of information. We cross-validate and compare BGP routing tables, Internet routing registries, and traceroute data, while we extract significant new information from the less-studied Internet exchange points (IXPs). We identify 40% more edges and approximately 300% more peer-to-peer edges compared to commonly used data sets. All of these edges have been verified by either BGP tables or traceroute. Second, we identify properties of the new edges and quantify their effects on important topological properties. Given the new peer-to-peer edges, we find that for some ASes more than 50% of their paths stop going through their ISPs assuming policy-aware routing. A surprising observation is that the degree of an AS may be a poor indicator of which ASes it will peer with." ] }
1611.03895
2951371871
Internet eXchange Points (IXP) are critical components of the Internet infrastructure that affect its performance, evolution, security and economics. In this work, we introduce techniques to augment the well-known traceroute tool with the capability of identifying if and where exactly IXPs are crossed in end-to-end paths. Knowing this information can help end-users have more transparency over how their traffic flows in the Internet. Our tool, called traIXroute, exploits data from the PeeringDB (PDB) and the Packet Clearing House (PCH) about IXP IP addresses of BGP routers, IXP members, and IXP prefixes. We show that the used data are both rich, i.e., we find 12,716 IP addresses of BGP routers in 460 IXPs, and mostly accurate, i.e., our validation shows 92-93% accuracy. In addition, 78.2% of the detected IXPs in our data are based on multiple diverse evidence and therefore help have higher confidence on the detected IXPs than when relying solely on IXP prefixes. To demonstrate the utility of our tool, we use it to show that one out of five paths in our data cross an IXP and that paths do not normally cross more than a single IXP, as it is expected based on the valley-free model about Internet policies. Furthermore, although the top IXPs both in terms of paths and members are located in Europe, US IXPs attract many more paths than their number of members indicates.
Recently, Giotsas et al. @cite_10 introduced techniques to identify the physical facility where ASes interconnect, using targeted traceroute measurements and a combination of publicly available facility- and IXP-based information.
{ "cite_N": [ "@cite_10" ], "mid": [ "2530522083" ], "abstract": [ "Annotating Internet interconnections with robust physical coordinates at the level of a building facilitates network management including interdomain troubleshooting, but also has practical value for helping to locate points of attacks, congestion, or instability on the Internet. But, like most other aspects of Internet interconnection, its geophysical locus is generally not public; the facility used for a given link must be inferred to construct a macroscopic map of peering. We develop a methodology, called constrained facility search, to infer the physical interconnection facility where an interconnection occurs among all possible candidates. We rely on publicly available data about the presence of networks at different facilities, and execute traceroute measurements from more than 8,500 available measurement servers scattered around the world to identify the technical approach used to establish an interconnection. A key insight of our method is that inference of the technical approach for an interconnection sufficiently constrains the number of candidate facilities such that it is often possible to identify the specific facility where a given interconnection occurs. Validation via private communication with operators confirms the accuracy of our method, which outperforms heuristics based on naming schemes and IP geolocation. Our study also reveals the multiple roles that routers play at interconnection facilities; in many cases the same router implements both private interconnections and public peerings, in some cases via multiple Internet exchange points. Our study also sheds light on peering engineering strategies used by different types of networks around the globe." ] }
1611.04324
2553665475
We give an overview of new and existing cut- and flow-based ILP formulations for the two-stage stochastic Steiner tree problem and compare the strength of the LP relaxations.
Although the STP admits constant-factor approximations, the stochastic problems are harder to approximate. @cite_26 showed that the group Steiner tree problem, which is @math -hard to approximate, can be reduced to the stochastic shortest path problem (a special case of the (r)SSTP). Nevertheless, stochastic versions of the STP have mostly been investigated in the literature with respect to approximation algorithms. Due to the inapproximability results, restricted versions have been considered to obtain approximation algorithms, e.g., by introducing a fixed and/or uniform inflation factor or a global terminal (a vertex that is a terminal in all scenarios). Moreover, different models of scenario representation are used. Here, we concentrate on the finite polynomial scenario model, where the random variables of the stochastic problems are assumed to have finite support. Other publications consider the black-box oracle model. For an overview of these concepts see, e.g., @cite_15 .
{ "cite_N": [ "@cite_15", "@cite_26" ], "mid": [ "168538608", "2112058494" ], "abstract": [ "This issue’s column is written by guest columnists, David Shmoys and Chaitanya Swamy. Iam delighted that they agreed to write this timely column on the topic related to stochasticoptimization that has received much attention recently. Their column introduces the readerto several recent results and provides references for further readings.Samir Khuller", "We study two-stage, finite-scenario stochastic versions of several combinatorial optimization problems, and provide nearly tight approximation algorithms for them. Our problems range from the graph-theoretic (shortest path, vertex cover, facility location) to set-theoretic (set cover, bin packing), and contain representatives with different approximation ratios.The approximation ratio of the stochastic variant of a typical problem is found to be of the same order of magnitude as its deterministic counterpart. Furthermore, we show that common techniques for designing approximation algorithms such as LP rounding, the primal-dual method, and the greedy algorithm, can be adapted to obtain these results." ] }
1611.04324
2553665475
We give an overview of new and existing cut- and flow-based ILP formulations for the two-stage stochastic Steiner tree problem and compare the strength of the LP relaxations.
@cite_3 consider the SSTP with @math inflation factors and a global terminal and present a 40-approximation. @cite_29 consider the problem with a uniform, fixed inflation factor but without a global terminal and describe a constant-factor approximation.
{ "cite_N": [ "@cite_29", "@cite_3" ], "mid": [ "2118639405", "2121638441" ], "abstract": [ "We consider the stochastic Steiner forest problem: suppose we were given a collection of Steiner forest instances, and were guaranteed that a random one of these instances would appear tomorrow; moreover, the cost of edges tomorrow will be λ times the cost of edges today. Which edges should we buy today so that we can extend it to a solution for the instance arriving tomorrow, to minimize the expected total cost? While very general results have been developed for many problems in stochastic discrete optimization over the past years, the approximation status of the stochastic Steiner Forest problem has remained open, with previous works yielding constant-factor approximations only for special cases. We resolve the status of this problem by giving a constant-factor primal-dual based approximation algorithm.", "We study the Steiner tree problem and the single-cable single-sink network design problem under a two-stage stochastic model with recourse and finitely many scenarios. In these models, some edges are purchased in a first stage when only probabilistic information about the second stage is available. In the second stage, one of a finite number of specified scenarios is realized, which results in the set of terminals becoming known and the opportunity to purchase additional edges (under an inflated cost function) to augment the first-stage solution. We provide constant factor approximation algorithms for these problems by rounding the linear relaxation of IP formulations of the problems. Our algorithms involve solving the linear relaxation first, followed by a primal-dual routine that is guided by the LP solution. We also show that because our bounds are local (the cost of each component is bounded by its cost in the LP solution), we are able to obtain bounds that guard against a form of downside risk." ] }
1611.04324
2553665475
We give an overview of new and existing cut- and flow-based ILP formulations for the two-stage stochastic Steiner tree problem and compare the strength of the LP relaxations.
For the black-box oracle model there exist several approximation algorithms based on the idea of scenario sampling. @cite_25 present an @math -approximation algorithm for a problem restricted to a uniform inflation factor. @cite_4 @cite_27 introduce the concept of boosted sampling and consider the problem with a global terminal and a uniform inflation factor; their approximation algorithm has a ratio of @math . A similar problem is considered by @cite_15 , who present a 4-approximation. @cite_33 approximate a problem without a global terminal; this problem has a fixed, uniform inflation factor, and the presented algorithm has a ratio of @math .
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_27", "@cite_15", "@cite_25" ], "mid": [ "1981673685", "1528210381", "2008298964", "168538608", "2002285507" ], "abstract": [ "Several combinatorial optimization problems choose elements to minimize the total cost of constructing a feasible solution that satisfies requirements of clients. In the S teiner T ree problem, for example, edges must be chosen to connect terminals (clients); in V ertex C over , vertices must be chosen to cover edges (clients); in F acility L ocation , facilities must be chosen and demand vertices (clients) connected to these chosen facilities. We consider a stochastic version of such a problem where the solution is constructed in two stages: Before the actual requirements materialize, we can choose elements in a first stage. The actual requirements are then revealed, drawn from a pre-specified probability distribution π thereupon, some more elements may be chosen to obtain a feasible solution for the actual requirements. However, in this second (recourse) stage, choosing an element is costlier by a factor of σ> 1. The goal is to minimize the first stage cost plus the expected second stage cost.We give a general yet simple technique to adapt approximation algorithms for several deterministic problems to their stochastic versions via the following method. First stage: Draw σ independent sets of clients from the distribution π and apply the approximation algorithm to construct a feasible solution for the union of these sets. Second stage: Since the actual requirements have now been revealed, augment the first-stage solution to be feasible for these requirements. We use this framework to derive constant factor approximations for stochastic versions of V ertex C over , S teiner T ree and U ncapacitated F acility L ocation for arbitrary distributions π in one fell swoop. For special (product) distributions, we obtain additional and improved results. 
Our techniques adapt and use the notion of strict cost-shares introduced in [5].", "This paper considers the Steiner tree problem in the model of two-stage stochastic optimization with recourse. This model, the focus of much recent research [11, 16, 8, 18], tries to capture the fact that many infrastructure planning problems have to be solved in the presence of uncertainty, and that we have make decisions knowing merely market forecasts (and not the precise set of demands); by the time the actual demands arrive, the costs may be higher due to inflation. In the context of the Stochastic Steiner Tree problem on a graph G = (V,E), the model can be paraphrased thus: on Monday, we are given a probability distribution π on subsets of vertices, and can build some subset EM of edges. On Tuesday, a set of terminals D materializes (drawn from the same distribution π). We now have to buy edges ET so that the set EM ∪ ET forms a Steiner tree on D. The goal is to minimize the expected cost of the solution. We give the first constant-factor approximation algorithm for this problem. To the best of our knowledge, this is the first O(1)-approximation for the stochastic version of a non sub-additive problem. In fact, algorithms for the unrooted stochastic Steiner tree problem we consider are powerful enough to solve the Multicommodity Rent-or-Buy problem, itself a topic of recent interest [3, 7, 15].", "We consider two- and multistage versions of stochastic combinatorial optimization problems with recourse: in this framework, the instance for the combinatorial optimization problem is drawn from a known probability distribution @math and is only revealed to the algorithm over two (or multiple) stages. At each stage, on receiving some more information about the instance, the algorithm is allowed to build some partial solution. 
Since the costs of elements increase with each passing stage, there is a natural tension between waiting for later stages, to gain more information about the instance, and purchasing elements in earlier stages, to take advantages of lower costs. We provide approximation algorithms for stochastic combinatorial optimization problems (such as the Steiner tree problem, the Steiner network problem, and the vertex cover problem) by means of a simple sampling-based algorithm. In every stage, our algorithm samples the probability distribution of the requirements and constructs a partial solution to serve the resulting sample. We show that if one can construct cost-sharing functions associated with the algorithms used to construct these partial solutions, then this strategy results in provable approximation guarantees for the overall stochastic optimization problem. We also extend this approach to provide an approximation algorithm for the stochastic version of the uncapacitated facility location problem, a problem that does not fit into the simpler framework of our main model.", "This issue’s column is written by guest columnists, David Shmoys and Chaitanya Swamy. Iam delighted that they agreed to write this timely column on the topic related to stochasticoptimization that has received much attention recently. Their column introduces the readerto several recent results and provides references for further readings.Samir Khuller", "Combinatorial optimization is often used to \"plan ahead,\" purchasing and allocating resources for demands that are not precisely known at the time of solution. This advance planning may be done because resources become very expensive to purchase or difficult to allocate at the last minute when the demands are known. In this work we study the tradeoffs involved in making some purchase allocation decisions early to reduce cost while deferring others at greater expense to take advantage of additional, late-arriving information. 
We consider a number of combinatorial optimization problems in which the problem instance is uncertain---modeled by a probability distribution---and in which solution elements can be purchased cheaply now or at greater expense after the distribution is sampled. We show how to approximately optimize the choice of what to purchase in advance and what to defer." ] }
1611.04324
2553665475
We give an overview of new and existing cut- and flow-based ILP formulations for the two-stage stochastic Steiner tree problem and compare the strength of the LP relaxations.
Last but not least, fixed-parameter tractable algorithms are described for the stochastic problems, parameterized by the overall number of terminals @cite_16 , and, on partial 2-trees, parameterized by the number of scenarios @cite_20 .
{ "cite_N": [ "@cite_16", "@cite_20" ], "mid": [ "75217549", "137798462" ], "abstract": [ "We consider the Steiner tree problem in graphs under uncertainty, the so-called two-stage stochastic Steiner tree problem (SSTP). The problem consists of two stages: In the first stage, we do not know which nodes need to be connected. Instead, we know costs at which we may buy edges, and a set of possible scenarios one of which will arise in the second stage. Each scenario consists of its own terminal set, a probability, and second-stage edge costs. We want to find a selection of first-stage edges and second-stage edges for each scenario that minimizes the expected costs and satisfies all connectivity requirements. We show that SSTP is in the class of fixed-parameter tractable problems (FPT), parameterized by the number of terminals. Additionally, we transfer our results to the directed and the prize-collecting variant of SSTP.", "Given an undirected graph G = (V, E) and a node set T ⊆ V, a Steiner tree for T in G is a set of edges S ⊆ E such that the graph (V(S), S) contains a path between every pair of nodes in T, where V(S) is the set of nodes incident to the edges in S. Given costs (or weights) on edges and nodes, the Steiner tree problem on a graph (STP) is to find a minimum weight Steiner tree. The problem is known to be NP-hard even for planar graphs, bipartite graphs, and grid graphs." ] }
1611.04244
2579653291
We present two novel and contrasting Recurrent Neural Network (RNN) based architectures for extractive summarization of documents. The Classifier based architecture sequentially accepts or rejects each sentence in the original document order for its membership in the summary. The Selector architecture, on the other hand, is free to pick one sentence at a time in any arbitrary order to generate the extractive summary. @PARASPLIT Our models under both architectures jointly capture the notions of salience and redundancy of sentences. In addition, these models have the advantage of being very interpretable, since they allow visualization of their predictions broken up by abstract features such as information content, salience and redundancy. @PARASPLIT We show that our models reach or outperform state-of-the-art supervised models on two different corpora. We also recommend the conditions under which one architecture is superior to the other based on experimental evidence.
In the deep learning framework, the extractive summarization work of @cite_10 is the closest to our work. Their model is based on an encoder-decoder approach where the encoder learns the representation of sentences and documents while the decoder classifies each sentence using an attention mechanism. Broadly, their model is also in the Classifier framework, but architecturally, our approaches are different. While their approach can be termed as a multi-pass approach where both the encoder and decoder consume the same sentence representations, our approach is a deep one where the representations learned by the bidirectional GRU encoder are in turn consumed by the Classifier or Selector models. Another key difference between our work and theirs is that unlike our unsupervised greedy approach to convert abstractive summaries to extractive labels, @cite_10 chose to train a separate supervised classifier using manually created labels on a subset of the data. This may yield more accurate gold extractive labels which may help boost the performance of their models, but incurs additional annotation costs.
{ "cite_N": [ "@cite_10" ], "mid": [ "2307381258" ], "abstract": [ "Traditional approaches to extractive summarization rely heavily on humanengineered features. In this work we propose a data-driven approach based on neural networks and continuous sentence features. We develop a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor. This architecture allows us to develop different classes of summarization models which can extract sentences or words. We train our models on large scale corpora containing hundreds of thousands of document-summary pairs 1 . Experimental results on two summarization datasets demonstrate that our models obtain results comparable to the state of the art without any access to linguistic annotation." ] }
1611.04067
2571384964
Spectral dimensionality reduction is frequently used to identify low-dimensional structure in high-dimensional data. However, learning manifolds, especially from streaming data, is expensive in both computation and memory. In this paper, we argue that a stable manifold can be learned using only a fraction of the stream, and the remaining stream can be mapped to the manifold in a significantly less costly manner. Identifying the transition point at which the manifold is stable is the key step. We present error metrics that allow us to identify the transition point for a given stream by quantitatively assessing the quality of a manifold learned using Isomap. We further propose an efficient mapping algorithm, called S-Isomap, that can be used to map new samples onto the stable manifold. We describe experiments on a variety of data sets that show that the proposed approach is computationally efficient without sacrificing accuracy.
Given the high computational complexity of Isomap, variants of Isomap, such as Landmark Isomap @cite_17 and out-of-sample extension techniques @cite_1 , have been proposed as computationally viable alternatives. Both of these methods either use a smaller set of landmark points or approximations to avoid performing the costly eigen decomposition on the @math geodesic distance matrix, where @math is the number of points in the entire data set. However, they still require computing the full geodesic distance matrix, which is @math , where @math is the diameter of the embedded @math NN graph. The Incremental Isomap algorithm @cite_10 avoids both eigen decomposition and a recreation of the geodesic distance matrix. However, it requires updates to the geodesic distance matrix, which incur a significant cost, as discussed in . Consequently, the method is unsuitable for the streaming setting.
{ "cite_N": [ "@cite_10", "@cite_1", "@cite_17" ], "mid": [ "2156761667", "2153934661", "2156287497" ], "abstract": [ "Understanding the structure of multidimensional patterns, especially in unsupervised cases, is of fundamental importance in data mining, pattern recognition, and machine learning. Several algorithms have been proposed to analyze the structure of high-dimensional data based on the notion of manifold learning. These algorithms have been used to extract the intrinsic characteristics of different types of high-dimensional data by performing nonlinear dimensionality reduction. Most of these algorithms operate in a \"batch\" mode and cannot be efficiently applied when data are collected sequentially. In this paper, we describe an incremental version of ISOMAP, one of the key manifold learning algorithms. Our experiments on synthetic data as well as real world images demonstrate that our modified algorithm can maintain an accurate low-dimensional representation of the data in an efficient manner.", "Several unsupervised learning algorithms based on an eigendecomposition provide either an embedding or a clustering only for given training points, with no straightforward extension for out-of-sample examples short of recomputing eigenvectors. This paper provides a unified framework for extending Local Linear Embedding (LLE), Isomap, Laplacian Eigenmaps, Multi-Dimensional Scaling (for dimensionality reduction) as well as for Spectral Clustering. This framework is based on seeing these algorithms as learning eigenfunctions of a data-dependent kernel. 
Numerical experiments show that the generalizations performed have a level of error comparable to the variability of the embedding algorithms due to the choice of training data.", "Recently proposed algorithms for nonlinear dimensionality reduction fall broadly into two categories which have different advantages and disadvantages: global (Isomap [1]), and local (Locally Linear Embedding [2], Laplacian Eigenmaps [3]). We present two variants of Isomap which combine the advantages of the global approach with what have previously been exclusive advantages of local methods: computational sparsity and the ability to invert conformal maps." ] }
1611.04067
2571384964
Spectral dimensionality reduction is frequently used to identify low-dimensional structure in high-dimensional data. However, learning manifolds, especially from streaming data, is expensive in both computation and memory. In this paper, we argue that a stable manifold can be learned using only a fraction of the stream, and the remaining stream can be mapped to the manifold in a significantly less costly manner. Identifying the transition point at which the manifold is stable is the key step. We present error metrics that allow us to identify the transition point for a given stream by quantitatively assessing the quality of a manifold learned using Isomap. We further propose an efficient mapping algorithm, called S-Isomap, that can be used to map new samples onto the stable manifold. We describe experiments on a variety of data sets that show that the proposed approach is computationally efficient without sacrificing accuracy.
Errors in Isomap have been discussed in prior work @cite_12 , but those studies have typically concerned the selection of parameters. For example, @cite_2 proposed measuring a simple manifold embedding error for a range of @math to find the best choice of @math . Similarly, an approach based on the @math -edge disjoint minimal spanning tree algorithm has been proposed to construct a neighborhood graph with connectivity guarantees @cite_9 . In the same spirit, several strategies to assess the intrinsic manifold dimension @math are available @cite_12 . However, to the best of our knowledge there is no prior work on defining and understanding the behavior of the error that persists even with the selection of optimal parameters. We address this error by taking an abstract view of Isomap, providing a protocol for measuring collective error, and understanding its behavior. In doing so, we identify the optimal point at which we may switch from exact to lightweight methods.
{ "cite_N": [ "@cite_9", "@cite_12", "@cite_2" ], "mid": [ "2132192618", "2176206423", "2055292465" ], "abstract": [ "Isometric data embedding requires construction of a neighborhood graph that spans all data points so that geodesic distance between any pair of data points could be estimated by distance along the shortest path between the pair on the graph. This paper presents an approach for constructing k-edge-connected neighborhood graphs. It works by finding k edge-disjoint spanning trees the sum of whose total lengths is a minimum. Experiments show that it outperforms the nearest neighbor approach for geodesic distance estimation.", "", "The isometric feature mapping (Isomap) method has demonstrated promising results in finding low-dimensional manifolds from data points in high-dimensional input space. Isomap has one free parameter (number of nearest neighbours K or neighbourhood radius @e), which has to be specified manually. In this paper we present a new method for selecting the optimal parameter value for Isomap automatically. Numerous experiments on synthetic and real data sets show the effectiveness of our method." ] }
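The "lightweight mapping" idea in the S-Isomap record above can be illustrated with a minimal sketch: once a stable low-dimensional embedding exists for the training points, a new sample is placed by an inverse-distance-weighted average of the embeddings of its k nearest training neighbors. This is an assumption-laden illustration of the general out-of-sample strategy, not the paper's exact algorithm; the function name and weighting scheme are hypothetical.

```python
import numpy as np

def map_new_sample(x_new, X, Y, k=3, eps=1e-12):
    """Place x_new in the embedded space using the already-embedded
    training set (X in input space, Y its low-dimensional embedding)."""
    d = np.linalg.norm(X - x_new, axis=1)   # distances in input space
    idx = np.argsort(d)[:k]                 # k nearest training points
    w = 1.0 / (d[idx] + eps)                # inverse-distance weights
    w /= w.sum()
    return w @ Y[idx]                       # weighted average of embeddings

# Toy example: 1-D data with a linear "embedding" scaled by 10.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
Y = np.array([[0.0], [10.0], [20.0], [30.0]])
y = map_new_sample(np.array([1.5]), X, Y, k=2)  # lands midway: [15.0]
```

The key point is cost: mapping one sample touches only the k nearest neighbors, avoiding any eigendecomposition or geodesic-matrix update.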
1611.03954
2949888133
Many recent works have demonstrated the benefits of knowledge graph embeddings in completing monolingual knowledge graphs. Inasmuch as related knowledge bases are built in several different languages, achieving cross-lingual knowledge alignment will help people in constructing a coherent knowledge base, and assist machines in dealing with different expressions of entity relationships across diverse human languages. Unfortunately, achieving this highly desirable cross-lingual alignment by human labor is very costly and error-prone. Thus, we propose MTransE, a translation-based model for multilingual knowledge graph embeddings, to provide a simple and automated solution. By encoding entities and relations of each language in a separate embedding space, MTransE provides transitions for each embedding vector to its cross-lingual counterparts in other spaces, while preserving the functionalities of monolingual embeddings. We deploy three different techniques to represent cross-lingual transitions, namely axis calibration, translation vectors, and linear transformations, and derive five variants for MTransE using different loss functions. Our models can be trained on partially aligned graphs, where just a small portion of triples are aligned with their cross-lingual counterparts. The experiments on cross-lingual entity matching and triple-wise alignment verification show promising results, with some variants consistently outperforming others on different tasks. We also explore how MTransE preserves the key properties of its monolingual counterpart TransE.
Knowledge Graph Embeddings. Recently, significant advancement has been made in using the translation-based method to train monolingual knowledge graph embeddings. To characterize a triple @math , models of this family follow a common assumption @math , where @math and @math are either the original vectors of @math and @math , or the transformed vectors under a certain transformation w.r.t. relation @math . The forerunner TransE @cite_32 sets @math and @math as the original @math and @math , and achieves promising results in handling 1-to-1 relations. Later works improve TransE on multi-mapping relations by introducing relation-specific transformations on entities to obtain different @math and @math , including projections on relation-specific hyperplanes in TransH @cite_7 , linear transformations to heterogeneous relation spaces in TransR @cite_27 , dynamic matrices in TransD @cite_28 , and other forms @cite_8 @cite_12 . All these variants of TransE specialize entity embeddings for different relations, thereby improving knowledge graph completion on multi-mapping relations at the cost of increased model complexity. Meanwhile, translation-based models combine well with other models. For example, variants of TransE are combined with word embeddings to help relation extraction from text @cite_25 @cite_21 .
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_28", "@cite_21", "@cite_32", "@cite_27", "@cite_25", "@cite_12" ], "mid": [ "2283196293", "2268966618", "2250342289", "2250807343", "2127795553", "2184957013", "1792926363", "2463781041" ], "abstract": [ "We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.", "Knowledge graph embedding aims to represent entities and relations in a large-scale knowledge graph as elements in a continuous vector space. 
Existing methods, e.g., TransE and TransH, learn embedding representation by defining a global margin-based loss function over the data. However, the optimal loss function is determined during experiments whose parameters are examined among a closed set of candidates. Moreover, embeddings over two knowledge graphs with different entities and relations share the same set of candidate loss functions, ignoring the locality of both graphs. This leads to the limited performance of embedding-related applications. In this paper, we propose a locally adaptive translation method for knowledge graph embedding, called TransA, to find the optimal loss function by adaptively determining its margin over different knowledge graphs. Experiments on two benchmark data sets demonstrate the superiority of the proposed method, as compared to the state-of-the-art ones.", "Knowledge graphs are useful resources for numerous AI applications, but they are far from completeness. Previous work such as TransE, TransH and TransR/CTransR regard a relation as translation from head entity to tail entity and the CTransR achieves state-of-the-art performance. In this paper, we propose a more fine-grained model named TransD, which is an improvement of TransR/CTransR. In TransD, we use two vectors to represent a named symbol object (entity and relation). The first one represents the meaning of a(n) entity (relation), the other one is used to construct mapping matrix dynamically. Compared with TransR/CTransR, TransD not only considers the diversity of relations, but also entities. TransD has fewer parameters and has no matrix-vector multiplication operations, which means it can be applied on large-scale graphs. In experiments, we evaluate our model on two typical tasks including triplets classification and link prediction. Evaluation results show that our approach outperforms state-of-the-art methods.", "We study the problem of jointly embedding a knowledge base and a text corpus.
The key issue is the alignment model making sure the vectors of entities, relations and words are in the same space. (2014a) rely on Wikipedia anchors, making the applicable scope quite limited. In this paper we propose a new alignment model based on text descriptions of entities, without dependency on anchors. We require the embedding vector of an entity not only to fit the structured constraints in KBs but also to be equal to the embedding vector computed from the text description. Extensive experiments show that, the proposed approach consistently performs comparably or even better than the method of (2014a), which is encouraging as we do not use any anchor information.", "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.", "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. 
In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https://github.com/mrlyk423/relation_extraction.", "This paper proposes a novel approach for relation extraction from free text which is trained to jointly use information from the text and from existing knowledge. Our model is based on scoring functions that operate by learning low-dimensional embeddings of words, entities and relationships from a knowledge base. We empirically show on New York Times articles aligned with Freebase relations that our approach is able to efficiently use the extra information provided by a large subset of Freebase data (4M entities, 23k relationships) to improve over methods that rely on text features alone.", "Knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform link prediction, i.e., predict whether a relationship not in the knowledge base is likely to be true. This paper combines insights from several previous link prediction models into a new embedding model STransE that represents each entity as a low-dimensional vector, and each relation by two matrices and a translation vector.
STransE is a simple combination of the SE and TransE models, but it obtains better link prediction performance on two benchmark datasets than previous embedding models. Thus, STransE can serve as a new baseline for the more complex models in the link prediction task." ] }
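The translation assumption shared by the TransE family above (a triple (h, r, t) is plausible when the head vector plus the relation vector lands near the tail vector) can be sketched in a few lines. This is an illustrative toy, not any paper's released code; the function names and toy vectors are made up for the example.

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE-style score: distance ||h + r - t||; smaller = more plausible."""
    return np.linalg.norm(h + r - t, ord=norm)

def margin_loss(pos_score, neg_score, margin=1.0):
    """Margin-based ranking loss used to train translation models."""
    return max(0.0, margin + pos_score - neg_score)

# Toy vectors: a "correct" triple where h + r equals t exactly,
# and a corrupted tail that should score worse.
h = np.array([1.0, 0.0])
r = np.array([0.0, 1.0])
t = np.array([1.0, 1.0])
t_bad = np.array([3.0, -1.0])

pos = transe_score(h, r, t)      # 0.0 (perfect translation)
neg = transe_score(h, r, t_bad)  # 4.0 under the L1 norm
loss = margin_loss(pos, neg)     # 0.0: the margin is already satisfied
```

The variants cited above (TransH, TransR, TransD) keep this scoring form but first transform h and t with relation-specific projections before taking the distance.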
1611.03954
2949888133
Many recent works have demonstrated the benefits of knowledge graph embeddings in completing monolingual knowledge graphs. Inasmuch as related knowledge bases are built in several different languages, achieving cross-lingual knowledge alignment will help people in constructing a coherent knowledge base, and assist machines in dealing with different expressions of entity relationships across diverse human languages. Unfortunately, achieving this highly desirable cross-lingual alignment by human labor is very costly and error-prone. Thus, we propose MTransE, a translation-based model for multilingual knowledge graph embeddings, to provide a simple and automated solution. By encoding entities and relations of each language in a separate embedding space, MTransE provides transitions for each embedding vector to its cross-lingual counterparts in other spaces, while preserving the functionalities of monolingual embeddings. We deploy three different techniques to represent cross-lingual transitions, namely axis calibration, translation vectors, and linear transformations, and derive five variants for MTransE using different loss functions. Our models can be trained on partially aligned graphs, where just a small portion of triples are aligned with their cross-lingual counterparts. The experiments on cross-lingual entity matching and triple-wise alignment verification show promising results, with some variants consistently outperforming others on different tasks. We also explore how MTransE preserves the key properties of its monolingual counterpart TransE.
In addition to these, there are non-translation-based methods. Some of these, including UM @cite_15 , SE @cite_20 , Bilinear @cite_9 , and HolE @cite_26 , do not explicitly represent relation embeddings. Others, including the neural-based models SLM @cite_22 and NTN @cite_13 , and the random-walk-based model TADW @cite_4 , are expressive and adaptable for both structured and text corpora, but are too complex to be incorporated into an architecture supporting multilingual knowledge.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_9", "@cite_15", "@cite_13", "@cite_20" ], "mid": [ "2949972983", "2242161203", "2117130368", "2101802482", "2156954687", "2127426251", "1596986901" ], "abstract": [ "Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HolE) to learn compositional vector space representations of entire knowledge graphs. The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator HolE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets. In extensive experiments we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction in knowledge graphs and relational learning benchmark datasets.", "Representation learning has shown its effectiveness in many tasks such as image classification and text mining. Network representation learning aims at learning distributed vector representation for each vertex in a network, which is also increasingly recognized as an important aspect for network analysis. Most network representation learning methods investigate network structures for learning. In reality, network vertices contain rich information (such as text), which cannot be well applied with algorithmic frameworks of typical representation learning methods. By proving that DeepWalk, a state-of-the-art network representation method, is actually equivalent to matrix factorization (MF), we propose text-associated DeepWalk (TADW). TADW incorporates text features of vertices into network representation learning under the framework of matrix factorization. 
We evaluate our method and various baseline methods by applying them to the task of multi-class classification of vertices. The experimental results show that our method outperforms other baselines on all three datasets, especially when networks are noisy and training ratio is small. The source code of this paper can be obtained from https://github.com/albertyang33/TADW.", "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art performance.", "Many data such as social networks, movie preferences or knowledge bases are multi-relational, in that they describe multiple relations between entities. While there is a large body of work focused on modeling these data, modeling these multiple types of relations jointly remains challenging. Further, existing approaches tend to break down when the number of these types grows. In this paper, we propose a method for modeling large multi-relational datasets, with possibly thousands of relations. Our model is based on a bilinear structure, which captures various orders of interaction of the data, and also shares sparse latent factors across different relations. We illustrate the performance of our approach on standard tensor-factorization datasets where we attain, or outperform, state-of-the-art results.
Finally, an NLP application demonstrates our scalability and the ability of our model to learn efficient and semantically meaningful verb representations.", "", "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the "Sumatran tiger" and "Bengal tiger." Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2% and 90.0%, respectively.
Hence, the system ends up providing methods for knowledge acquisition and word-sense disambiguation within the context of semantic parsing in a single elegant framework. Experiments on these various tasks indicate the promise of the approach." ] }
1611.03954
2949888133
Many recent works have demonstrated the benefits of knowledge graph embeddings in completing monolingual knowledge graphs. Inasmuch as related knowledge bases are built in several different languages, achieving cross-lingual knowledge alignment will help people in constructing a coherent knowledge base, and assist machines in dealing with different expressions of entity relationships across diverse human languages. Unfortunately, achieving this highly desirable cross-lingual alignment by human labor is very costly and error-prone. Thus, we propose MTransE, a translation-based model for multilingual knowledge graph embeddings, to provide a simple and automated solution. By encoding entities and relations of each language in a separate embedding space, MTransE provides transitions for each embedding vector to its cross-lingual counterparts in other spaces, while preserving the functionalities of monolingual embeddings. We deploy three different techniques to represent cross-lingual transitions, namely axis calibration, translation vectors, and linear transformations, and derive five variants for MTransE using different loss functions. Our models can be trained on partially aligned graphs, where just a small portion of triples are aligned with their cross-lingual counterparts. The experiments on cross-lingual entity matching and triple-wise alignment verification show promising results, with some variants consistently outperforming others on different tasks. We also explore how MTransE preserves the key properties of its monolingual counterpart TransE.
Multilingual Word Embeddings. Several approaches learn multilingual word embeddings on parallel text corpora. Some of these can be extended to multilingual knowledge graphs, such as LM @cite_10 and CCA @cite_18 , which induce offline transitions among pre-trained monolingual embeddings in the form of linear transformations and canonical correlation analysis, respectively. These approaches do not adjust the inconsistent vector spaces via calibration or joint training with the alignment model, and thus fail to perform well on knowledge graphs, where the parallelism exists only in small portions. A better approach, OT @cite_1 , jointly learns regularized embeddings and orthogonal transformations, which is however found to be overcomplicated due to the inconsistency of monolingual vector spaces and the large diversity of relations among entities.
{ "cite_N": [ "@cite_1", "@cite_18", "@cite_10" ], "mid": [ "2294774419", "342285082", "2126725946" ], "abstract": [ "Word embedding has been found to be highly powerful to translate words from one language to another by a simple linear transform. However, we found some inconsistency among the objective functions of the embedding and the transform learning, as well as the distance measurement. This paper proposes a solution which normalizes the word vectors on a hypersphere and constrains the linear transform as an orthogonal transform. The experimental results confirmed that the proposed solution can offer better performance on a word similarity task and an English-to-Spanish word translation task.", "The distributional hypothesis of Harris (1954), according to which the meaning of words is evidenced by the contexts they occur in, has motivated several effective techniques for obtaining vector space semantic representations of words using unannotated text corpora. This paper argues that lexico-semantic content should additionally be invariant across languages and proposes a simple technique based on canonical correlation analysis (CCA) for incorporating multilingual evidence into vectors generated monolingually. We evaluate the resulting word representations on standard lexical semantic evaluation tasks and show that our method produces substantially better semantic representations than monolingual techniques.", "Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. 
Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs." ] }
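The linear-mapping idea above, and the orthogonal-transform constraint of the OT approach, can be sketched on a toy synthetic "bilingual dictionary". The data, dimensions, and hidden rotation are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "bilingual dictionary": columns of X are source-language word vectors,
# columns of Y the vectors of their translations (synthetic data).
d, n = 5, 50
X = rng.normal(size=(d, n))
R_true, _ = np.linalg.qr(rng.normal(size=(d, d)))  # hidden ground-truth rotation
Y = R_true @ X

def learn_linear_map(X, Y):
    """Unconstrained least-squares map W minimizing ||W X - Y||_F."""
    return Y @ np.linalg.pinv(X)

def learn_orthogonal_map(X, Y):
    """Constrain the map to be orthogonal (the Procrustes solution via SVD),
    matching the normalization/orthogonality idea of the OT approach."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt

W = learn_orthogonal_map(X, Y)
```

On this noiseless toy data, the Procrustes solution recovers the hidden rotation exactly; with real embeddings, the orthogonality constraint keeps the map distance-preserving on the hypersphere.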
1611.03954
2949888133
Many recent works have demonstrated the benefits of knowledge graph embeddings in completing monolingual knowledge graphs. Inasmuch as related knowledge bases are built in several different languages, achieving cross-lingual knowledge alignment will help people in constructing a coherent knowledge base, and assist machines in dealing with different expressions of entity relationships across diverse human languages. Unfortunately, achieving this highly desirable cross-lingual alignment by human labor is very costly and error-prone. Thus, we propose MTransE, a translation-based model for multilingual knowledge graph embeddings, to provide a simple and automated solution. By encoding entities and relations of each language in a separated embedding space, MTransE provides transitions for each embedding vector to its cross-lingual counterparts in other spaces, while preserving the functionalities of monolingual embeddings. We deploy three different techniques to represent cross-lingual transitions, namely axis calibration, translation vectors, and linear transformations, and derive five variants for MTransE using different loss functions. Our models can be trained on partially aligned graphs, where just a small portion of triples are aligned with their cross-lingual counterparts. The experiments on cross-lingual entity matching and triple-wise alignment verification show promising results, with some variants consistently outperforming others on different tasks. We also explore how MTransE preserves the key properties of its monolingual counterpart TransE.
Knowledge Base Alignment. Some projects produce cross-lingual alignment in knowledge bases at the cost of extensive human involvement and of designing hand-crafted features dedicated to specific applications. Wikidata @cite_16 and DBpedia @cite_11 rely on crowdsourcing to create ILLs and relation alignment. YAGO @cite_17 mines association rules on known matches, which combines many confidence scores and requires extensive fine-tuning. Many other works require sources that are external to the graphs, from well-established schemata or ontologies @cite_2 @cite_30 @cite_6 to entity descriptions @cite_29 , which are unavailable in many knowledge bases such as YAGO, WordNet, and ConceptNet @cite_33 . Such approaches also involve complicated model dependencies that are neither tractable nor reusable. By contrast, embedding-based methods are simple and general, require little human involvement, and generate task-independent features that can contribute to other NLP tasks.
{ "cite_N": [ "@cite_30", "@cite_11", "@cite_33", "@cite_29", "@cite_6", "@cite_2", "@cite_16", "@cite_17" ], "mid": [ "1597082186", "1552847225", "13682356", "2079659743", "2157311147", "1981565830", "2080133951", "804133461" ], "abstract": [ "One of the main challenges that the Semantic Web faces is the integration of a growing number of independently designed ontologies. In this work, we present paris, an approach for the automatic alignment of ontologies. paris aligns not only instances, but also relations and classes. Alignments at the instance level cross-fertilize with alignments at the schema level. Thereby, our system provides a truly holistic solution to the problem of ontology alignment. The heart of the approach is probabilistic, i.e., we measure degrees of matchings based on probability estimates. This allows paris to run without any parameter tuning. We demonstrate the efficiency of the algorithm and its precision through extensive experiments. In particular, we obtain a precision of around 90 in experiments with some of the world's largest ontologies.", "The DBpedia community project extracts structured, multilingual knowledge from Wikipedia and makes it freely available on the Web using Semantic Web and Linked Data technologies. The project extracts knowledge from 111 different language editions of Wikipedia. The largest DBpedia knowledge base which is extracted from the English edition of Wikipedia consists of over 400 million facts that describe 3.7 million things. The DBpedia knowledge bases that are extracted from the other 110 Wikipedia editions together consist of 1.46 billion facts and describe 10 million additional things. The DBpedia project maps Wikipedia infoboxes from 27 different language editions to a single shared ontology consisting of 320 classes and 1,650 properties. The mappings are created via a world-wide crowd-sourcing effort and enable knowledge from the different Wikipedia editions to be combined. 
The project publishes releases of all DBpedia knowledge bases for download and provides SPARQL query access to 14 out of the 111 language editions via a global network of local DBpedia chapters. In addition to the regular releases, the project maintains a live knowledge base which is updated whenever a page in Wikipedia changes. DBpedia sets 27 million RDF links pointing into over 30 external data sources and thus enables data from these sources to be used together with DBpedia data. Several hundred data sets on the Web publish RDF links pointing to DBpedia themselves and make DBpedia one of the central interlinking hubs in the Linked Open Data (LOD) cloud. In this system report, we give an overview of the DBpedia community project, including its architecture, technical implementation, maintenance, internationalisation, usage statistics and applications.", "ConceptNet is a knowledge representation project, providing a large semantic graph that describes general human knowledge and how it is expressed in natural language. Here we present the latest iteration, ConceptNet 5, with a focus on its fundamental design decisions and ways to interoperate with it.", "Given an entity in a source domain, finding its matched entities from another (target) domain is an important task in many applications. Traditionally, the problem was usually addressed by first extracting major keywords corresponding to the source entity and then query relevant entities from the target domain using those keywords. However, the method would inevitably fails if the two domains have less or no overlapping in the content. An extreme case is that the source domain is in English and the target domain is in Chinese. In this paper, we formalize the problem as entity matching across heterogeneous sources and propose a probabilistic topic model to solve the problem. The model integrates the topic extraction and entity matching, two core subtasks for dealing with the problem, into a unified model. 
Specifically, for handling the text disjointing problem, we use a cross-sampling process in our model to extract topics with terms coming from all the sources, and leverage existing matching relations through latent topic layers instead of at text layers. Benefiting from the proposed model, we can not only find the matched documents for a query entity, but also explain why these documents are related by showing the common topics they share. Our experiments in two real-world applications show that the proposed model can extensively improve the matching performance (+19.8% and +7.1% in two applications respectively) compared with several alternative methods.", "Wikipedia has grown to a huge, multi-lingual source of encyclopedic knowledge. Apart from textual content, a large and ever-increasing number of articles feature so-called infoboxes, which provide factual information about the articles' subjects. As the different language versions evolve independently, they provide different information on the same topics. Correspondences between infobox attributes in different language editions can be leveraged for several use cases, such as automatic detection and resolution of inconsistencies in infobox data across language versions, or the automatic augmentation of infoboxes in one language with data from other language versions. We present an instance-based schema matching technique that exploits information overlap in infoboxes across different language editions. As a prerequisite we present a graph-based approach to identify articles in different languages representing the same real-world entity using (and correcting) the interlanguage links in Wikipedia. To account for the untyped nature of infobox schemas, we present a robust similarity measure that can reliably quantify the similarity of strings with mixed types of data. 
The qualitative evaluation on the basis of manually labeled attribute correspondences between infoboxes in four of the largest Wikipedia editions demonstrates the effectiveness of the proposed approach.", "Recent research has taken advantage of Wikipedia's multi-lingualism as a resource for cross-language information retrieval and machine translation, as well as proposed techniques for enriching its cross-language structure. The availability of documents in multiple languages also opens up new opportunities for querying structured Wikipedia content, and in particular, to enable answers that straddle different languages. As a step towards supporting such queries, in this paper, we propose a method for identifying mappings between attributes from infoboxes that come from pages in different languages. Our approach finds mappings in a completely automated fashion. Because it does not require training data, it is scalable: not only can it be used to find mappings between many language pairs, but it is also effective for languages that are under-represented and lack sufficient training samples. Another important benefit of our approach is that it does not depend on syntactic similarity between attribute names, and thus, it can be applied to language pairs that have distinct morphologies. We have performed an extensive experimental evaluation using a corpus consisting of pages in Portuguese, Vietnamese, and English. The results show that not only does our approach obtain high precision and recall, but it also outperforms state-of-the-art techniques. 
We also present a case study which demonstrates that the multilingual mappings we derive lead to substantial improvements in answer quality and coverage for structured queries over Wikipedia content.", "This collaboratively edited knowledgebase provides a common source of data for Wikipedia, and everyone else.", "We present YAGO3, an extension of the YAGO knowledge base that combines the information from the Wikipedias in multiple languages. Our technique fuses the multilingual information with the English WordNet to build one coherent knowledge base. We make use of the categories, the infoboxes, and Wikidata, and learn the meaning of infobox attributes across languages. We run our method on 10 different languages, and achieve a precision of 95%-100% in the attribute mapping. Our technique enlarges YAGO by 1m new entities and 7m new facts." ] }
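As a rough illustration of the translation-based scoring that MTransE builds on (TransE's h + r ≈ t relation) and of its linear-transformation variant for cross-lingual transitions, here is a minimal sketch; the vectors and the transition matrix are made up for illustration, not learned:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE-style plausibility: a true triple (h, r, t) should satisfy
    h + r ≈ t, so a smaller L2 distance means a more plausible triple."""
    return float(np.linalg.norm(h + r - t))

def cross_lingual_transition(M, e):
    """Linear-transformation variant of a cross-lingual transition: map an
    entity vector from one language's embedding space into another's
    (M is learned in MTransE; here it is a fixed illustrative matrix)."""
    return M @ e

# Made-up 2-d vectors for illustration.
h = np.array([1.0, 0.0])
r = np.array([0.0, 1.0])
t = np.array([1.0, 1.0])
```

A triple that satisfies the translation exactly scores 0; corrupting the tail raises the distance, which is what the margin-based training losses exploit.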
1611.03901
2568954714
Given any @math and for @math denoting a sample of the two-dimensional discrete Gaussian free field on @math pinned at the origin, we consider the random walk on @math among random conductances where the conductance of edge @math is given by @math . We show that, for almost every @math , this random walk is recurrent and that, with probability tending to 1 as @math , the return probability at time @math decays as @math . In addition, we prove a version of subdiffusive behavior by showing that the expected exit time from a ball of radius @math scales as @math with @math for all @math . Our results rely on delicate control of the effective resistance for this random network. In particular, we show that the effective resistance between two vertices at Euclidean distance @math behaves as @math .
A random walk naturally associated with LBM is the continuous time simple symmetric random walk with exponential holding time at @math having parameter @math where, in our notation, @math . A more natural (albeit qualitatively similar, as far as long-time behavior is concerned) modification is to use @math (see ) in instead of @math ; we will refer to the associated process as the Liouville Random Walk (LRW) below. Formally, this process is a continuous-time Markov chain on @math with generator The nature of the transition rates of the LRW precludes formulation using conductances and, no surprise, our analysis is thus quite different from those mentioned above. For instance, unlike for the LRW, our random walk moves preferably towards neighbors with a higher potential, emphasizing the trapping effects of the random environment; see Fig. . The off-diagonal heat kernel computation in @cite_24 is also of a different flavor: Our control of the return probability relies crucially on the electric-resistance metric while the off-diagonal LBM heat kernel is expected to be related to the Liouville first passage (Liouville FPP) percolation metric (see @cite_36 @cite_32 ).
{ "cite_N": [ "@cite_24", "@cite_32", "@cite_36" ], "mid": [ "1649296927", "", "1600293573" ], "abstract": [ "In this paper, we initiate the study of the analytic properties of the Liouville heat kernel. In particular, we establish regularity estimates for the kernel and bound it from above and below by non-trivial bounds.", "", "This book is an introduction to the modern approach to the theory of Markov chains. The main goal of this approach is to determine the rate of convergence of a Markov chain to the stationary distribution as a function of the size and geometry of the state space. The authors develop the key tools for estimating convergence times, including coupling, strong stationary times, and spectral methods. Whenever possible, probabilistic methods are emphasized. The book includes many examples and provides brief introductions to some central models of statistical mechanics. Also provided are accounts of random walks on networks, including hitting and cover times, and analyses of several methods of shuffling cards. As a prerequisite, the authors assume a modest understanding of probability theory and linear algebra at an undergraduate level. \"\"Markov Chains and Mixing Times\"\" is meant to bring the excitement of this active area of research to a wide audience." ] }
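A minimal simulation sketch of a walk among conductances that prefers neighbors with higher potential, as discussed above. The grid size, Gaussian potential (a stand-in for an actual DGFF sample), periodic boundary, and the exact conductance form e^{β(h(x)+h(y))} are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy potential on an L x L torus (a Gaussian stand-in for a DGFF sample).
L = 20
h = rng.normal(size=(L, L))
beta = 1.0

def conductance(x, y):
    """Illustrative edge conductance e^{beta*(h(x)+h(y))} between
    neighboring sites x and y."""
    return np.exp(beta * (h[x] + h[y]))

def step(x):
    """One step of the walk: jump to a neighbor with probability
    proportional to the edge conductance, so neighbors with a higher
    potential are preferred (the trapping effect discussed above)."""
    nbrs = [((x[0] + dx) % L, (x[1] + dy) % L)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
    w = np.array([conductance(x, n) for n in nbrs])
    return nbrs[rng.choice(4, p=w / w.sum())]
```

Iterating `step` from a starting site traces one trajectory; high-potential regions act as traps because every exit edge has comparatively low relative weight.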
1611.03901
2568954714
Given any @math and for @math denoting a sample of the two-dimensional discrete Gaussian free field on @math pinned at the origin, we consider the random walk on @math among random conductances where the conductance of edge @math is given by @math . We show that, for almost every @math , this random walk is recurrent and that, with probability tending to 1 as @math , the return probability at time @math decays as @math . In addition, we prove a version of subdiffusive behavior by showing that the expected exit time from a ball of radius @math scales as @math with @math for all @math . Our results rely on delicate control of the effective resistance for this random network. In particular, we show that the effective resistance between two vertices at Euclidean distance @math behaves as @math .
Another series of related works is on random walks on random planar maps. This is thanks to the conjectural relation between LQG and random planar maps (note that part of the conjecture has been established in @cite_44 @cite_4 ). Building on ideas from the theory of circle packings @cite_7 , the authors of @cite_49 proved that the uniform infinite planar triangulation and quadrangulation are both almost surely recurrent. In @cite_45 , it was shown that the random walk on the uniform infinite planar quadrangulation is sub-diffusive, where an upper bound of @math on the exponent was given while the conjectured exponent is @math .
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_44", "@cite_45", "@cite_49" ], "mid": [ "", "2110596924", "2106942082", "2150887479", "2963558300" ], "abstract": [ "", "Suppose that @math is a sequence of finite connected planar graphs, and in each @math a special vertex, called the root, is chosen randomly-uniformly. We introduce the notion of a distributional limit @math of such graphs. Assume that the vertex degrees of the vertices in @math are bounded, and the bound does not depend on @math . Then after passing to a subsequence, the limit exists, and is a random rooted graph @math . We prove that with probability one @math is recurrent. The proof involves the Circle Packing Theorem. The motivation for this work comes from the theory of random spherical triangulations.", "We construct the natural diffusion in the random geometry of planar Liouville quantum gravity. Formally, this is the Brownian motion in a domain D of the complex plane for which the Riemannian metric tensor at a point z ∈ D is given by exp(γ h(z)), appropriately normalised. Here h is an instance of the Gaussian Free Field on D and γ ∈ (0,2) is a parameter. We show that the process is almost surely continuous and enjoys certain conformal invariance properties. We also estimate the Hausdorff dimension of times that the diffusion spends in the thick points of the Gaussian Free Field, and show that it spends Lebesgue-almost all its time in the set of γ-thick points, almost surely. Similar but deeper results have been independently and simultaneously proved by Garban, Rhodes and Vargas.", "We study the pioneer points of the simple random walk on the uniform infinite planar quadrangulation (UIPQ) using an adaptation of the peeling procedure of Angel (Geom Funct Anal 13:935–974, 2003) to the quadrangulation case. Our main result is that, up to polylogarithmic factors, n^3 pioneer points have been discovered before the walk exits the ball of radius n in the UIPQ. 
As a result we verify the KPZ relation (Modern Phys Lett A 3:819–826, 1988) in the particular case of the pioneer exponent and prove that the walk is subdiffusive with exponent less than 1/3. Along the way, new geometric controls on the UIPQ are established.", "We prove that any distributional limit of finite planar graphs in which the degree of the root has an exponential tail is almost surely recurrent. As a corollary, we obtain that the uniform infinite planar triangulation and quadrangulation (UIPT and UIPQ) are almost surely recurrent, resolving a conjecture of Angel, Benjamini and Schramm. We also settle another related problem of Benjamini and Schramm. We show that in any bounded degree, finite planar graph the probability that the simple random walk started at a uniform random vertex avoids its initial location for T steps is at most C / log T ." ] }
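The effective-resistance quantities that the recurrence arguments above rest on can be computed for any finite resistor network via the standard Laplacian-pseudoinverse identity R_eff(u, v) = (e_u − e_v)ᵀ L⁺ (e_u − e_v); the network below is a toy example, not one of the random networks studied in the papers:

```python
import numpy as np

def effective_resistance(conductances, u, v, n):
    """Effective resistance between nodes u and v of a resistor network,
    via the pseudoinverse of the weighted graph Laplacian:
    R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v)."""
    L = np.zeros((n, n))
    for (i, j), c in conductances.items():
        L[i, i] += c
        L[j, j] += c
        L[i, j] -= c
        L[j, i] -= c
    Lp = np.linalg.pinv(L)
    e = np.zeros(n)
    e[u], e[v] = 1.0, -1.0
    return float(e @ Lp @ e)

# Two unit resistors in series: R_eff(0, 2) should be 2.
series = {(0, 1): 1.0, (1, 2): 1.0}
```

The sanity checks (series resistances add; parallel conductances add) are the basic algebra behind the resistance estimates quoted in the abstract.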
1611.04088
2950090389
Gaussian Process bandit optimization has emerged as a powerful tool for optimizing noisy black box functions. One example in machine learning is hyper-parameter optimization where each evaluation of the target function requires training a model which may involve days or even weeks of computation. Most methods for this so-called "Bayesian optimization" only allow sequential exploration of the parameter space. However, it is often desirable to propose batches or sets of parameter values to explore simultaneously, especially when there are large parallel processing facilities at our disposal. Batch methods require modeling the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. In this paper, we propose a new approach for parallelizing Bayesian optimization by modeling the diversity of a batch via Determinantal point processes (DPPs) whose kernels are learned automatically. This allows us to generalize a previous result as well as prove better regret bounds based on DPP sampling. Our experiments on a variety of synthetic and real-world robotics and hyper-parameter optimization tasks indicate that our DPP-based methods, especially those based on DPP sampling, outperform state-of-the-art methods.
One of the key tasks in black-box optimization is choosing actions that both explore the function and exploit our knowledge of likely high-reward regions of its domain. This exploration-exploitation trade-off becomes especially important when the function is expensive to evaluate, and it naturally leads to modeling the problem in the multi-armed bandit paradigm @cite_1 , where the goal is to maximize cumulative reward by optimally balancing exploration and exploitation. Srinivas et al. @cite_13 analyzed the Gaussian Process Upper Confidence Bound (GP-UCB) algorithm, a simple and intuitive Bayesian method @cite_33 , to achieve the first sub-linear regret bounds for Gaussian process bandit optimization. These bounds, however, grow logarithmically in the size of the (finite) search space.
{ "cite_N": [ "@cite_13", "@cite_1", "@cite_33" ], "mid": [ "2166566250", "1998498767", "" ], "abstract": [ "Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multiarmed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze an intuitive Gaussian process upper confidence bound (GP-UCB) algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.", "Until recently, statistical theory has been restricted to the design and analysis of sampling experiments in which the size and composition of the samples are completely determined before the experimentation begins. The reasons for this are partly historical, dating back to the time when the statistician was consulted, if at all, only after the experiment was over, and partly intrinsic in the mathematical difficulty of working with anything but a fixed number of independent random variables. A major advance now appears to be in the making with the creation of a theory of the sequential design of experiments, in which the size and composition of the samples are not fixed in advance but are functions of the observations themselves.", "" ] }
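A minimal sketch of the GP-UCB rule analyzed by Srinivas et al.: pick the candidate maximizing posterior mean plus a scaled posterior standard deviation. The RBF kernel, length-scale, noise level, and β value below are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, ls=0.5):
    """Unit-amplitude RBF kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def gp_posterior(X, y, Xs, noise=1e-3):
    """GP posterior mean and standard deviation at candidate points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
    return mu, np.sqrt(np.maximum(var, 0.0))

def gp_ucb_pick(X, y, Xs, beta=2.0):
    """GP-UCB: pick the candidate maximizing mean + sqrt(beta)*std,
    trading off exploitation (mean) against exploration (std)."""
    mu, sd = gp_posterior(X, y, Xs)
    return int(np.argmax(mu + np.sqrt(beta) * sd))
```

With one observation at the origin, the rule prefers a distant, unexplored candidate: its posterior mean is similar but its posterior uncertainty is much larger.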
1611.03981
2579690930
Semi-supervised wrapper methods are concerned with building effective supervised classifiers from partially labeled data. Though previous works have succeeded in some fields, it is still difficult to apply semi-supervised wrapper methods in practice because the assumptions those methods rely on tend to be unrealistic. For practical use, this paper proposes a novel semi-supervised wrapper method, Dual Teaching, whose assumptions are easy to set up. Dual Teaching adopts two external classifiers to estimate the false positives and false negatives of the base learner. Provided that the recall of every external classifier is greater than zero and the sum of their precisions is greater than one, Dual Teaching will train a base learner from partially labeled data as effectively as a classifier trained on fully labeled data. The effectiveness of Dual Teaching is proved in both theory and practice.
Self-training @cite_2 is characterized by the strategy that the learner uses its own predictions to teach itself. It starts by training a base learner on some labeled data and then evaluates the learner on the unlabeled data. Examples, along with the labels predicted by the base learner, are added to the training set, and the classifier is then re-trained. Because early mistakes made by the base learner would be reinforced by feeding the incorrectly predicted labels back into the training set, self-training assumes that the predictions of the base learner tend to be correct. However, it is unrealistic in some practical scenarios that a supervised model trained from a few labeled examples could successfully classify large amounts of unlabeled data.
{ "cite_N": [ "@cite_2" ], "mid": [ "2163568299" ], "abstract": [ "We present a simple, but surprisingly effective, method of self-training a two-phase parser-reranker system using readily available unlabeled data. We show that this type of bootstrapping is possible for parsing when the bootstrapped parses are processed by a discriminative reranker. Our improved model achieves an f-score of 92.1%, an absolute 1.1% improvement (12% error reduction) over the previous best result for Wall Street Journal parsing. Finally, we provide some analysis to better understand the phenomenon." ] }
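The self-training loop described above can be sketched as follows; the nearest-centroid base learner and the toy data are assumptions for illustration (the cited work uses a parser-reranker, not this classifier):

```python
import numpy as np

class NearestCentroid:
    """A tiny base learner: classify by the nearest class centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = ((X[:, None, :] - self.centroids_[None]) ** 2).sum(-1)
        return self.classes_[d.argmin(axis=1)]

def self_train(X_lab, y_lab, X_unlab, rounds=3):
    """Self-training: fit on labeled data, pseudo-label the unlabeled pool
    with the model's own predictions, and refit on the enlarged set.
    Early mistakes get reinforced -- the weakness discussed above."""
    clf = NearestCentroid().fit(X_lab, y_lab)
    for _ in range(rounds):
        pseudo = clf.predict(X_unlab)            # model labels the pool itself
        X = np.vstack([X_lab, X_unlab])
        y = np.concatenate([y_lab, pseudo])
        clf = NearestCentroid().fit(X, y)
    return clf
```

On well-separated data the pseudo-labels are correct and the loop helps; if an early pseudo-label were wrong, each refit would push the centroids further in the wrong direction.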
1611.03981
2579690930
Semi-supervised wrapper methods are concerned with building effective supervised classifiers from partially labeled data. Though previous works have succeeded in some fields, it is still difficult to apply semi-supervised wrapper methods to practice because the assumptions those methods rely on tend to be unrealistic in practice. For practical use, this paper proposes a novel semi-supervised wrapper method, Dual Teaching, whose assumptions are easy to set up. Dual Teaching adopts two external classifiers to estimate the false positives and false negatives of the base learner. Only if the recall of every external classifier is greater than zero and the sum of the precision is greater than one, Dual Teaching will train a base learner from partially labeled data as effectively as the fully-labeled-data-trained classifier. The effectiveness of Dual Teaching is proved in both theory and practice.
Co-Training @cite_20 is a wrapper method that works with two classifiers. Learning is initialized by training two classifiers on two separate feature sets of the labeled data. Then one classifier makes predictions on the unlabeled data, and the data with their predicted labels are fed back to re-train the other classifier. The two classifiers alternate in this manner. The high performance of Co-Training relies on two independent feature spaces. In addition, each classifier should make good predictions on the unlabeled data in early iterations. These two assumptions tend to be violated in practice. First, two independent feature spaces are rarely available because the distribution of real-world data is complicated. Second, it is difficult for the two classifiers to make good predictions on unlabeled data at the very start, when they are trained on only a few labeled examples.
{ "cite_N": [ "@cite_20" ], "mid": [ "2128614648" ], "abstract": [ "Co-training is a method for combining labeled and unlabeled data when examples can be thought of as containing two distinct sets of features. It has had a number of practical successes, yet previous theoretical analyses have needed very strong assumptions on the data that are unlikely to be satisfied in practice. In this paper, we propose a much weaker \"expansion\" assumption on the underlying data distribution, that we prove is sufficient for iterative co-training to succeed given appropriately strong PAC-learning algorithms on each feature set, and that to some extent is necessary as well. This expansion assumption in fact motivates the iterative nature of the original co-training algorithm, unlike stronger assumptions (such as independence given the label) that allow a simpler one-shot co-training to succeed. We also heuristically analyze the effect on performance of noise in the data. Predicted behavior is qualitatively matched in synthetic experiments on expander graphs." ] }
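The co-training alternation described above can be sketched as follows, with a tiny nearest-centroid learner per view; the two one-dimensional "views" and the learner are illustrative assumptions:

```python
import numpy as np

def centroid_fit_predict(X_tr, y_tr, X_te):
    """Minimal nearest-centroid learner used for both views."""
    classes = np.unique(y_tr)
    cent = np.array([X_tr[y_tr == c].mean(axis=0) for c in classes])
    d = ((X_te[:, None, :] - cent[None]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

def co_train(X1_lab, X2_lab, y_lab, X1_pool, X2_pool, rounds=2):
    """Each round, the view-1 learner pseudo-labels the pool for the
    view-2 learner and vice versa (a sketch of the alternation above).
    Assumes the two views are (nearly) independent given the label."""
    X1, y1 = X1_lab, y_lab
    X2, y2 = X2_lab, y_lab
    for _ in range(rounds):
        p1 = centroid_fit_predict(X1, y1, X1_pool)  # view-1 labels the pool
        p2 = centroid_fit_predict(X2, y2, X2_pool)  # view-2 labels the pool
        # Feed each learner's predictions to the *other* learner.
        X1, y1 = np.vstack([X1_lab, X1_pool]), np.concatenate([y_lab, p2])
        X2, y2 = np.vstack([X2_lab, X2_pool]), np.concatenate([y_lab, p1])
    return (X1, y1), (X2, y2)
```

When both views separate the classes, each learner's pseudo-labels agree with the truth and the two training sets grow consistently; a weak early learner in either view would instead poison the other's training set.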
1611.04215
2949133649
Recently, discriminatively learned correlation filters (DCF) have drawn much attention in the visual object tracking community. The success of DCF is potentially attributed to the fact that a large number of samples are utilized to train the ridge regression model and predict the location of the object. To solve the regression problem in an efficient way, these samples are all generated by circularly shifting a search patch. However, these synthetic samples also induce some negative effects which weaken the robustness of DCF based trackers. In this paper, we propose a Convolutional Regression framework for visual tracking (CRT). Instead of learning the linear regression model in a closed form, we try to solve the regression problem by optimizing a one-channel-output convolution layer with Gradient Descent (GD). In particular, the receptive field size of the convolution layer is set to the size of the object. Contrary to DCF, it is possible to incorporate all "real" samples clipped from the whole image. A critical issue of the GD approach is that most of the convolutional samples are negative and the contribution of positive samples will be suppressed. To address this problem, we propose a novel Automatic Hard Negative Mining method to eliminate easy negatives and enhance positives. Extensive experiments are conducted on a widely-used benchmark with 100 sequences. The results show that the proposed algorithm achieves outstanding performance and outperforms almost all the existing DCF based algorithms.
CNN-based trackers. Benefiting from large-scale training datasets like ImageNet @cite_2 , CNNs have achieved great success in computer vision tasks like image classification and object detection. In visual tracking, it is generally impossible to train a deep CNN from scratch because of the very limited training data. Instead, a deep CNN like VGGNet @cite_17 , trained for image classification, can be transferred to extract convolutional features for visual tracking. In @cite_7 , both shallow and deep convolutional features extracted from a pre-trained CNN are utilized in the DCF framework. @cite_4 propose a two-stream fully convolutional network to capture both general object information and specific discriminative information for visual tracking. @cite_16 propose an adaptive Hedge method to combine different CNN trackers into a stronger one.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_2", "@cite_16", "@cite_17" ], "mid": [ "2211629196", "2214352687", "2117539524", "", "1686810756" ], "abstract": [ "We propose a new approach for general object tracking with fully convolutional neural network. Instead of treating convolutional neural network (CNN) as a black-box feature extractor, we conduct in-depth study on the properties of CNN features offline pre-trained on massive image data and classification task on ImageNet. The discoveries motivate the design of our tracking system. It is found that convolutional layers in different levels characterize the target from different perspectives. A top layer encodes more semantic features and serves as a category detector, while a lower layer carries more discriminative information and can better separate the target from distracters with similar appearance. Both layers are jointly used with a switch mechanism during tracking. It is also found that for a tracking target, only a subset of neurons are relevant. A feature map selection method is developed to remove noisy and irrelevant feature maps, which can reduce computation redundancy and improve tracking accuracy. Extensive evaluation on the widely used tracking benchmark [36] shows that the proposed tacker outperforms the state-of-the-art significantly.", "Visual object tracking is challenging as target objects often undergo significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion. In this paper, we exploit features extracted from deep convolutional neural networks trained on object recognition datasets to improve tracking accuracy and robustness. The outputs of the last convolutional layers encode the semantic information of targets and such representations are robust to significant appearance variations. However, their spatial resolution is too coarse to precisely localize targets. 
In contrast, earlier convolutional layers provide more precise localization but are less invariant to appearance changes. We interpret the hierarchies of convolutional layers as a nonlinear counterpart of an image pyramid representation and exploit these multiple levels of abstraction for visual tracking. Specifically, we adaptively learn correlation filters on each convolutional layer to encode the target appearance. We hierarchically infer the maximum response of each layer to locate targets. Extensive experimental results on a largescale benchmark dataset show that the proposed algorithm performs favorably against state-of-the-art methods.", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.", "", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. 
These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision." ] }
1611.03999
2564098318
Several applications require demographic information of ordinary people in unconstrained scenarios. This is not a trivial task due to significant human appearance variations. In this work, we introduce trixels for clustering image regions, enumerating their advantages compared to superpixels. The classical GrabCut algorithm is later modified to segment trixels instead of pixels in an unsupervised context. Combining with face detection lead us to a clothes segmentation approach close to real time. The study uses the challenging Pascal VOC dataset for segmentation evaluation experiments. A final experiment analyzes the fusion of clothes features with state-of-the-art gender classifiers in ClothesDB, revealing a significant performance improvement in gender classification.
Besides, several works have been carried out on body-based GC. @cite_20 achieved an accuracy of 75 ; @cite_9 improved Cao's previous accuracy up to 80.62 ; and the approach by @cite_10 outperforms both previous authors. However, their evaluation dataset is small and not balanced.
{ "cite_N": [ "@cite_9", "@cite_10", "@cite_20" ], "mid": [ "", "2128560777", "2044405949" ], "abstract": [ "", "We propose a method for recognizing attributes, such as the gender, hair style and types of clothes of people under large variation in viewpoint, pose, articulation and occlusion typical of personal photo album images. Robust attribute classifiers under such conditions must be invariant to pose, but inferring the pose in itself is a challenging problem. We use a part-based approach based on poselets. Our parts implicitly decompose the aspect (the pose and viewpoint). We train attribute classifiers for each such aspect and we combine them together in a discriminative model. We propose a new dataset of 8000 people with annotated attributes. Our method performs very well on this dataset, significantly outperforming a baseline built on the spatial pyramid match kernel method. On gender recognition we outperform a commercial face recognition system.", "This paper studies the problem of recognizing gender from full body images. This problem has not been addressed before, partly because of the variant nature of human bodies and clothing that can bring tough difficulties. However, gender recognition has high application potentials, e.g. security surveillance and customer statistics collection in restaurants, supermarkets, and even building entrances. In this paper, we build a system of recognizing gender from full body images, taken from frontal or back views. Our contributions are three-fold. First, to handle the variety of human body characteristics, we represent each image by a collection of patch features, which model different body parts and provide a set of clues for gender recognition. To combine the clues, we build an ensemble learning algorithm from those body parts to recognize gender from fixed view body images (frontal or back). 
Second, we relax the fixed view constraint and show the possibility to train a flexible classifier for mixed view images with the almost same accuracy as the fixed view case. At last, our approach is shown to be robust to small alignment errors, which is preferred in many applications." ] }
1611.04125
2567619939
Joint representation learning of text and knowledge within a unified semantic space enables us to perform knowledge graph completion more accurately. In this work, we propose a novel framework to embed words, entities and relations into the same continuous vector space. In this model, both entity and relation embeddings are learned by taking knowledge graph and plain text into consideration. In experiments, we evaluate the joint learning model on three tasks including entity prediction, relation prediction and relation classification from text. The experiment results show that our model can significantly and consistently improve the performance on the three tasks as compared with other baselines.
A variety of approaches have been proposed to encode both entities and relations into a continuous low-dimensional space. Inspired by @cite_10 , TransE @cite_24 regards the relation @math in each ( @math , @math , @math ) as a translation from @math to @math within the low-dimensional space, i.e., @math , where @math and @math are entity embeddings and @math is the relation embedding. Despite its simplicity, TransE achieves state-of-the-art performance in representation learning for KGs, especially for large-scale and sparse KGs. Hence, we simply incorporate TransE in our method to handle representation learning for KGs.
{ "cite_N": [ "@cite_24", "@cite_10" ], "mid": [ "2127795553", "2950133940" ], "abstract": [ "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible." ] }
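The translation assumption described above can be illustrated with a minimal TransE energy function and the margin-based ranking loss used to train it. The dimensionality and the embedding values below are illustrative assumptions, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Toy embeddings for a head entity h, a relation r, and two candidate tails.
h = rng.normal(size=dim)
r = rng.normal(size=dim)
t_true = h + r                  # a tail satisfying the translation h + r = t exactly
t_false = rng.normal(size=dim)  # a corrupted tail

def transe_energy(h, r, t):
    """TransE energy ||h + r - t||_2: lower means the triple (h, r, t) is more plausible."""
    return float(np.linalg.norm(h + r - t))

def margin_loss(pos_energy, neg_energy, gamma=1.0):
    """Margin-based ranking loss: push observed triples below corrupted ones
    by at least the margin gamma."""
    return max(0.0, gamma + pos_energy - neg_energy)
```

During training, gradients of this loss move h + r toward t for observed triples and away from it for corrupted ones; this is the property the joint model inherits by incorporating TransE for the KG side.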
1611.04125
2567619939
Joint representation learning of text and knowledge within a unified semantic space enables us to perform knowledge graph completion more accurately. In this work, we propose a novel framework to embed words, entities and relations into the same continuous vector space. In this model, both entity and relation embeddings are learned by taking knowledge graph and plain text into consideration. In experiments, we evaluate the joint learning model on three tasks including entity prediction, relation prediction and relation classification from text. The experiment results show that our model can significantly and consistently improve the performance on the three tasks as compared with other baselines.
Note that our method is also flexible enough to incorporate extension models of TransE, such as TransH @cite_0 and TransR @cite_17 ; this is not the focus of this paper and is left as future work.
{ "cite_N": [ "@cite_0", "@cite_17" ], "mid": [ "2283196293", "2184957013" ], "abstract": [ "We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.", "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. 
In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: github.com mrlyk423 relation_extraction." ] }
1611.04125
2567619939
Joint representation learning of text and knowledge within a unified semantic space enables us to perform knowledge graph completion more accurately. In this work, we propose a novel framework to embed words, entities and relations into the same continuous vector space. In this model, both entity and relation embeddings are learned by taking knowledge graph and plain text into consideration. In experiments, we evaluate the joint learning model on three tasks including entity prediction, relation prediction and relation classification from text. The experiment results show that our model can significantly and consistently improve the performance on the three tasks as compared with other baselines.
Many works aim to extract relational facts from large-scale text corpora @cite_6 @cite_25 , which indicates that textual relations between entities are contained in plain text. In recent years, deep neural models such as convolutional neural networks (CNN) have been proposed to encode the semantics of sentences and identify relations between entities @cite_18 @cite_9 . Compared to conventional models, neural models can accurately capture textual relations between entities from text sequences without explicit linguistic analysis, and further encode them into a continuous vector space. Hence, in this work we apply a CNN to embed textual relations and conduct joint learning of text and KGs with respect to relations.
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_25", "@cite_6" ], "mid": [ "2155454737", "", "1604644367", "2107598941" ], "abstract": [ "Relation classification is an important semantic processing task for which state-ofthe-art systems still rely on costly handcrafted features. In this work we tackle the relation classification task using a convolutional neural network that performs classification by ranking (CR-CNN). We propose a new pairwise ranking loss function that makes it easy to reduce the impact of artificial classes. We perform experiments using the the SemEval-2010 Task 8 dataset, which is designed for the task of classifying the relationship between two nominals marked in a sentence. Using CRCNN, we outperform the state-of-the-art for this dataset and achieve a F1 of 84.1 without using any costly handcrafted features. Additionally, our experimental results show that: (1) our approach is more effective than CNN followed by a softmax classifier; (2) omitting the representation of the artificial class Other improves both precision and recall; and (3) using only word embeddings as input features is enough to achieve state-of-the-art results if we consider only the text between the two target nominals.", "", "Several recent works on relation extraction have been applying the distant supervision paradigm: instead of relying on annotated text to learn how to predict relations, they employ existing knowledge bases (KBs) as source of supervision. Crucially, these approaches are trained based on the assumption that each sentence which mentions the two related entities is an expression of the given relation. Here we argue that this leads to noisy patterns that hurt precision, in particular if the knowledge base is not directly related to the text we are working with. 
We present a novel approach to distant supervision that can alleviate this problem based on the following two ideas: First, we use a factor graph to explicitly model the decision whether two entities are related, and the decision whether this relation is mentioned in a given sentence; second, we apply constraint-driven semi-supervision to train this model without any knowledge about which sentences express the relations in our training KB. We apply our approach to extract relations from the New York Times corpus and use Freebase as knowledge base. When compared to a state-of-the-art approach for relation extraction under distant supervision, we achieve 31 error reduction.", "Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora. We investigate an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACE-style algorithms, and allowing the use of corpora of any size. Our experiments use Freebase, a large semantic database of several thousand relations, to provide distant supervision. For each pair of entities that appears in some Freebase relation, we find all sentences containing those entities in a large unlabeled corpus and extract textual features to train a relation classifier. Our algorithm combines the advantages of supervised IE (combining 400,000 noisy pattern features in a probabilistic classifier) and unsupervised IE (extracting large numbers of relations from large corpora of any domain). Our model is able to extract 10,000 instances of 102 relations at a precision of 67.6 . We also analyze feature performance, showing that syntactic parse features are particularly helpful for relations that are ambiguous or lexically distant in their expression." ] }
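How a CNN turns a sentence into a fixed-size textual-relation representation can be sketched with one convolution layer and max-over-time pooling. The vocabulary, window size, and random (untrained) weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
emb_dim, win, n_filters = 8, 3, 5

# Illustrative word embeddings (random, not trained).
words = "mark twain was born in florida missouri".split()
vocab = {w: rng.normal(size=emb_dim) for w in words}

# One convolution layer: n_filters filters over windows of `win` word vectors.
W = rng.normal(size=(n_filters, win * emb_dim)) * 0.1
b = np.zeros(n_filters)

def cnn_encode(sentence):
    """Slide a window over the word embeddings, apply the filters and tanh,
    then max-over-time pooling: a fixed-size vector regardless of length."""
    X = np.stack([vocab[w] for w in sentence.split()])
    feats = [np.tanh(W @ X[i:i + win].reshape(-1) + b)
             for i in range(len(X) - win + 1)]
    return np.max(np.stack(feats), axis=0)

vec = cnn_encode("mark twain was born in florida missouri")
```

In a relation classifier this vector would feed a softmax over relation types; in a joint model of the kind described above, it serves as the textual-relation embedding aligned with the KG relation embeddings.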
1611.04125
2567619939
Joint representation learning of text and knowledge within a unified semantic space enables us to perform knowledge graph completion more accurately. In this work, we propose a novel framework to embed words, entities and relations into the same continuous vector space. In this model, both entity and relation embeddings are learned by taking knowledge graph and plain text into consideration. In experiments, we evaluate the joint learning model on three tasks including entity prediction, relation prediction and relation classification from text. The experiment results show that our model can significantly and consistently improve the performance on the three tasks as compared with other baselines.
Many neural models such as recurrent neural networks (RNN) @cite_13 and long short-term memory networks (LSTM) @cite_22 have also been explored for relation extraction. These models can also be applied to representation learning for textual relations, which will be explored in future work.
{ "cite_N": [ "@cite_13", "@cite_22" ], "mid": [ "1838058638", "2964217331" ], "abstract": [ "Deep learning has gained much success in sentence-level relation classification. For example, convolutional neural networks (CNN) have delivered competitive performance without much effort on feature engineering as the conventional pattern-based methods. Thus a lot of works have been produced based on CNN structures. However, a key issue that has not been well addressed by the CNN-based method is the lack of capability to learn temporal features, especially long-distance dependency between nominal pairs. In this paper, we propose a simple framework based on recurrent neural networks (RNN) and compare it with CNN-based model. To show the limitation of popular used SemEval-2010 Task 8 dataset, we introduce another dataset refined from MIMLRE(, 2014). Experiments on two different datasets strongly indicates that the RNN-based model can deliver better performance on relation classification, and it is particularly capable of learning long-distance relation patterns. This makes it suitable for real-world applications where complicated expressions are often involved.", "Relation classification is an important research arena in the field of natural language processing (NLP). In this paper, we present SDP-LSTM, a novel neural network to classify the relation of two entities in a sentence. Our neural architecture leverages the shortest dependency path (SDP) between two entities; multichannel recurrent neural networks, with long short term memory (LSTM) units, pick up heterogeneous information along the SDP. Our proposed model has several distinct features: (1) The shortest dependency paths retain most relevant information (to relation classification), while eliminating irrelevant words in the sentence. (2) The multichannel LSTM networks allow effective information integration from heterogeneous sources over the dependency paths. 
(3) A customized dropout strategy regularizes the neural network to alleviate overfitting. We test our model on the SemEval 2010 relation classification task, and achieve an F1-score of 83.7 , higher than competing methods in the literature." ] }
1611.04209
2565716838
We study the Moran process as adapted by Lieberman, Hauert and Nowak. This is a model of an evolving population on a graph where certain individuals, called "mutants" have fitness r and other individuals, called "non-mutants" have fitness 1. We focus on the situation where the mutation is advantageous, in the sense that r > 1. A family of directed graphs is said to be strongly amplifying if the extinction probability tends to 0 when the Moran process is run on graphs in this family. The most-amplifying known family of directed graphs is the family of megastars of We show that this family is optimal, up to logarithmic factors, since every strongly-connected n-vertex digraph has extinction probability Omega(n^(-1 2)). Next, we show that there is an infinite family of undirected graphs, called dense incubators, whose extinction probability is O(n^(-1 3)). We show that this is optimal, up to constant factors. Finally, we introduce sparse incubators, for varying edge density, and show that the extinction probability of these graphs is O(n m), where m is the number of edges. Again, we show that this is optimal, up to constant factors.
The best-known lower bounds on the extinction probability of connected undirected graphs are in @cite_8 @cite_13 . Theorem 1 of @cite_9 shows that there is a constant @math such that for every @math the extinction probability is at least @math .
{ "cite_N": [ "@cite_9", "@cite_13", "@cite_8" ], "mid": [ "2205121374", "", "2568440864" ], "abstract": [ "This work extends what is known so far for a basic model of evolutionary antagonism in undirected networks (graphs). More specifically, this work studies the generalized Moran process, as introduced by Lieberman, Hauert, and Nowak [Nature, 433:312-316, 2005], where the individuals of a population reside on the vertices of an undirected connected graph. The initial population has a single mutant of a fitness value r (typically r>1), residing at some vertex v of the graph, while every other vertex is initially occupied by an individual of fitness 1. At every step of this process, an individual (i.e. vertex) is randomly chosen for reproduction with probability proportional to its fitness, and then it places a copy of itself on a random neighbor, thus replacing the individual that was residing there. The main quantity of interest is the fixation probability, i.e. the probability that eventually the whole graph is occupied by descendants of the mutant. In this work we concentrate on the fixation probability when the mutant is initially on a specific vertex v, thus refining the older notion of which studied the fixation probability when the initial mutant is placed at a random vertex. We then aim at finding graphs that have many \"strong starts\" (or many \"weak starts\") for the mutant. Thus we introduce a parameterized notion of selective amplifiers (resp. selective suppressors) of evolution. We prove the existence of strong selective amplifiers (i.e. for h(n)=Θ(n) vertices v the fixation probability of v is at least @math for a function c(r) that depends only on r), and the existence of quite strong selective suppressors. 
Regarding the traditional notion of fixation probability from a random start, we provide strong upper and lower bounds: first we demonstrate the non-existence of \"strong universal\" amplifiers, and second we prove the Thermal Theorem which states that for any undirected graph, when the mutant starts at vertex v, the fixation probability at least @math . This theorem (which extends the \"Isothermal Theorem\" of for regular graphs) implies an almost tight lower bound for the usual notion of fixation probability. Our proof techniques are original and are based on new domination arguments which may be of general interest in Markov Processes that are of the general birth-death type.", "", "Evolutionary dynamics has been traditionally studied in the context of homogeneous populations, mainly described by the Moran process [P. Moran, Random processes in genetics, Proceedings of the Cambridge Philosophical Society 54 (1) (1958) 60-71]. Recently, this approach has been generalized in [E. Lieberman, C. Hauert, M.A. Nowak, Evolutionary dynamics on graphs, Nature 433 (2005) 312-316] by arranging individuals on the nodes of a network (in general, directed). In this setting, the existence of directed arcs enables the simulation of extreme phenomena, where the fixation probability of a randomly placed mutant (i.e., the probability that the offspring of the mutant eventually spread over the whole population) is arbitrarily small or large. On the other hand, undirected networks (i.e., undirected graphs) seem to have a smoother behavior, and thus it is more challenging to find suppressors amplifiers of selection, that is, graphs with smaller greater fixation probability than the complete graph (i.e., the homogeneous population). In this paper we focus on undirected graphs. 
We present the first class of undirected graphs which act as suppressors of selection, by achieving a fixation probability that is at most one half of that of the complete graph, as the number of vertices increases. Moreover, we provide some generic upper and lower bounds for the fixation probability of general undirected graphs. As our main contribution, we introduce the natural alternative of the model proposed in [E. Lieberman, C. Hauert, M.A. Nowak, Evolutionary dynamics on graphs, Nature 433 (2005) 312-316]. In our new evolutionary model, all individuals interact simultaneously and the result is a compromise between aggressive and non-aggressive individuals. We prove that our new model of mutual influences admits a potential function, which guarantees the convergence of the system for any graph topology and any initial fitness vector of the individuals. Furthermore, we prove fast convergence to the stable state for the case of the complete graph, as well as we provide almost tight bounds on the limit fitness of the individuals. Apart from being important on its own, this new evolutionary model appears to be useful also in the abstract modeling of control mechanisms over invading populations in networks. We demonstrate this by introducing and analyzing two alternative control approaches, for which we bound the time needed to stabilize to the ''healthy'' state of the system." ] }
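The extinction probability discussed above can be estimated by directly simulating the Lieberman–Hauert–Nowak Moran process. This is a minimal sketch: the graph (a small complete graph), fitness value, and trial count are illustrative assumptions, chosen so the estimate can be checked against the known complete-graph formula.

```python
import random

def estimate_extinction(adj, r, trials=2000, seed=1):
    """Estimate the probability that a single mutant of fitness r > 1,
    placed uniformly at random, goes extinct under the Moran process on a
    graph given as an adjacency list. Each step: pick a reproducer with
    probability proportional to fitness, then copy it onto a uniformly
    random neighbour, replacing the individual there."""
    rng = random.Random(seed)
    n = len(adj)
    extinct = 0
    for _ in range(trials):
        mutants = {rng.randrange(n)}
        while 0 < len(mutants) < n:
            m = len(mutants)
            # A mutant reproduces with probability r*m / (r*m + (n - m)).
            if rng.random() * (r * m + (n - m)) < r * m:
                parent = rng.choice(sorted(mutants))
            else:
                parent = rng.choice([v for v in range(n) if v not in mutants])
            child = rng.choice(adj[parent])  # offspring replaces a random neighbour
            if parent in mutants:
                mutants.add(child)
            else:
                mutants.discard(child)
        extinct += len(mutants) == 0
    return extinct / trials

# On the complete graph K_n the extinction probability is known exactly:
# 1 - (1 - 1/r) / (1 - r**(-n)).
n, r = 6, 2.0
K = [[u for u in range(n) if u != v] for v in range(n)]
est = estimate_extinction(K, r)
exact = 1 - (1 - 1 / r) / (1 - r ** (-n))
```

On amplifier families such as the incubators analysed in the paper, the same routine would show the estimate dropping toward 0 as n grows, whereas on K_n it stays near the constant computed above.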
1611.04209
2565716838
We study the Moran process as adapted by Lieberman, Hauert and Nowak. This is a model of an evolving population on a graph where certain individuals, called "mutants" have fitness r and other individuals, called "non-mutants" have fitness 1. We focus on the situation where the mutation is advantageous, in the sense that r > 1. A family of directed graphs is said to be strongly amplifying if the extinction probability tends to 0 when the Moran process is run on graphs in this family. The most-amplifying known family of directed graphs is the family of megastars of We show that this family is optimal, up to logarithmic factors, since every strongly-connected n-vertex digraph has extinction probability Omega(n^(-1 2)). Next, we show that there is an infinite family of undirected graphs, called dense incubators, whose extinction probability is O(n^(-1 3)). We show that this is optimal, up to constant factors. Finally, we introduce sparse incubators, for varying edge density, and show that the extinction probability of these graphs is O(n m), where m is the number of edges. Again, we show that this is optimal, up to constant factors.
While this manuscript was under preparation, George Giakkoupis posted simultaneous, independent work @cite_7 also showing that strong undirected amplifiers exist. In the remainder of this section, we discuss this work.
{ "cite_N": [ "@cite_7" ], "mid": [ "1798337851" ], "abstract": [ "Let λλ be the second largest eigenvalue in absolute value of a uniform random dd-regular graph on nn vertices. It was famously conjectured by Alon and proved by Friedman that if dd is fixed independent of nn, then λ=2d−1−−−−√+o(1)λ=2d−1+o(1) with high probability. In the present work, we show that λ=O(d−−√)λ=O(d) continues to hold with high probability as long as d=O(n2 3)d=O(n2 3), making progress toward a conjecture of Vu that the bound holds for all 1≤d≤n 21≤d≤n 2. Prior to this work the best result was obtained by Broder, Frieze, Suen and Upfal (1999) using the configuration model, which hits a barrier at d=o(n1 2)d=o(n1 2). We are able to go beyond this barrier by proving concentration of measure results directly for the uniform distribution on dd-regular graphs. These come as consequences of advances we make in the theory of concentration by size biased couplings. Specifically, we obtain Bennett-type tail estimates for random variables admitting certain unbounded size biased couplings." ] }
1611.04209
2565716838
We study the Moran process as adapted by Lieberman, Hauert and Nowak. This is a model of an evolving population on a graph where certain individuals, called "mutants", have fitness r and other individuals, called "non-mutants", have fitness 1. We focus on the situation where the mutation is advantageous, in the sense that r > 1. A family of directed graphs is said to be strongly amplifying if the extinction probability tends to 0 when the Moran process is run on graphs in this family. The most-amplifying known family of directed graphs is the family of megastars. We show that this family is optimal, up to logarithmic factors, since every strongly-connected n-vertex digraph has extinction probability Omega(n^(-1/2)). Next, we show that there is an infinite family of undirected graphs, called dense incubators, whose extinction probability is O(n^(-1/3)). We show that this is optimal, up to constant factors. Finally, we introduce sparse incubators, for varying edge density, and show that the extinction probability of these graphs is O(n/m), where m is the number of edges. Again, we show that this is optimal, up to constant factors.
First, consider the model of Lieberman, Hauert and Nowak @cite_14 @cite_16 which we study. Our Theorem shows that there is an infinite family of connected graphs @math with @math . Theorem 1 of @cite_7 is similar, but weaker by a logarithmic factor --- that paper constructs a (similar) family with extinction probability @math . Our Theorem shows that any connected @math -vertex graph (with @math ) has @math . Theorem 2 of @cite_7 is similar, but weaker by a @math factor --- that paper shows that the extinction probability @math is @math .
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_16" ], "mid": [ "2079460424", "1798337851", "" ], "abstract": [ "Evolutionary dynamics have been traditionally studied in the context of homogeneous or spatially extended populations. Here we generalize population structure by arranging individuals on a graph. Each vertex represents an individual. The weighted edges denote reproductive rates which govern how often individuals place offspring into adjacent vertices. The homogeneous population, described by the Moran process, is the special case of a fully connected graph with evenly weighted edges. Spatial structures are described by graphs where vertices are connected with their nearest neighbours. We also explore evolution on random and scale-free networks. We determine the fixation probability of mutants, and characterize those graphs for which fixation behaviour is identical to that of a homogeneous population. Furthermore, some graphs act as suppressors and others as amplifiers of selection. It is even possible to find graphs that guarantee the fixation of any advantageous mutant. We also study frequency-dependent selection and show that the outcome of evolutionary games can depend entirely on the structure of the underlying graph. Evolutionary graph theory has many fascinating applications ranging from ecology to multi-cellular organization and economics.", "Let λ be the second largest eigenvalue in absolute value of a uniform random d-regular graph on n vertices. It was famously conjectured by Alon and proved by Friedman that if d is fixed independent of n, then λ = 2√(d−1) + o(1) with high probability. In the present work, we show that λ = O(√d) continues to hold with high probability as long as d = O(n^(2/3)), making progress toward a conjecture of Vu that the bound holds for all 1 ≤ d ≤ n/2. Prior to this work the best result was obtained by Broder, Frieze, Suen and Upfal (1999) using the configuration model, which hits a barrier at d = o(n^(1/2)). We are able to go beyond this barrier by proving concentration of measure results directly for the uniform distribution on d-regular graphs. These come as consequences of advances we make in the theory of concentration by size biased couplings. Specifically, we obtain Bennett-type tail estimates for random variables admitting certain unbounded size biased couplings.", "" ] }
1611.04209
2565716838
We study the Moran process as adapted by Lieberman, Hauert and Nowak. This is a model of an evolving population on a graph where certain individuals, called "mutants", have fitness r and other individuals, called "non-mutants", have fitness 1. We focus on the situation where the mutation is advantageous, in the sense that r > 1. A family of directed graphs is said to be strongly amplifying if the extinction probability tends to 0 when the Moran process is run on graphs in this family. The most-amplifying known family of directed graphs is the family of megastars. We show that this family is optimal, up to logarithmic factors, since every strongly-connected n-vertex digraph has extinction probability Omega(n^(-1/2)). Next, we show that there is an infinite family of undirected graphs, called dense incubators, whose extinction probability is O(n^(-1/3)). We show that this is optimal, up to constant factors. Finally, we introduce sparse incubators, for varying edge density, and show that the extinction probability of these graphs is O(n/m), where m is the number of edges. Again, we show that this is optimal, up to constant factors.
Our paper is otherwise incomparable to @cite_7 . We give a lower bound on the extinction probability of amplifying digraphs (Theorem ) but @cite_7 does not consider digraphs. We also construct sparse families of incubators (Theorem ) which go all the way down to constant density and are optimally amplifying up to constant factors (Theorem ) but @cite_7 does not consider sparse graphs. On the other hand, Giakkoupis [Theorem 3] constructs a family with extinction probability at least @math , which is something that we do not study here. Finally, @cite_4 have introduced a variant of the model in which the fitness of a mutant is taken to be a function of the number of vertices of the underlying digraph (so as the number of vertices in the digraph grows, the fitness of each individual mutant decreases). The results of @cite_7 extend to this model where @math , as a function of @math . We are not aware of any applications of this model, and we do not consider it.
{ "cite_N": [ "@cite_4", "@cite_7" ], "mid": [ "2011047110", "1798337851" ], "abstract": [ "We study simple interacting particle systems on heterogeneous networks, including the voter model and the invasion process. These are both two-state models in which in an update event an individual changes state to agree with a neighbor. For the voter model, an individual 'imports' its state from a randomly chosen neighbor. Here the average time @math to reach consensus for a network of @math nodes with an uncorrelated degree distribution scales as @math , where @math is the @math moment of the degree distribution. Quick consensus thus arises on networks with broad degree distributions. We also identify the conservation law that characterizes the route by which consensus is reached. Parallel results are derived for the invasion process, in which the state of an agent is 'exported' to a random neighbor. We further generalize to biased dynamics in which one state is favored. The probability for a single fitter mutant located at a node of degree @math to overspread the population (the fixation probability) is proportional to @math for the voter model and to @math for the invasion process.", "Let λ be the second largest eigenvalue in absolute value of a uniform random d-regular graph on n vertices. It was famously conjectured by Alon and proved by Friedman that if d is fixed independent of n, then λ = 2√(d−1) + o(1) with high probability. In the present work, we show that λ = O(√d) continues to hold with high probability as long as d = O(n^(2/3)), making progress toward a conjecture of Vu that the bound holds for all 1 ≤ d ≤ n/2. Prior to this work the best result was obtained by Broder, Frieze, Suen and Upfal (1999) using the configuration model, which hits a barrier at d = o(n^(1/2)). We are able to go beyond this barrier by proving concentration of measure results directly for the uniform distribution on d-regular graphs. These come as consequences of advances we make in the theory of concentration by size biased couplings. Specifically, we obtain Bennett-type tail estimates for random variables admitting certain unbounded size biased couplings." ] }
1611.03608
2570082859
In this paper we present the greedy step averaging(GSA) method, a parameter-free stochastic optimization algorithm for a variety of machine learning problems. As a gradient-based optimization method, GSA makes use of the information from the minimizer of a single sample's loss function, and takes average strategy to calculate reasonable learning rate sequence. While most existing gradient-based algorithms introduce an increasing number of hyper parameters or try to make a trade-off between computational cost and convergence rate, GSA avoids the manual tuning of learning rate and brings in no more hyper parameters or extra cost. We perform exhaustive numerical experiments for logistic and softmax regression to compare our method with the other state of the art ones on 16 datasets. Results show that GSA is robust on various scenarios.
Denote by @math the unique minimizer of @math . It has been proved that the full gradient method achieves a linear convergence rate: @math , where @math is a constant depending on the condition number of @math @cite_5 .
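The linear rate can be seen numerically on a toy strongly convex quadratic (the objective and constants below are our own illustration, not from the cited reference): the distance to the minimizer contracts by a constant factor bounded by 1 - mu/L at every full-gradient step.

```python
import numpy as np

# Full-gradient descent on f(w) = 0.5 * w^T A w with A = diag(1, 10),
# so mu = 1, L = 10 and the condition number is 10. With step size 1/L
# the error contracts by at most 1 - mu/L = 0.9 per iteration.
A = np.diag([1.0, 10.0])
w = np.array([1.0, 1.0])
step = 1.0 / 10.0
errors = []
for _ in range(50):
    w = w - step * (A @ w)            # one full-gradient step
    errors.append(np.linalg.norm(w))  # distance to the minimizer w* = 0
ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
print(errors[-1], max(ratios))
```

The observed per-step contraction factor matches the theoretical bound 0.9, which is what "linear convergence" means here: the log of the error decreases linearly in the iteration count.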
{ "cite_N": [ "@cite_5" ], "mid": [ "2124541940" ], "abstract": [ "It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization. The importance of this paper, containing a new polynomial-time algorithm for linear optimization problems, was not only in its complexity bound. At that time, the most surprising feature of this algorithm was that the theoretical prediction of its high efficiency was supported by excellent computational results. This unusual fact dramatically changed the style and directions of the research in nonlinear optimization. Thereafter it became more and more common that the new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments. In a new rapidly developing field, which got the name \"polynomial-time interior-point methods\", such a justification was obligatory. After almost fifteen years of intensive research, the main results of this development started to appear in monographs [12, 14, 16, 17, 18, 19]. Approximately at that time the author was asked to prepare a new course on nonlinear optimization for graduate students. The idea was to create a course which would reflect the new developments in the field. Actually, this was a major challenge. At the time only the theory of interior-point methods for linear optimization was polished enough to be explained to students. The general theory of self-concordant functions had appeared in print only once in the form of research monograph [12]." ] }
1611.03608
2570082859
In this paper we present the greedy step averaging(GSA) method, a parameter-free stochastic optimization algorithm for a variety of machine learning problems. As a gradient-based optimization method, GSA makes use of the information from the minimizer of a single sample's loss function, and takes average strategy to calculate reasonable learning rate sequence. While most existing gradient-based algorithms introduce an increasing number of hyper parameters or try to make a trade-off between computational cost and convergence rate, GSA avoids the manual tuning of learning rate and brings in no more hyper parameters or extra cost. We perform exhaustive numerical experiments for logistic and softmax regression to compare our method with the other state of the art ones on 16 datasets. Results show that GSA is robust on various scenarios.
The Gradient Averaging method @cite_0 is equivalent to the Momentum method mentioned above if the weighted average in Momentum is replaced by a simple arithmetic average. The Gradient Averaging method is proved to have a @math convergence rate, the same as SGD. Its update scheme has the form @math
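A toy sketch of this family of updates follows; the objective, noise model, and constants are our own illustrative assumptions. The buffer v below is the exponentially weighted running sum of past gradients (the Momentum form); replacing it with the arithmetic mean of all past gradients gives the Gradient Averaging variant described above.

```python
import numpy as np

# Momentum-style update on the strongly convex toy objective f(w) = ||w||^2
# with noisy gradients (objective and constants are illustrative only).
rng = np.random.default_rng(0)
w = np.array([5.0, -3.0])
v = np.zeros_like(w)             # weighted average of past gradients
beta, step = 0.9, 0.01
for _ in range(500):
    g = 2 * w + rng.normal(scale=0.1, size=2)  # stochastic gradient of ||w||^2
    v = beta * v + g                           # accumulate past gradient information
    w = w - step * v                           # move along the averaged direction
print(np.linalg.norm(w))
```

The averaged direction smooths out the gradient noise, which is exactly the motivation the text gives for using past gradient information.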
{ "cite_N": [ "@cite_0" ], "mid": [ "2169713291" ], "abstract": [ "In this paper we present a new approach for constructing subgradient schemes for different types of nonsmooth problems with convex structure. Our methods are primal-dual since they are always able to generate a feasible approximation to the optimum of an appropriately formulated dual problem. Besides other advantages, this useful feature provides the methods with a reliable stopping criterion. The proposed schemes differ from the classical approaches (divergent series methods, mirror descent methods) by presence of two control sequences. The first sequence is responsible for aggregating the support functions in the dual space, and the second one establishes a dynamically updated scale between the primal and dual spaces. This additional flexibility allows to guarantee a boundedness of the sequence of primal test points even in the case of unbounded feasible set (however, we always assume the uniform boundedness of subgradients). We present the variants of subgradient schemes for nonsmooth convex minimization, minimax problems, saddle point problems, variational inequalities, and stochastic optimization. In all situations our methods are proved to be optimal from the view point of worst-case black-box lower complexity bounds." ] }
1611.03608
2570082859
In this paper we present the greedy step averaging(GSA) method, a parameter-free stochastic optimization algorithm for a variety of machine learning problems. As a gradient-based optimization method, GSA makes use of the information from the minimizer of a single sample's loss function, and takes average strategy to calculate reasonable learning rate sequence. While most existing gradient-based algorithms introduce an increasing number of hyper parameters or try to make a trade-off between computational cost and convergence rate, GSA avoids the manual tuning of learning rate and brings in no more hyper parameters or extra cost. We perform exhaustive numerical experiments for logistic and softmax regression to compare our method with the other state of the art ones on 16 datasets. Results show that GSA is robust on various scenarios.
The idea behind Momentum and Gradient Averaging is to use previous gradient information to determine a better descent direction and thereby accelerate convergence. However, rather than averaging the gradients, the previous iterates @math can also be taken into account. Some authors use the basic SG iteration but take an average over the @math values to obtain a new algorithm. With a suitable choice of step sizes, this achieves the same asymptotic efficiency as Newton-like second-order SG methods and also makes the convergence rate more robust to the exact sequence of step sizes @cite_3 . The update scheme reads @math @math The Iterate Averaging method uses all the previous iterates and takes their average as the next search point. It has been proved that, under certain assumptions on the step size, this method enjoys a second-order convergence rate @cite_3 . Even with a fixed step-size strategy, it shows great robustness against oscillations. Unfortunately, it is extremely sensitive to the initial point: a bad starting point can not only slow convergence but also cause divergence. Besides, it requires an extra @math memory cost, as Momentum does.
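The contrast between the raw SG iterates and their running average can be seen on a toy problem (objective, noise level, and step size are our own assumptions): with a deliberately large fixed step, the plain iterates keep fluctuating around the optimum while the averaged iterates settle much closer to it.

```python
import numpy as np

# Polyak-Ruppert style iterate averaging on a noisy quadratic
# (illustrative setup, not the cited paper's experiment).
rng = np.random.default_rng(1)
x = np.array([4.0, 4.0])
avg = np.zeros_like(x)
step = 0.05                      # fixed, deliberately large step size
for t in range(1, 2001):
    g = 2 * x + rng.normal(scale=1.0, size=2)  # noisy gradient of ||x||^2
    x = x - step * g                           # basic SG iteration
    avg = avg + (x - avg) / t                  # running mean of all iterates
print(np.linalg.norm(x), np.linalg.norm(avg))
```

The averaged point is typically far closer to the optimum than the last raw iterate, illustrating the robustness to the step-size sequence that the text describes.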
{ "cite_N": [ "@cite_3" ], "mid": [ "2086161653" ], "abstract": [ "A new recursive algorithm of stochastic approximation type with the averaging of trajectories is investigated. Convergence with probability one is proved for a variety of classical optimization and identification problems. It is also demonstrated for these problems that the proposed algorithm achieves the highest possible rate of convergence." ] }
1611.03608
2570082859
In this paper we present the greedy step averaging(GSA) method, a parameter-free stochastic optimization algorithm for a variety of machine learning problems. As a gradient-based optimization method, GSA makes use of the information from the minimizer of a single sample's loss function, and takes average strategy to calculate reasonable learning rate sequence. While most existing gradient-based algorithms introduce an increasing number of hyper parameters or try to make a trade-off between computational cost and convergence rate, GSA avoids the manual tuning of learning rate and brings in no more hyper parameters or extra cost. We perform exhaustive numerical experiments for logistic and softmax regression to compare our method with the other state of the art ones on 16 datasets. Results show that GSA is robust on various scenarios.
A typical SAG implementation computes the full gradient at the beginning of the iteration; in each following step it randomly chooses one sample's gradient to refresh the stored gradients. The update scheme reads @math where @math denotes the randomly chosen index. Like the FG method, SAG incorporates a gradient with respect to each training sample, but like SGD, each iteration only computes the gradient with respect to a single training example, so the cost of an iteration is independent of @math . In @cite_20 , the authors show that the SAG iterations have a linear convergence rate. However, SAG has at least two drawbacks. First, it involves the evaluation of the full gradient. Even though it calls for the full gradient only once, this is hard to implement in some scenarios. Perhaps this difficulty can be resolved by arbitrarily choosing the initial gradients for each sample (e.g., zero vectors), but the second weakness makes SAG completely infeasible in certain scenarios: it has an extremely large memory cost of @math because it has to store the previous gradient of each sample. There is no free lunch; these are the prices we pay for linear convergence.
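A minimal SAG sketch on a toy least-squares problem follows (data, step size, and iteration count are our own choices, not from the cited paper). The per-sample gradient table realises exactly the memory cost the text complains about; each step refreshes one random entry and moves along the table's mean.

```python
import numpy as np

# SAG on f(w) = (1/n) * sum_i (A[i] @ w - b[i])**2, a noiseless toy problem
# whose minimizer is w_true (illustrative setup).
rng = np.random.default_rng(0)
n, d = 50, 3
A = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
b = A @ w_true

w = np.zeros(d)
table = np.array([2 * A[i] * (A[i] @ w - b[i]) for i in range(n)])  # one stored gradient per sample
g_mean = table.mean(axis=0)
step = 0.002
for _ in range(20000):
    i = rng.integers(n)                          # one sample per iteration
    g_new = 2 * A[i] * (A[i] @ w - b[i])         # fresh gradient for sample i only
    g_mean += (g_new - table[i]) / n             # update the table's mean in O(d)
    table[i] = g_new
    w = w - step * g_mean                        # step along the averaged gradient
print(w)
```

Note the trade-off in code form: each iteration touches one sample (SGD-like cost), but the table costs O(n d) memory (FG-like storage).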
{ "cite_N": [ "@cite_20" ], "mid": [ "2105875671" ], "abstract": [ "We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training error and reducing the test error quickly." ] }
1611.03608
2570082859
In this paper we present the greedy step averaging(GSA) method, a parameter-free stochastic optimization algorithm for a variety of machine learning problems. As a gradient-based optimization method, GSA makes use of the information from the minimizer of a single sample's loss function, and takes average strategy to calculate reasonable learning rate sequence. While most existing gradient-based algorithms introduce an increasing number of hyper parameters or try to make a trade-off between computational cost and convergence rate, GSA avoids the manual tuning of learning rate and brings in no more hyper parameters or extra cost. We perform exhaustive numerical experiments for logistic and softmax regression to compare our method with the other state of the art ones on 16 datasets. Results show that GSA is robust on various scenarios.
Stochastic variance reduced gradient (SVRG) introduces an explicit variance reduction method for SGD @cite_17 . SVRG separates the training process into several epochs; at the beginning of each epoch it computes the full gradient, and during the epoch a randomly chosen sample's gradient is used at each step to refresh the update direction. The update scheme reads @math where @math is the full gradient, and @math is updated at the beginning of each epoch.
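The epoch structure can be sketched as follows (data and the tuning parameters are our own illustrative choices). The full gradient is recomputed once per epoch at the snapshot point, and inner steps use the variance-reduced direction g_i(w) - g_i(w_snap) + mu_full.

```python
import numpy as np

# SVRG on the same kind of noiseless toy least-squares problem
# (illustrative setup, not the cited paper's experiment).
rng = np.random.default_rng(0)
n, d = 50, 3
A = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
b = A @ w_true

def grad_i(w, i):
    # gradient of the i-th sample's loss (A[i] @ w - b[i])**2
    return 2 * A[i] * (A[i] @ w - b[i])

w = np.zeros(d)
step, epochs, m = 0.005, 30, 200          # the three quantities one must tune
for _ in range(epochs):
    w_snap = w.copy()                      # epoch snapshot
    mu_full = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)  # full gradient
    for _ in range(m):
        i = rng.integers(n)
        # variance-reduced direction: unbiased, with variance shrinking near w_snap
        w = w - step * (grad_i(w, i) - grad_i(w_snap, i) + mu_full)
print(w)
```

Unlike SAG, no per-sample gradient table is stored; the price is the periodic full-gradient computation visible at the top of each epoch.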
{ "cite_N": [ "@cite_17" ], "mid": [ "2107438106" ], "abstract": [ "Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning." ] }
1611.03608
2570082859
In this paper we present the greedy step averaging(GSA) method, a parameter-free stochastic optimization algorithm for a variety of machine learning problems. As a gradient-based optimization method, GSA makes use of the information from the minimizer of a single sample's loss function, and takes average strategy to calculate reasonable learning rate sequence. While most existing gradient-based algorithms introduce an increasing number of hyper parameters or try to make a trade-off between computational cost and convergence rate, GSA avoids the manual tuning of learning rate and brings in no more hyper parameters or extra cost. We perform exhaustive numerical experiments for logistic and softmax regression to compare our method with the other state of the art ones on 16 datasets. Results show that GSA is robust on various scenarios.
SVRG also has a linear convergence rate, but there are at least three parameters to tune: the number of epochs, the number of iterations per epoch, and the learning rate. Moreover, SVRG has to evaluate the full gradient several times, which also restricts its application in the large-scale context. Another variation of SVRG is SAGA @cite_18 , which is claimed to support non-strongly convex problems directly and to have a better convergence rate. It sits essentially at the midpoint between SVRG and SAG.
{ "cite_N": [ "@cite_18" ], "mid": [ "2952215077" ], "abstract": [ "In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method." ] }
1611.03608
2570082859
In this paper we present the greedy step averaging(GSA) method, a parameter-free stochastic optimization algorithm for a variety of machine learning problems. As a gradient-based optimization method, GSA makes use of the information from the minimizer of a single sample's loss function, and takes average strategy to calculate reasonable learning rate sequence. While most existing gradient-based algorithms introduce an increasing number of hyper parameters or try to make a trade-off between computational cost and convergence rate, GSA avoids the manual tuning of learning rate and brings in no more hyper parameters or extra cost. We perform exhaustive numerical experiments for logistic and softmax regression to compare our method with the other state of the art ones on 16 datasets. Results show that GSA is robust on various scenarios.
Stochastically controlled stochastic gradient (SCSG) is a variation of SVRG. As a substantial improvement over SVRG, the computation cost and the communication cost of SCSG do not necessarily scale linearly with the sample size @math @cite_6 .
{ "cite_N": [ "@cite_6" ], "mid": [ "2519604671" ], "abstract": [ "We develop and analyze a procedure for gradient-based optimization that we refer to as stochastically controlled stochastic gradient (SCSG). As a member of the SVRG family of algorithms, SCSG makes use of gradient estimates at two scales, with the number of updates at the faster scale being governed by a geometric random variable. Unlike most existing algorithms in this family, both the computation cost and the communication cost of SCSG do not necessarily scale linearly with the sample size @math ; indeed, these costs are independent of @math when the target accuracy is low. An experimental evaluation on real datasets confirms the effectiveness of SCSG." ] }
1611.03608
2570082859
In this paper we present the greedy step averaging(GSA) method, a parameter-free stochastic optimization algorithm for a variety of machine learning problems. As a gradient-based optimization method, GSA makes use of the information from the minimizer of a single sample's loss function, and takes average strategy to calculate reasonable learning rate sequence. While most existing gradient-based algorithms introduce an increasing number of hyper parameters or try to make a trade-off between computational cost and convergence rate, GSA avoids the manual tuning of learning rate and brings in no more hyper parameters or extra cost. We perform exhaustive numerical experiments for logistic and softmax regression to compare our method with the other state of the art ones on 16 datasets. Results show that GSA is robust on various scenarios.
Adagrad @cite_7 is an algorithm for gradient-based optimization that adapts the learning rate to the parameters, performing larger updates for infrequent and smaller updates for frequent parameters. For this reason, it is well suited for dealing with sparse data. Adagrad can greatly improve the robustness of SGD because it allocates heterogeneous learning rates to the different components of @math at each iteration step. The update scheme reads @math Here @math is a diagonal matrix where each diagonal element is the sum of the squares of the gradients with respect to component @math up to iteration step @math , and @math is a smoothing term that avoids division by zero.
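A toy sketch of the per-coordinate scaling follows (the objective and constants are our own illustration). The two coordinates have very different curvature, which is the setting where a single global learning rate struggles and Adagrad's per-coordinate rates help.

```python
import numpy as np

# Adagrad on f(w) = w[0]**2 + 100 * w[1]**2, whose gradients differ in
# magnitude by two orders between coordinates (illustrative setup).
rng = np.random.default_rng(0)
scales = np.array([1.0, 100.0])
w = np.array([1.0, 1.0])
G = np.zeros(2)                  # per-coordinate sum of squared gradients
step, eps = 0.5, 1e-8
for _ in range(500):
    g = 2 * scales * w + rng.normal(scale=0.01, size=2)  # noisy gradient
    G += g * g                                           # accumulate squares
    w = w - step * g / (np.sqrt(G) + eps)                # per-coordinate step
print(w)
```

Dividing by the accumulated root-sum-of-squares automatically gives the steep coordinate a small effective step and the flat coordinate a large one, so one global step size serves both.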
{ "cite_N": [ "@cite_7" ], "mid": [ "2146502635" ], "abstract": [ "We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms." ] }
1611.03608
2570082859
In this paper we present the greedy step averaging(GSA) method, a parameter-free stochastic optimization algorithm for a variety of machine learning problems. As a gradient-based optimization method, GSA makes use of the information from the minimizer of a single sample's loss function, and takes average strategy to calculate reasonable learning rate sequence. While most existing gradient-based algorithms introduce an increasing number of hyper parameters or try to make a trade-off between computational cost and convergence rate, GSA avoids the manual tuning of learning rate and brings in no more hyper parameters or extra cost. We perform exhaustive numerical experiments for logistic and softmax regression to compare our method with the other state of the art ones on 16 datasets. Results show that GSA is robust on various scenarios.
Adadelta @cite_21 is an extension of Adagrad that seeks to reduce its aggressive, monotonically decreasing learning rate. Instead of accumulating all past squared gradients, Adadelta restricts the window of accumulated past gradients to some fixed size @math . Rather than inefficiently storing @math previous squared gradients, the sum of gradients is recursively defined as a decaying average of all past squared gradients. The running average @math at time step @math then depends (as a fraction @math , similarly to the Momentum term) only on the previous average and the current gradient. A second exponentially decaying average is then defined, this time not of squared gradients but of squared parameter updates, and the update scheme follows from the ratio of the two. With Adadelta, we do not need to set a default learning rate, since it has been eliminated from the update rule. The weakness of Adadelta, as with Adagrad, is that it needs an extra @math memory cost, since each component has its own learning rate.
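A sketch of the two decaying averages follows; the toy objective and the constants (decay rho and epsilon in particular) are our own choices for illustration. Note that no global learning rate appears in the update.

```python
import numpy as np

# Adadelta on the toy objective f(w) = ||w||^2 (illustrative setup):
# the update magnitude is set by the ratio RMS(past updates)/RMS(past
# gradients), with epsilon bootstrapping the very first steps.
w = np.array([1.0, 1.0])
Eg2 = np.zeros(2)                # decaying average of squared gradients
Edx2 = np.zeros(2)               # decaying average of squared updates
rho, eps = 0.95, 1e-4
for _ in range(2000):
    g = 2 * w                                            # gradient of ||w||^2
    Eg2 = rho * Eg2 + (1 - rho) * g * g
    dx = -np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps) * g   # RMS(dx)/RMS(g) scaled step
    Edx2 = rho * Edx2 + (1 - rho) * dx * dx
    w = w + dx
print(np.linalg.norm(w))
```

Both running averages need one value per coordinate, which is the extra O(d) memory cost the text attributes to Adadelta and Adagrad alike.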
{ "cite_N": [ "@cite_21" ], "mid": [ "6908809" ], "abstract": [ "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment." ] }
1611.03383
2951392118
We introduce a conditional generative model for learning to disentangle the hidden factors of variation within a set of labeled observations, and separate them into complementary codes. One code summarizes the specified factors of variation associated with the labels. The other summarizes the remaining unspecified variability. During training, the only available source of supervision comes from our ability to distinguish among different observations belonging to the same class. Examples of such observations include images of a set of labeled objects captured at different viewpoints, or recordings of set of speakers dictating multiple phrases. In both instances, the intra-class diversity is the source of the unspecified factors of variation: each object is observed at multiple viewpoints, and each speaker dictates multiple phrases. Learning to disentangle the specified factors from the unspecified ones becomes easier when strong supervision is possible. Suppose that during training, we have access to pairs of images, where each pair shows two different objects captured from the same viewpoint. This source of alignment allows us to solve our task using existing methods. However, labels for the unspecified factors are usually unavailable in realistic scenarios where data acquisition is not strictly controlled. We address the problem of disentanglement in this more general setting by combining deep convolutional autoencoders with a form of adversarial training. Both factors of variation are implicitly captured in the organization of the learned embedding space, and can be used for solving single-image analogies. Experimental results on synthetic and real datasets show that the proposed method is capable of generalizing to unseen classes and intra-class variabilities.
The work in @cite_23 @cite_7 further explores the application of content and style disentanglement to computer graphics. Whereas computer graphics involves going from an abstract description of a scene to a rendering, these methods learn to go backward from the rendering to recover the abstract description. This description can include attributes such as orientation and lighting information. While these methods are capable of producing impressive results, they benefit from being able to use synthetic data, making strong supervision possible.
{ "cite_N": [ "@cite_7", "@cite_23" ], "mid": [ "2953255770", "1893585201" ], "abstract": [ "This paper presents the Deep Convolution Inverse Graphics Network (DC-IGN), a model that learns an interpretable representation of images. This representation is disentangled with respect to transformations such as out-of-plane rotations and lighting variations. The DC-IGN model is composed of multiple layers of convolution and de-convolution operators and is trained using the Stochastic Gradient Variational Bayes (SGVB) algorithm. We propose a training procedure to encourage neurons in the graphics code layer to represent a specific transformation (e.g. pose or light). Given a single input image, our model can generate new images of the same object with variations in pose and lighting. We present qualitative and quantitative results of the model's efficacy at learning a 3D rendering engine.", "We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task." ] }
1611.03383
2951392118
We introduce a conditional generative model for learning to disentangle the hidden factors of variation within a set of labeled observations, and separate them into complementary codes. One code summarizes the specified factors of variation associated with the labels. The other summarizes the remaining unspecified variability. During training, the only available source of supervision comes from our ability to distinguish among different observations belonging to the same class. Examples of such observations include images of a set of labeled objects captured at different viewpoints, or recordings of a set of speakers dictating multiple phrases. In both instances, the intra-class diversity is the source of the unspecified factors of variation: each object is observed at multiple viewpoints, and each speaker dictates multiple phrases. Learning to disentangle the specified factors from the unspecified ones becomes easier when strong supervision is possible. Suppose that during training, we have access to pairs of images, where each pair shows two different objects captured from the same viewpoint. This source of alignment allows us to solve our task using existing methods. However, labels for the unspecified factors are usually unavailable in realistic scenarios where data acquisition is not strictly controlled. We address the problem of disentanglement in this more general setting by combining deep convolutional autoencoders with a form of adversarial training. Both factors of variation are implicitly captured in the organization of the learned embedding space, and can be used for solving single-image analogies. Experimental results on synthetic and real datasets show that the proposed method is capable of generalizing to unseen classes and intra-class variabilities.
Closely related to the problem of disentangling factors of variations in representation learning is that of learning fair representations @cite_1 @cite_21 . In particular, the Fair Variational Auto-Encoder @cite_1 aims to learn representations that are invariant to certain nuisance factors of variation, while retaining as much of the remaining information as possible. The authors propose a variant of the VAE that encourages independence between the different latent factors of variation.
{ "cite_N": [ "@cite_21", "@cite_1" ], "mid": [ "2247194987", "2962750142" ], "abstract": [ "In practice, there are often explicit constraints on what representations or decisions are acceptable in an application of machine learning. For example it may be a legal requirement that a decision must not favour a particular group. Alternatively it can be that that representation of data must not have identifying information. We address these two related issues by learning flexible representations that minimize the capability of an adversarial critic. This adversary is trying to predict the relevant sensitive variable from the representation, and so minimizing the performance of the adversary ensures there is little or no information in the representation about the sensitive variable. We demonstrate this adversarial approach on two problems: making decisions free from discrimination and removing private information from images. We formulate the adversarial model as a minimax problem, and optimize that minimax objective using a stochastic gradient alternate min-max optimizer. We demonstrate the ability to provide discriminant free representations for standard test problems, and compare with previous state of the art methods for fairness, showing statistically significant improvement across most cases. The flexibility of this method is shown via a novel problem: removing annotations from images, from unaligned training examples of annotated and unannotated images, and with no a priori knowledge of the form of annotation provided to the model.", "Abstract: We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible. Our model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation. Any subsequent processing, such as classification, can then be performed on this purged latent representation. To remove any remaining dependencies we incorporate an additional penalty term based on the \"Maximum Mean Discrepancy\" (MMD) measure. We discuss how these architectures can be efficiently trained on data and show in experiments that this method is more effective than previous work in removing unwanted sources of variation while maintaining informative latent representations." ] }
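The MMD penalty mentioned in the cited abstract can be made concrete with a small sketch. This is an illustrative, biased empirical estimator of squared MMD with an RBF kernel on toy latent codes (all values hypothetical), not the authors' implementation:

```python
import math

def rbf(x, y, gamma=1.0):
    """RBF kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(X, Y, gamma=1.0):
    """Biased empirical estimate of squared Maximum Mean Discrepancy:
    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    kxx = sum(rbf(a, b, gamma) for a in X for b in X) / (len(X) ** 2)
    kyy = sum(rbf(a, b, gamma) for a in Y for b in Y) / (len(Y) ** 2)
    kxy = sum(rbf(a, b, gamma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2 * kxy

# Hypothetical latent codes for two values of the sensitive variable:
# identical samples give MMD^2 of exactly 0, shifted ones a clearly
# positive value, which is what the penalty term drives down.
same = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15]]
shifted = [[2.1, 2.2], [2.2, 2.1], [2.15, 2.15]]
print(mmd2(same, same))     # 0.0
print(mmd2(same, shifted))  # clearly positive
```

Minimizing this quantity over minibatches of latent codes grouped by the sensitive variable is one way to discourage the representation from encoding it.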
1611.02879
2951700231
In this work, we propose a training algorithm for an audio-visual automatic speech recognition (AV-ASR) system using a deep recurrent neural network (RNN). First, we train a deep RNN acoustic model with a Connectionist Temporal Classification (CTC) objective function. The frame labels obtained from the acoustic model are then used to perform a non-linear dimensionality reduction of the visual features using a deep bottleneck network. Audio and visual features are fused and used to train a fusion RNN. The use of bottleneck features for the visual modality helps the model to converge properly during training. Our system is evaluated on the GRID corpus. Our results show that the presence of the visual modality gives a significant improvement in character error rate (CER) at various levels of noise, even when the model is trained without noisy data. We also provide a comparison of two fusion methods: feature fusion and decision fusion.
Fusion methods can be broadly divided into two types @cite_8 @cite_19 : 1. feature fusion and 2. decision fusion. Feature fusion models perform a low-level integration of audio and visual features, and this involves a single model trained on the fused features. Feature fusion may include a simple concatenation of features or feature weighting, and is usually followed by a dimensionality reduction transformation such as LDA. Decision fusion is applied in cases where the output classes for the two modalities are the same. Various decision fusion methods based on variants of HMMs have been proposed @cite_12 @cite_11 . In the multistream HMM, the emission probability of a state of the audio-visual system is obtained by a linear combination of the log-likelihoods of the individual streams for that state. The parameters of the HMMs for the individual streams can be estimated separately or jointly. While the multistream HMM assumes state-level synchrony between the two streams, some methods @cite_4 @cite_11 , such as the coupled HMM @cite_11 , allow for asynchrony between the two streams. For a detailed survey of HMM-based AV-ASR systems, we refer the reader to @cite_8 @cite_19 .
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_19", "@cite_12", "@cite_11" ], "mid": [ "2153219412", "2096391593", "1518556865", "2121486117", "" ], "abstract": [ "Abstract This paper advocates that for some multimodal tasks involving more than one stream of data representing the same sequence of events, it might sometimes be a good idea to be able to desynchronize the streams in order to maximize their joint likelihood. We thus present a novel Hidden Markov Model architecture to model the joint probability of pairs of asynchronous sequences describing the same sequence of events. An Expectation–Maximization algorithm to train the model is presented, as well as a Viterbi decoding algorithm, which can be used to obtain the optimal state sequence as well as the alignment between the two sequences. The model was tested on two audio–visual speech processing tasks, namely speech recognition and text-dependent speaker verification, both using the M2VTS database. Robust performances under various noise conditions were obtained in both cases.", "Visual speech information from the speaker's mouth region has been successfully shown to improve noise robustness of automatic speech recognizers, thus promising to extend their usability in the human computer interface. In this paper, we review the main components of audiovisual automatic speech recognition (ASR) and present novel contributions in two main areas: first, the visual front-end design, based on a cascade of linear image transforms of an appropriate video region of interest, and subsequently, audiovisual speech integration. On the latter topic, we discuss new work on feature and decision fusion combination, the modeling of audiovisual speech asynchrony, and incorporating modality reliability estimates to the bimodal recognition process. We also briefly touch upon the issue of audiovisual adaptation. We apply our algorithms to three multisubject bimodal databases, ranging from small- to large-vocabulary recognition tasks, recorded in both visually controlled and challenging environments. Our experiments demonstrate that the visual modality improves ASR over all conditions and data considered, though less so for visually challenging environments and large vocabulary tasks.", "In this paper, we review recent results on audiovisual (AV) fusion. We also discuss some of the challenges and report on approaches to address them. One important issue in AV fusion is how the modalities interact and influence each other. This review will address this question in the context of AV speech processing, and especially speech recognition, where one of the issues is that the modalities both interact but also sometimes appear to desynchronize from each other. An additional issue that sometimes arises is that one of the modalities may be missing at test time, although it is available at training time; for example, it may be possible to collect AV training data while only having access to audio at test time. We will review approaches to address this issue from the area of multiview learning, where the goal is to learn a model or representation for each of the modalities separately while taking advantage of the rich multimodal training data. In addition to multiview learning, we also discuss the recent application of deep learning (DL) toward AV fusion. We finally draw conclusions and offer our assessment of the future in the area of AV fusion.", "This paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments. The system consists of three components: a visual module; an acoustic module; and a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance for the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to 56 error rate, noise robust RASTA-PLP (relative spectra) acoustic features to 7.2 error rate and combined noise robust acoustic features and visual features to 2.5 error rate.", "" ] }
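The multistream decision-fusion rule described in the related work (a linear combination of per-stream log-likelihoods, weighted by a stream-reliability parameter) can be sketched minimally. The class labels, scores, and the weight `lam` below are hypothetical, not taken from any cited system:

```python
def fuse_log_likelihoods(audio_ll, visual_ll, lam):
    """Multistream-style decision fusion: for each class, combine the
    audio and visual log-likelihoods linearly with weight lam in [0, 1]
    (lam = 1 trusts audio alone, lam = 0 trusts video alone)."""
    return {c: lam * audio_ll[c] + (1 - lam) * visual_ll[c]
            for c in audio_ll}

def classify(fused):
    """Pick the class with the highest fused score."""
    return max(fused, key=fused.get)

# Hypothetical per-class log-likelihoods for two phone classes:
# the (noisy) audio stream prefers 'b', the visual stream prefers 'p'.
audio_ll = {"p": -5.0, "b": -4.5}
visual_ll = {"p": -2.0, "b": -6.0}

print(classify(fuse_log_likelihoods(audio_ll, visual_ll, lam=1.0)))  # 'b'
print(classify(fuse_log_likelihoods(audio_ll, visual_ll, lam=0.3)))  # 'p'
```

Lowering `lam` under acoustic noise lets the visual stream override the audio decision, which is the intuition behind reliability-weighted fusion.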
1611.02879
2951700231
In this work, we propose a training algorithm for an audio-visual automatic speech recognition (AV-ASR) system using a deep recurrent neural network (RNN). First, we train a deep RNN acoustic model with a Connectionist Temporal Classification (CTC) objective function. The frame labels obtained from the acoustic model are then used to perform a non-linear dimensionality reduction of the visual features using a deep bottleneck network. Audio and visual features are fused and used to train a fusion RNN. The use of bottleneck features for the visual modality helps the model to converge properly during training. Our system is evaluated on the GRID corpus. Our results show that the presence of the visual modality gives a significant improvement in character error rate (CER) at various levels of noise, even when the model is trained without noisy data. We also provide a comparison of two fusion methods: feature fusion and decision fusion.
Application of deep learning to multi-modal analysis was presented in @cite_0 , which describes multi-modal, cross-modal, and shared representation learning and their applications to AV-ASR. In @cite_16 , Deep Belief Networks (DBNs) are explored. In @cite_3 , the authors train separate networks for the audio and visual inputs, fuse the final layers of the two networks, and then build a third DNN on the fused features. In addition, @cite_3 presents a new DNN architecture with a bilinear soft-max layer, which further improves the performance. In @cite_20 , a deep de-noising auto-encoder is used to learn noise-robust speech features. The auto-encoder is trained with MFCC features of noisy speech as input and reconstructs clean features. The outputs of the final layer of the auto-encoder are used as audio features. A CNN is trained with images from the mouth region as input and phoneme labels as output. The final layers of the two networks are then combined to train a multi-stream HMM.
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_20", "@cite_3" ], "mid": [ "2184188583", "2022799064", "2076462394", "2949547965" ], "abstract": [ "Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.", "Deep belief networks (DBN) have shown impressive improvements over Gaussian mixture models for automatic speech recognition. In this work we use DBNs for audio-visual speech recognition; in particular, we use deep learning from audio and visual features for noise robust speech recognition. We test two methods for using DBNs in a multimodal setting: a conventional decision fusion method that combines scores from single-modality DBNs, and a novel feature fusion method that operates on mid-level features learned by the single-modality DBNs. On a continuously spoken digit recognition task, our experiments show that these methods can reduce word error rate by as much as 21 relative over a baseline multi-stream audio-visual GMM HMM system.", "Audio-visual speech recognition (AVSR) system is thought to be one of the most promising solutions for reliable speech recognition, particularly when the audio is corrupted by noise. However, cautious selection of sensory features is crucial for attaining high recognition performance. In the machine-learning community, deep learning approaches have recently attracted increasing attention because deep neural networks can effectively extract robust latent features that enable various recognition algorithms to demonstrate revolutionary generalization capabilities under diverse application conditions. This study introduces a connectionist-hidden Markov model (HMM) system for noise-robust AVSR. First, a deep denoising autoencoder is utilized for acquiring noise-robust audio features. By preparing the training data for the network with pairs of consecutive multiple steps of deteriorated audio features and the corresponding clean features, the network is trained to output denoised audio features from the corresponding features deteriorated by noise. Second, a convolutional neural network (CNN) is utilized to extract visual features from raw mouth area images. By preparing the training data for the CNN as pairs of raw images and the corresponding phoneme label outputs, the network is trained to predict phoneme labels from the corresponding mouth area input images. Finally, a multi-stream HMM (MSHMM) is applied for integrating the acquired audio and visual HMMs independently trained with the respective features. By comparing the cases when normal and denoised mel-frequency cepstral coefficients (MFCCs) are utilized as audio features to the HMM, our unimodal isolated word recognition results demonstrate that approximately 65 word recognition rate gain is attained with denoised MFCCs under 10 dB signal-to-noise-ratio (SNR) for the audio signal input. Moreover, our multimodal isolated word recognition results utilizing MSHMM with denoised MFCCs and acquired visual features demonstrate that an additional word recognition rate gain is attained for the SNR conditions below 10 dB.", "In this paper, we present methods in deep multimodal learning for fusing speech and visual modalities for Audio-Visual Automatic Speech Recognition (AV-ASR). First, we study an approach where uni-modal deep networks are trained separately and their final hidden layers fused to obtain a joint feature space in which another deep network is built. While the audio network alone achieves a phone error rate (PER) of @math under clean condition on the IBM large vocabulary audio-visual studio dataset, this fusion model achieves a PER of @math demonstrating the tremendous value of the visual channel in phone classification even in audio with high signal to noise ratio. Second, we present a new deep network architecture that uses a bilinear softmax layer to account for class specific correlations between modalities. We show that combining the posteriors from the bilinear networks with those from the fused model mentioned above results in a further significant phone error rate reduction, yielding a final PER of @math ." ] }
1611.03214
2559813832
Convolutional neural networks excel in image recognition tasks, but this comes at the cost of high computational and memory complexity. To tackle this problem, [1] developed a tensor factorization framework to compress fully-connected layers. In this paper, we focus on compressing convolutional layers. We show that while the direct application of the tensor framework [1] to the 4-dimensional kernel of convolution does compress the layer, we can do better. We reshape the convolutional kernel into a tensor of higher order and factorize it. We combine the proposed approach with the previous work to compress both convolutional and fully-connected layers of a network and achieve 80x network compression rate with 1.1 accuracy drop on the CIFAR-10 dataset.
Another approach is to use tensor or matrix decompositions. CP-decomposition @cite_2 and Kronecker product factorization @cite_23 make it possible to speed up the inference time of convolutions and, as a side effect, compress the network.
{ "cite_N": [ "@cite_23", "@cite_2" ], "mid": [ "2513928851", "2963674932" ], "abstract": [ "Deep convolutional neural networks achieve better than human level visual recognition accuracy, at the cost of high computational complexity. We propose to factorize the convolutional layers to improve their efficiency. In traditional convolutional layers, the 3D convolution can be considered as performing in-channel spatial convolution and linear channel projection simultaneously, leading to highly redundant computation. By unravelling them apart, the proposed layer only involves single in-channel convolution and linear channel projection. When stacking such layers together, we achieves similar accuracy with significantly less computation. Additionally, we propose a topological connection framework between the input channels and output channels that further improves the layer's efficiency. Our experiments demonstrate that the proposed method remarkably outperforms the standard convolutional layer with regard to accuracy complexity ratio. Our model achieves accuracy of GoogLeNet while consuming 3.4 times less computation.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy." ] }
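To see why factorizing a convolutional kernel compresses the layer, a quick parameter count helps. The sketch below compares the weight count of a standard layer with that of a rank-R CP factorization of the 4-D kernel into one factor matrix per mode. The layer sizes and rank are hypothetical, and this is bookkeeping only, not the compression pipeline of the cited works:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolutional layer
    with c_in input and c_out output channels."""
    return c_in * c_out * k * k

def cp_params(c_in, c_out, k, rank):
    """Weights after a rank-R CP factorization of the 4-D kernel
    (c_out, c_in, k, k): one factor matrix per mode, i.e.
    R * (c_out + c_in + k + k) parameters in total."""
    return rank * (c_in + c_out + k + k)

full = conv_params(256, 256, 3)          # 589824 weights
compressed = cp_params(256, 256, 3, 64)  # 64 * 518 = 33152 weights
print(full, compressed, full / compressed)
```

With these (made-up) sizes the factorized layer is roughly 17x smaller; the achievable accuracy at a given rank is of course an empirical question.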
1611.02776
2563153796
We present an accurate and robust method for six degree of freedom image localization. Our method has two key points: 1. automatic large-scale photo synthesis and labeling from a point cloud model, and 2. pose estimation with deep convolutional neural network regression. Our model directly regresses 6-DOF camera poses from images, accurately describing where and how each image was captured. We achieved an accuracy within 1 meter and 1 degree on our outdoor dataset, which covers about 2 acres of our school campus.
Apart from this, some researchers have focused on regression methods for RGB-D images. @cite_2 used a regression forest to infer correspondences between depth-image pixels and points in the scene, which was constructed from RGB-D input images. Their method mitigated the problem of too few 2D-3D matches (inlier correspondences) in point-cloud registration methods, as it allows dense or sparse sampling of points in the depth images. @cite_8 introduced a two-way procedure that integrates the random forest regression into the proposal distribution generation, improving the ability to generalize under occlusion. Their accuracy of 6-DOF pose estimation exceeded state-of-the-art methods on certain public datasets. Methods of this type are usually conducted in small indoor scenes and require depth information for training. In contrast, our method only uses a monocular camera, available on almost every cell phone, to collect the necessary data, and evaluations indicate that it works well on bigger scenes.
{ "cite_N": [ "@cite_8", "@cite_2" ], "mid": [ "2396274844", "1989476314" ], "abstract": [ "This work investigates the problem of 6-Degrees-Of-Freedom (6-DOF) object tracking from RGB-D images, where the object is rigid and a 3D model of the object is known. As in many previous works, we utilize a Particle Filter (PF) framework. In order to have a fast tracker, the key aspect is to design a clever proposal distribution which works reliably even with a small number of particles. To achieve this we build on a recently developed state-of-the-art system for single image 6D pose estimation of known 3D objects, using the concept of so-called 3D object coordinates. The idea is to train a random forest that regresses the 3D object coordinates from the RGB-D image. Our key technical contribution is a two-way procedure to integrate the random forest predictions in the proposal distribution generation. This has many practical advantages, in particular better generalization ability with respect to occlusions, changes in lighting and fast-moving objects. We demonstrate experimentally that we exceed state-of-the-art on a given, public dataset. To raise the bar in terms of fast-moving objects and object occlusions, we also create a new dataset, which will be made publicly available.", "We address the problem of inferring the pose of an RGB-D camera relative to a known 3D scene, given only a single acquired image. Our approach employs a regression forest that is capable of inferring an estimate of each pixel's correspondence to 3D points in the scene's world coordinate frame. The forest uses only simple depth and RGB pixel comparison features, and does not require the computation of feature descriptors. The forest is trained to be capable of predicting correspondences at any pixel, so no interest point detectors are required. The camera pose is inferred using a robust optimization scheme. This starts with an initial set of hypothesized camera poses, constructed by applying the forest at a small fraction of image pixels. Preemptive RANSAC then iterates sampling more pixels at which to evaluate the forest, counting inliers, and refining the hypothesized poses. We evaluate on several varied scenes captured with an RGB-D camera and observe that the proposed technique achieves highly accurate relocalization and substantially out-performs two state of the art baselines." ] }
1611.02776
2563153796
We present an accurate and robust method for six degree of freedom image localization. Our method has two key points: 1. automatic large-scale photo synthesis and labeling from a point cloud model, and 2. pose estimation with deep convolutional neural network regression. Our model directly regresses 6-DOF camera poses from images, accurately describing where and how each image was captured. We achieved an accuracy within 1 meter and 1 degree on our outdoor dataset, which covers about 2 acres of our school campus.
@cite_7 follows @cite_2 and uses a modified GoogLeNet to regress the pose directly. The authors added another layer of size 2048 before the final regression layer, and took advantage of models pre-trained on the ImageNet and Flickr datasets. The transfer learning model accelerated the training for camera relocalization and decreased the number of training samples required. They reached an accuracy of about @math in position and @math in orientation. We follow the "transfer" idea and extend the choice of deep ConvNets to CaffeNet, GoogLeNet, and VGG. We apply layer-wise training and fine-tune with different pre-trained models to reach the best performance. Moreover, we detail our method of photo synthesis and dataset augmentation under different conditions.
{ "cite_N": [ "@cite_7", "@cite_2" ], "mid": [ "2951336016", "1989476314" ], "abstract": [ "We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 6 degree accuracy for large scale outdoor scenes and 0.5m and 10 degree accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show the convnet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples. PoseNet code, dataset and an online demonstration is available on our project webpage, at this http URL", "We address the problem of inferring the pose of an RGB-D camera relative to a known 3D scene, given only a single acquired image. Our approach employs a regression forest that is capable of inferring an estimate of each pixel's correspondence to 3D points in the scene's world coordinate frame. The forest uses only simple depth and RGB pixel comparison features, and does not require the computation of feature descriptors. The forest is trained to be capable of predicting correspondences at any pixel, so no interest point detectors are required. The camera pose is inferred using a robust optimization scheme. This starts with an initial set of hypothesized camera poses, constructed by applying the forest at a small fraction of image pixels. Preemptive RANSAC then iterates sampling more pixels at which to evaluate the forest, counting inliers, and refining the hypothesized poses. We evaluate on several varied scenes captured with an RGB-D camera and observe that the proposed technique achieves highly accurate relocalization and substantially out-performs two state of the art baselines." ] }
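Position and orientation accuracies like those quoted in the related work are conventionally measured as a Euclidean distance and an angle between unit quaternions. Below is a minimal sketch of these two generic error metrics with hypothetical poses; it is an evaluation convention, not code from any of the cited systems:

```python
import math

def position_error(p, q):
    """Euclidean distance between predicted and true camera positions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def orientation_error_deg(q1, q2):
    """Angle in degrees between two unit quaternions (w, x, y, z);
    abs() accounts for the q / -q double cover of rotations."""
    dot = min(1.0, abs(sum(a * b for a, b in zip(q1, q2))))
    return math.degrees(2 * math.acos(dot))

# Hypothetical predicted vs. ground-truth positions (meters).
pred_pos, true_pos = (1.0, 2.0, 0.5), (1.3, 2.4, 0.5)
print(position_error(pred_pos, true_pos))  # 0.5

# A 10-degree rotation about the z-axis vs. the identity rotation.
half = math.radians(10) / 2
q_pred = (math.cos(half), 0.0, 0.0, math.sin(half))
q_true = (1.0, 0.0, 0.0, 0.0)
print(orientation_error_deg(q_pred, q_true))  # ~10.0
```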
1611.03279
2949186219
From a diachronic corpus of Italian, we build consecutive vector spaces in time and use them to compare a term's cosine similarity to itself in different time spans. We assume that a drop in similarity might be related to the emergence of a metaphorical sense at a given time. Similarity-based observations are matched to the actual year when a figurative meaning was documented in a reference dictionary and through manual inspection of corpus occurrences.
The automatic modelling of diachronic shift of meaning has been investigated employing several different techniques. Among these, most recently, Latent Semantic Analysis @cite_6 @cite_9 , topic clustering @cite_10 , and dynamic topic modeling @cite_0 . Vector representations for diachronic shift of meaning have been used by , with a simple co-occurrence matrix of target words and context terms. and experimented both with a bag-of-words approach and a more linguistically motivated representation that also captures the relative position of lexical items in relation to the target word.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_10", "@cite_6" ], "mid": [ "2307020448", "2113283043", "2063804110", "" ], "abstract": [ "Word meanings change over time and an automated procedure for extracting this information from text would be useful for historical exploratory studies, information retrieval or question answering. We present a dynamic Bayesian model of diachronic meaning change, which infers temporal word representations as a set of senses and their prevalence. Unlike previous work, we explicitly model language change as a smooth, gradual process. We experimentally show that this modeling decision is beneficial: our model performs competitively on meaning change detection tasks whilst inducing discernible word senses and their development over time. Application of our model to the SemEval-2015 temporal classification benchmark datasets further reveals that it performs on par with highly optimized task-specific systems.", "Recently, large amounts of historical texts have been digitized and made accessible to the public. Thanks to this, for the first time, it became possible to analyze evolution of language through the use of automatic approaches. In this paper, we show the results of an exploratory analysis aiming to investigate methods for studying and visualizing changes in word meaning over time. In particular, we propose a framework for exploring semantic change at the lexical level, at the contrastive-pair level, and at the sentiment orientation level. We demonstrate several kinds of NLP approaches that altogether give users deeper understanding of word evolution. We use two diachronic corpora that are currently the largest available historical language corpora. Our results indicate that the task is feasible and satisfactory outcomes can be already achieved by using simple approaches.", "In this paper, we propose to model and analyze changes that occur to an entity in terms of changes in the words that co-occur with the entity over time. 
We propose to do an in-depth analysis of how this co-occurrence changes over time, how the change influences the state (semantic, role) of the entity, and how the change may correspond to events occurring in the same period of time. We propose to identify clusters of topics surrounding the entity over time using Topics-Over-Time (TOT) and k-means clustering. We conduct this analysis on Google Books Ngram dataset. We show how clustering words that co-occur with an entity of interest in 5-grams can shed some lights to the nature of change that occurs to the entity and identify the period for which the change occurs. We find that the period identified by our model precisely coincides with events in the same period that correspond to the change that occurs.", "" ] }
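As a concrete illustration of the method summarized in the abstract above, the following sketch compares a term's vector to itself across two consecutive time spans and measures the drop in cosine similarity. The vectors are invented stand-ins, and the two spaces are assumed to be already aligned:

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical embeddings of the same term in two consecutive time spans
# (assumed to live in aligned vector spaces)
v_1990s = np.array([0.9, 0.1, 0.0])
v_2000s = np.array([0.2, 0.8, 0.1])

# a drop in a term's similarity to itself may signal a meaning shift,
# e.g. the emergence of a figurative sense at that point in time
drift = 1.0 - cosine(v_1990s, v_2000s)
print(round(drift, 3))  # → 0.654
```

A large `drift` value would then be matched against the year a figurative sense is first documented in a reference dictionary, as described above.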
1611.03279
2949186219
From a diachronic corpus of Italian, we build consecutive vector spaces in time and use them to compare a term's cosine similarity to itself in different time spans. We assume that a drop in similarity might be related to the emergence of a metaphorical sense at a given time. Similarity-based observations are matched to the actual year when a figurative meaning was documented in a reference dictionary and through manual inspection of corpus occurrences.
Recently, Word Embeddings (, see also ) have been used to investigate diachronic meaning shifts: vectors are usually created independently for each time span and then mapped from one year to another via a transformation matrix, thus leveraging the stability of the relative positions of vectors in different spaces @cite_2 @cite_3 @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_3", "@cite_2" ], "mid": [ "2416513196", "2251769296", "250892164" ], "abstract": [ "Understanding how words change their meanings over time is key to models of language and cultural evolution, but historical data on meaning is scarce, making theories hard to develop and test. Word embeddings show promise as a diachronic tool, but have not been carefully evaluated. We develop a robust methodology for quantifying semantic change by evaluating word embeddings (PPMI, SVD, word2vec) against known historical changes. We then use this methodology to reveal statistical laws of semantic evolution. Using six historical corpora spanning four languages and two centuries, we propose two quantitative laws of semantic change: (i) the law of conformity---the rate of semantic change scales with an inverse power-law of word frequency; (ii) the law of innovation---independent of frequency, words that are more polysemous have higher rates of semantic change.", "In the current fast-paced world, people tend to possess limited knowledge about things from the past. For example, some young users may not know that Walkman played similar function as iPod does nowadays. In this paper, we approach the temporal correspondence problem in which, given an input term (e.g., iPod) and the target time (e.g. 1980s), the task is to find the counterpart of the query that existed in the target time. We propose an approach that transforms word contexts across time based on their neural network representations. We then experimentally demonstrate the effectiveness of our method on the New York Times Annotated Corpus.", "This paper presents a novel approach for automatic detection of semantic change of words based on distributional similarity models. We show that the method obtains good results with respect to a reference ranking produced by human raters. The evaluation also analyzes the performance of frequency-based methods, comparing them to the similarity method proposed." 
] }
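The transformation-matrix mapping between per-year embedding spaces mentioned in the paragraph above is commonly realized as an orthogonal Procrustes problem over a shared vocabulary. A minimal sketch, using random matrices as stand-ins for the two spaces:

```python
import numpy as np

def procrustes_align(X, Y):
    """Find the orthogonal matrix W minimizing ||X W - Y||_F,
    so vectors from space X can be mapped into space Y."""
    # SVD of the cross-covariance matrix gives the closed-form solution
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))                   # embeddings in year t
R = np.linalg.qr(rng.standard_normal((50, 50)))[0]   # hidden rotation between spaces
Y = X @ R                                            # embeddings in year t+1

W = procrustes_align(X, Y)
# after alignment, mapped vectors coincide with the target space
print(np.allclose(X @ W, Y))  # prints True
```

Because `W` is constrained to be orthogonal, the relative positions of vectors within each space are preserved, which is exactly the stability property the mapping leverages.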
1611.02756
2958065528
Finding dense bipartite subgraphs and detecting the relations among them is an important problem for affiliation networks that arise in a range of domains, such as social network analysis, word-document clustering, the science of science, internet advertising, and bioinformatics. However, most dense subgraph discovery algorithms are designed for classic, unipartite graphs. Subsequently, studies on affiliation networks are conducted on the co-occurrence graphs (e.g., co-author and co-purchase) that project the bipartite structure to a unipartite structure by connecting two entities if they share an affiliation. Despite their convenience, co-occurrence networks come at a cost of loss of information and an explosion in graph sizes, which limit the quality and the efficiency of solutions. We study the dense subgraph discovery problem on bipartite graphs. We define a framework of bipartite subgraphs based on the butterfly motif (2,2-biclique) to model the dense regions in a hierarchical structure. We introduce efficient peeling algorithms to find the dense subgraphs and build relations among them. We can identify denser structures compared to the state-of-the-art algorithms on co-occurrence graphs in real-world data. Our analyses on an author-paper network and a user-product network yield interesting subgraphs and hierarchical relations such as the groups of collaborators in the same institution and spammers that give fake ratings.
Literature on the analysis of bipartite graphs has two main thrusts: extending unipartite graph concepts to bipartite graphs, and methods for projecting bipartite graphs to unipartite graphs. Borgatti and Everett @cite_24 redefined centrality and density metrics for bipartite graphs. Robins and Alexander @cite_5 defined clustering coefficients for bipartite networks. Working on the bipartite network, instead of its projection, is also useful for matrix partitioning @cite_36 and clustering @cite_37 algorithms. As for projection methods, Newman introduced the weighted projection for scientific collaboration networks @cite_35 . Everett and Borgatti proposed the use of dual projections @cite_7 , where the idea is to create projections for both sets of nodes and use the resulting one-mode networks for analysis.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_7", "@cite_36", "@cite_24", "@cite_5" ], "mid": [ "2025572017", "2557781680", "2015276481", "2142184646", "2022996068", "2053761479" ], "abstract": [ "", "Driven by the importance of relational aspects of data to decision-making, graph algorithms have been developed, based on simplified pairwise relationships, to solve a variety of problems. However, evidence has shown that hypergraphs—generalizations of graphs with (hyper)edges that connect any number of vertices—can better model complex, non-pairwise relationships in data and lead to better informed decisions. In this work, we compare graph and hypergraph models in the context of spectral clustering. For these problems, we demonstrate that hypergraphs are computationally more efficient and can better model complex, non-pairwise relationships for many datasets.", "Abstract There have been two distinct approaches to two-mode data. The first approach is to project the data to one-mode and then analyze the projected network using standard single-mode techniques, also called the conversion method. The second approach has been to extend methods and concepts to the two-mode case and analyze the network directly with the two modes considered jointly. The direct approach in recent years has been the preferred method since it is assumed that the conversion method loses important structural information. Here we argue that this is not the case, provided both projections are used together in any analysis. We illustrate how this approach works using core periphery, structural equivalence and centrality as examples.", "In this work, we show that the standard graph-partitioning-based decomposition of sparse matrices does not reflect the actual communication volume requirement for parallel matrix-vector multiplication. We propose two computational hypergraph models which avoid this crucial deficiency of the graph model. 
The proposed models reduce the decomposition problem to the well-known hypergraph partitioning problem. The recently proposed successful multilevel framework is exploited to develop a multilevel hypergraph partitioning tool PaToH for the experimental verification of our proposed hypergraph models. Experimental results on a wide range of realistic sparse test matrices confirm the validity of the proposed hypergraph models. In the decomposition of the test matrices, the hypergraph models using PaToH and hMeTiS result in up to 63 percent less communication volume (30 to 38 percent less on the average) than the graph model using MeTiS, while PaToH is only 1.3-2.3 times slower than MeTiS on the average.", "Network analysis is distinguished from traditional social science by the dyadic nature of the standard data set. Whereas in traditional social science we study monadic attributes of individuals, in network analysis we study dyadic attributes of pairs of individuals. These dyadic attributes (e.g. social relations) may be represented in matrix form by a square 1-mode matrix. In contrast, the data in traditional social science are represented as 2-mode matrices. However, network analysis is not completely divorced from traditional social science, and often has occasion to collect and analyze 2-mode matrices. Furthermore, some of the methods developed in network analysis have uses in analysing non-network data. This paper presents and discusses ways of applying and interpreting traditional network analytic techniques to 2-mode data, as well as developing new techniques. Three areas are covered in detail: displaying 2-mode data as networks, detecting clusters and measuring centrality.", "We describe a methodology to examine bipartite relational data structures as exemplified in networks of corporate interlocking. 
These structures can be represented as bipartite graphs of directors and companies, but direct comparison of empirical datasets is often problematic because graphs have different numbers of nodes and different densities. We compare empirical bipartite graphs to simulated random graph distributions conditional on constraints implicit in the observed datasets. We examine bipartite graphs directly, rather than simply converting them to two 1-mode graphs, allowing investigation of bipartite statistics important to connection redundancy and bipartite connectivity. We introduce a new bipartite clustering coefficient that measures tendencies for localized bipartite cycles. This coefficient can be interpreted as an indicator of inter-company and inter-director closeness; but high levels of bipartite clustering have a cost for long range connectivity. We also investigate degree distributions, path lengths, and counts of localized subgraphs. Using this new approach, we compare global structural properties of US and Australian interlocking company directors. By comparing observed statistics against those from the simulations, we assess how the observed graphs are structured, and make comparisons between them relative to the simulated graph distributions. We conclude that the two networks share many similarities and some differences. Notably, both structures tend to be influenced by the clustering of directors on boards, more than by the accumulation of board seats by individual directors; that directors who shared multiple board memberships (multiple interlocks) are an important feature of both infrastructures, detracting from global connectivity (but more so in the Australian case); and that company structural power may be relatively more diffuse in the US structure than in Australia." ] }
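The one-mode projection with Newman's weighting discussed in the paragraph above can be sketched on a toy author-paper network. The data is invented; under Newman's scheme each k-author paper credits 1/(k-1) to every co-author pair, so large papers do not dominate the projection:

```python
from itertools import combinations
from collections import defaultdict

# toy affiliation network: paper -> authors
papers = {
    "p1": ["alice", "bob", "carol"],
    "p2": ["alice", "bob"],
    "p3": ["dave"],  # single-author paper contributes no edges
}

# one-mode co-author projection: connect two authors if they share a paper,
# with Newman's weighting of 1/(k-1) per pair for a k-author paper
weights = defaultdict(float)
for authors in papers.values():
    k = len(authors)
    for u, v in combinations(sorted(authors), 2):
        weights[(u, v)] += 1.0 / (k - 1)

print(dict(weights))
# → {('alice', 'bob'): 1.5, ('alice', 'carol'): 0.5, ('bob', 'carol'): 0.5}
```

Note how even this tiny example loses information: the projection cannot tell whether ('alice', 'carol') and ('bob', 'carol') stem from one shared paper or two, which is the loss the abstract above argues against.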
1611.02756
2958065528
Finding dense bipartite subgraphs and detecting the relations among them is an important problem for affiliation networks that arise in a range of domains, such as social network analysis, word-document clustering, the science of science, internet advertising, and bioinformatics. However, most dense subgraph discovery algorithms are designed for classic, unipartite graphs. Subsequently, studies on affiliation networks are conducted on the co-occurrence graphs (e.g., co-author and co-purchase) that project the bipartite structure to a unipartite structure by connecting two entities if they share an affiliation. Despite their convenience, co-occurrence networks come at a cost of loss of information and an explosion in graph sizes, which limit the quality and the efficiency of solutions. We study the dense subgraph discovery problem on bipartite graphs. We define a framework of bipartite subgraphs based on the butterfly motif (2,2-biclique) to model the dense regions in a hierarchical structure. We introduce efficient peeling algorithms to find the dense subgraphs and build relations among them. We can identify denser structures compared to the state-of-the-art algorithms on co-occurrence graphs in real-world data. Our analyses on an author-paper network and a user-product network yield interesting subgraphs and hierarchical relations such as the groups of collaborators in the same institution and spammers that give fake ratings.
: Borgatti and Everett proposed the biclique, a complete subgraph between two sets of nodes, to model dense subgraphs @cite_24 . Kumar et al. used bicliques of various sizes to analyze web graphs @cite_34 . Enumerating all the maximal bicliques and quasi-cliques is studied by Sim et al. @cite_15 , and Mukherjee and Tirthapura @cite_20 . However, the biclique definition is regarded as too strict, as it does not tolerate even a single missing edge, and bicliques are expensive to compute. More recently, Tsourakakis et al. @cite_22 used sampling to find the @math -biclique densest subgraph in bipartite networks. The main difference of our work is that we do not focus on finding only a single subgraph that is perfectly dense. Instead, we aim to find many dense subgraphs with hierarchical relations among them.
{ "cite_N": [ "@cite_22", "@cite_24", "@cite_15", "@cite_34", "@cite_20" ], "mid": [ "2054560566", "2022996068", "2053764313", "2151626491", "2764172504" ], "abstract": [ "Extracting dense subgraphs from large graphs is a key primitive in a variety of graph mining applications, ranging from mining social networks and the Web graph to bioinformatics [41]. In this paper we focus on a family of poly-time solvable formulations, known as the k-clique densest subgraph problem (k-Clique-DSP) [57]. When k=2, the problem becomes the well-known densest subgraph problem (DSP) [22, 31, 33, 39]. Our main contribution is a sampling scheme that gives densest subgraph sparsifier, yielding a randomized algorithm that produces high-quality approximations while providing significant speedups and improved space complexity. We also extend this family of formulations to bipartite graphs by introducing the (p,q)-biclique densest subgraph problem ((p,q)-Biclique-DSP), and devise an exact algorithm that can treat both clique and biclique densities in a unified way. As an example of performance, our sparsifying algorithm extracts the 5-clique densest subgraph --which is a large-near clique on 62 vertices-- from a large collaboration network. Our algorithm achieves 100 accuracy over five runs, while achieving an average speedup factor of over 10,000. Specifically, we reduce the running time from ∼2 107 seconds to an average running time of 0.15 seconds. We also use our methods to study how the k-clique densest subgraphs change as a function of time in time-evolving networks for various small values of k. We observe significant deviations between the experimental findings on real-world networks and stochastic Kronecker graphs, a random graph model that mimics real-world networks in certain aspects. 
We believe that our work is a significant advance in routines with rigorous theoretical guarantees for scalable extraction of large near-cliques from networks.", "Network analysis is distinguished from traditional social science by the dyadic nature of the standard data set. Whereas in traditional social science we study monadic attributes of individuals, in network analysis we study dyadic attributes of pairs of individuals. These dyadic attributes (e.g. social relations) may be represented in matrix form by a square 1-mode matrix. In contrast, the data in traditional social science are represented as 2-mode matrices. However, network analysis is not completely divorced from traditional social science, and often has occasion to collect and analyze 2-mode matrices. Furthermore, some of the methods developed in network analysis have uses in analysing non-network data. This paper presents and discusses ways of applying and interpreting traditional network analytic techniques to 2-mode data, as well as developing new techniques. Three areas are covered in detail: displaying 2-mode data as networks, detecting clusters and measuring centrality.", "Several real-world applications require mining of bicliques, as they represent correlated pairs of data clusters. However, the mining quality is adversely affected by missing and noisy data. Moreover, some applications only require strong interactions between data members of the pairs, but bicliques are pairs that display complete interactions. We address these two limitations by proposing maximal quasi-bicliques. Maximal quasi-bicliques tolerate erroneous and missing data, and also relax the interactions between the data members of their pairs. Besides, maximal quasi-bicliques do not suffer from skewed distribution of missing edges that prior quasi-bicliques have. We develop an algorithm MQBminer, which mines the complete set of maximal quasi-bicliques from either bipartite or non-bipartite graphs. 
We demonstrate the versatility and effectiveness of maximal quasi-bicliques to discover highly correlated pairs of data in two diverse real-world datasets. First, we propose to solve a novel financial stocks analysis problem using maximal quasi-bicliques to co-cluster stocks and financial ratios. Results show that the stocks in our co-clusters usually have significant correlations in their price performance. Second, we use maximal quasi-bicliques on a mining protein network problem and we show that pairs of protein groups mined by maximal quasi-bicliques are more significant than those mined by maximal bicliques. Copyright © 2009 Wiley Periodicals, Inc., A Wiley Company", "The Web harbors a large number of communities — groups of content-creators sharing a common interest — each of which manifests itself as a set of interlinked Web pages. Newgroups and commercial Web directories together contain of the order of 20,000 such communities; our particular interest here is on emerging communities — those that have little or no representation in such fora. The subject of this paper is the systematic enumeration of over 100,000 such emerging communities from a Web crawl: we call our process trawling. We motivate a graph-theoretic approach to locating such communities, and describe the algorithms, and the algorithmic engineering necessary to find structures that subscribe to this notion, the challenges in handling such a huge data set, and the results of our experiment. © 1999 Published by Elsevier Science B.V. All rights reserved.", "We consider the enumeration of maximal bipartite cliques (bicliques) from a large graph, a task central to many practical data mining problems in social network analysis and bioinformatics. We present novel parallel algorithms for the MapReduce platform, and an experimental evaluation using Hadoop MapReduce. 
Our algorithm is based on clustering the input graph into smaller sized subgraphs, followed by processing different subgraphs in parallel. Our algorithm uses two ideas that enable it to scale to large graphs: (1) the redundancy in work between different subgraph explorations is minimized through a careful pruning of the search space, and (2) the load on different reducers is balanced through the use of an appropriate total order among the vertices. Our evaluation shows that the algorithm scales to large graphs with millions of edges and tens of millions of maximal bicliques. To our knowledge, this is the first work on maximal biclique enumeration for graphs of this scale." ] }
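A minimal sketch of counting the butterfly motif (the 2,2-biclique) that the framework in the abstract above builds on. The bipartite graph is a toy example, and the quadratic pairwise loop is chosen for clarity, not efficiency:

```python
from itertools import combinations

# toy bipartite graph: left vertex -> set of right neighbors
adj = {
    "a": {1, 2, 3},
    "b": {1, 2},
    "c": {2, 3},
}

def count_butterflies(adj):
    """Count (2,2)-bicliques: for each pair of left vertices with c
    common right neighbors, every choice of 2 of those neighbors
    forms one butterfly."""
    total = 0
    for u, v in combinations(adj, 2):
        c = len(adj[u] & adj[v])
        total += c * (c - 1) // 2  # choose 2 of the c common neighbors
    return total

print(count_butterflies(adj))  # → 2
```

Here (a, b) share {1, 2} and (a, c) share {2, 3}, giving one butterfly each; peeling algorithms like those described above repeatedly remove the vertex or edge participating in the fewest such butterflies.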
1611.03006
2952060357
An increasing number of individuals are turning to Direct-To-Consumer (DTC) genetic testing to learn about their predisposition to diseases, traits, and or ancestry. DTC companies like 23andme and this http URL have started to offer popular and affordable ancestry and genealogy tests, with services allowing users to find unknown relatives and long-distant cousins. Naturally, access and possible dissemination of genetic data prompts serious privacy concerns, thus motivating the need to design efficient primitives supporting private genetic tests. In this paper, we present an effective protocol for privacy-preserving genetic relatedness test (PPGRT), enabling a cloud server to run relatedness tests on input an encrypted genetic database and a test facility's encrypted genetic sample. We reduce the test to a data matching problem and perform it, privately, using searchable encryption. Finally, a performance evaluation of hamming distance based PP-GRT attests to the practicality of our proposals.
Two recent works @cite_21 @cite_14 open a new perspective for privacy-friendly GRT by using fuzzy encryption techniques. In these systems, each individual first compresses their haplotype into a 0/1 string, called a private genome sketch, and then "encrypts" the sketch using a random row of a given error-correcting code matrix. One may detect whether user @math is a relative by downloading @math 's encrypted sketch and then "decrypting" it with one's own private genome sketch. If the decryption closely matches a row of the matrix, the haplotypes of the two individuals are approximately matched. However, all the aforementioned approaches do not scale well in practice, as they suffer from an important limitation: the client is burdened with heavy computation and communication overhead, since it has to download all related "encrypted" results from the server and perform a huge number of decryptions to identify relatedness. How to design privacy-preserving GRT that scales on both the server and the client side constitutes the main motivation for our work.
{ "cite_N": [ "@cite_14", "@cite_21" ], "mid": [ "2105420280", "2104064051" ], "abstract": [ "Aspects of the invention include determining relatedness between genomes without compromising privacy. In one aspect, secure genome sketches of genomes can be made publicly available without compromising privacy. These are compared to privately held (unsecured) genome sketches to determine relatedness.", "Motivation: High-throughput sequencing technologies have impacted many areas of genetic research. One such area is the identification of relatives from genetic data. The standard approach for the identification of genetic relatives collects the genomic data of all individuals and stores it in a database. Then, each pair of individuals is compared to detect the set of genetic relatives, and the matched individuals are informed. The main drawback of this approach is the requirement of sharing your genetic data with a trusted third party to perform the relatedness test. Results: In this work, we propose a secure protocol to detect the genetic relatives from sequencing data while not exposing any information about their genomes. We assume that individuals have access to their genome sequences but do not want to share their genomes with anyone else. Unlike previous approaches, our approach uses both common and rare variants which provide the ability to detect much more distant relationships securely. We use a simulated data generated from the 1000 genomes data and illustrate that we can easily detect up to fifth degree cousins which was not possible using the existing methods. We also show in the 1000 genomes data with cryptic relationships that our method can detect these individuals. Availability: The software is freely available for download at http: genetics.cs.ucla.edu crypto . Contact: ude.alcu.sc@zomrohf or ude.alcu.sc@niksee Supplementary information: Supplementary data are available at Bioinformatics online" ] }
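The plaintext core of the Hamming-distance matching that the PPGRT protocol above performs under encryption can be sketched as follows; the genome sketches and the threshold are hypothetical stand-ins for real haplotype encodings:

```python
def hamming(a: str, b: str) -> int:
    # number of positions at which two equal-length 0/1 strings differ
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# hypothetical 0/1 genome sketches and a relatedness threshold
sample = "1011010010"
database = {
    "user1": "1011010110",  # differs from the sample in 1 position
    "user2": "0100101101",  # differs in all 10 positions
}
threshold = 2  # maximum Hamming distance to declare relatedness

relatives = [uid for uid, sketch in database.items()
             if hamming(sample, sketch) <= threshold]
print(relatives)  # → ['user1']
```

The contribution of the protocol above is performing exactly this thresholded comparison on encrypted sketches at the server, so that neither the database nor the test sample is revealed in the clear.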
1611.02767
2564477780
Neural networks have shown to be a practical way of building a very complex mapping between a pre-specified input space and output space. For example, a convolutional neural network (CNN) mapping an image into one of a thousand object labels is approaching human performance in this particular task. However the mapping (neural network) does not automatically lend itself to other forms of queries, for example, to detect/reconstruct object instances, to enforce top-down signal on ambiguous inputs, or to recover object instances from occlusion. One way to address these queries is a backward pass through the network that fuses top-down and bottom-up information. In this paper, we show a way of building such a backward pass by defining a generative model of the neural network's activations. Approximate inference of the model would naturally take the form of a backward pass through the CNN layers, and it addresses the aforementioned queries in a unified framework.
@cite_14 showed that a CNN can be trained to generate 3D object instances and properly relate their variations in the hidden space. However, it is not clear how to perform inference given a cluttered scene. The main difference between our approach and @cite_14 is that we constrain the top-down activations to be similar to bottom-up activations from a pre-trained CNN, making it possible to perform efficient inference by combining bottom-up and top-down information at each layer. The hierarchical mixture model formulation also allows our model to automatically discover attributes without pre-specifying them as in @cite_14 .
{ "cite_N": [ "@cite_14" ], "mid": [ "1893585201" ], "abstract": [ "We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task." ] }
1611.02767
2564477780
Neural networks have shown to be a practical way of building a very complex mapping between a pre-specified input space and output space. For example, a convolutional neural network (CNN) mapping an image into one of a thousand object labels is approaching human performance in this particular task. However the mapping (neural network) does not automatically lend itself to other forms of queries, for example, to detect/reconstruct object instances, to enforce top-down signal on ambiguous inputs, or to recover object instances from occlusion. One way to address these queries is a backward pass through the network that fuses top-down and bottom-up information. In this paper, we show a way of building such a backward pass by defining a generative model of the neural network's activations. Approximate inference of the model would naturally take the form of a backward pass through the CNN layers, and it addresses the aforementioned queries in a unified framework.
Our approach provides an alternative view of the "attention" mechanism in a visual hierarchy---we attend to different objects in a scene by selecting modes in the posterior distribution. This is different from end-to-end trainable RNN-based approaches @cite_12 @cite_18 @cite_5 @cite_8 . The main difference is that our approach is more flexible in incorporating additional top-down signals, whereas the RNN-based approaches provide no easy access to their decision-making process. In other words, it is not clear how to handle top-down queries such as , , or without being re-trained end-to-end in each case. We will discuss this in more detail in Sec. .
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_12", "@cite_8" ], "mid": [ "2951527505", "1484210532", "1850742715", "2327562811" ], "abstract": [ "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.", "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. 
DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.", "We present a framework for efficient inference in structured image models that explicitly reason about objects. We achieve this by performing probabilistic inference using a recurrent neural network that attends to scene elements and processes them one at a time. Crucially, the model itself learns to choose the appropriate number of inference steps. We use this scheme to learn to perform inference in partially specified 2D models (variable-sized variational auto-encoders) and fully specified 3D models (probabilistic renderers). We show that such models learn to identify multiple objects - counting, locating and classifying the elements of a scene - without any supervision, e.g., decomposing 3D images with various numbers of objects in a single forward pass of a neural network. We further show that the networks produce accurate inferences when compared to supervised counterparts, and that their structure leads to improved generalization." ] }
1611.02853
2564904618
An effective packet processing abstraction that leverages software or hardware acceleration techniques can simplify the implementation of high-performance virtual network functions. In this paper, we explore the suitability of SDN switches' stateful forwarding abstractions to model accelerated functions in both software and hardware accelerators, such as optimized software switches and FPGA-based NICs. In particular, we select an Extended Finite State Machine abstraction and demonstrate its suitability by implementing the Linux's iptables interface. By doing so, we provide the acceleration of functions such as stateful firewalls, load balancers and dynamic NATs. We find that supporting a flow-level programming consistency model is an important feature of a programming abstraction in this context. Furthermore, we demonstrate that such a model simplifies the scaling of the system when implemented in software, enabling efficient multi-core processing without harming state consistency.
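The EFSM-based modeling of iptables-like functions described above can be sketched in miniature: a per-flow state table drives a stateful firewall that forwards outbound traffic and admits inbound packets only for established flows. The class and method names are illustrative assumptions, not the paper's actual abstraction:

```python
# Hedged sketch: an Extended Finite State Machine (EFSM) view of a tiny
# stateful firewall, in the spirit of the abstraction described above.
# FlowKey, StatefulFirewall, and the state names are illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    src: str
    dst: str

class StatefulFirewall:
    """Allow outbound flows; allow inbound packets only for established flows."""
    def __init__(self):
        self.state = {}  # per-flow EFSM state: FlowKey -> "ESTABLISHED"

    def process(self, src, dst, from_inside):
        # Normalize the key so a flow and its reverse share one EFSM instance.
        key = FlowKey(src, dst) if from_inside else FlowKey(dst, src)
        if from_inside:
            self.state[key] = "ESTABLISHED"  # transition on first outbound packet
            return "FORWARD"
        # Inbound: forward only if the reverse flow is already established.
        return "FORWARD" if self.state.get(key) == "ESTABLISHED" else "DROP"
```

Keeping all transitions for one flow in a single EFSM instance is also what makes the flow-level consistency model mentioned above natural to enforce.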
Click @cite_8 adopts a model in which arbitrary functional blocks, called elements, can be composed into graphs to implement a network function. ClickNP @cite_43 uses Click's abstraction but adds the possibility of implementing some elements as hardware functions to be run on, e.g., a smart NIC. However, in SoftFlow, Click, and ClickNP, implementing actions or elements is still a complex task, which must be performed whenever no pre-implemented module meets the developer's need. To address this issue, NetBricks @cite_5 defines as its abstraction a set of finer-grained primitives that, combined, can describe a large number of software network functions. The implementations of these primitives are optimized; therefore, functions expressed with NetBricks' abstraction achieve high performance. In this sense, our approach is similar, since we adopt a set of fine-grained MAT-based OPP functions to describe network functions. Still, the approaches differ in flexibility and hardware support: NetBricks is more flexible and expressive but targets pure software functions, while our OPP-based solution can express only network functions that deal with packet headers, but provides full hardware support.
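Click's element-graph model described above can be sketched as a chain of small processing elements, each receiving packets via a push interface. The element names and the dict-based packet representation are illustrative assumptions, not Click's actual C++ API:

```python
# Hedged sketch: Click-style composition of packet-processing "elements"
# into a graph. The push() interface and element names are illustrative.

class Element:
    def __init__(self):
        self.next = None
    def connect(self, nxt):
        self.next = nxt
        return nxt            # returning nxt allows chained connect() calls
    def push(self, pkt):
        if self.next:
            self.next.push(pkt)

class Classifier(Element):
    """Pass only IPv4 packets (ethertype 0x0800); drop everything else."""
    def push(self, pkt):
        if pkt.get("ethertype") == 0x0800:
            super().push(pkt)

class DecTTL(Element):
    """Decrement the TTL, dropping packets whose TTL expires."""
    def push(self, pkt):
        pkt["ttl"] -= 1
        if pkt["ttl"] > 0:
            super().push(pkt)

class Sink(Element):
    """Collect forwarded packets."""
    def __init__(self):
        super().__init__()
        self.out = []
    def push(self, pkt):
        self.out.append(pkt)

# Build the graph: Classifier -> DecTTL -> Sink, then push two packets.
src = Classifier()
sink = Sink()
src.connect(DecTTL()).connect(sink)
src.push({"ethertype": 0x0800, "ttl": 2})   # IPv4: forwarded with decremented TTL
src.push({"ethertype": 0x86DD, "ttl": 5})   # non-IPv4: dropped by the Classifier
```

The complexity noted above arises exactly when a needed element (here, anything beyond these toy ones) does not exist and must be written from scratch.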
{ "cite_N": [ "@cite_43", "@cite_5", "@cite_8" ], "mid": [ "", "2579461576", "2156874421" ], "abstract": [ "", "The move from hardware middleboxes to software network functions, as advocated by NFV, has proven more challenging than expected. Developing new NFs remains a tedious process, requiring that developers repeatedly rediscover and reapply the same set of optimizations, while current techniques for providing isolation between NFs (using VMs or containers) incur high performance overheads. In this paper we describe NetBricks, a new NFV framework that tackles both these problems. For building NFs we take inspiration from modern data analytics frameworks (e.g., Spark and Dryad) and build a small set of customizable network processing elements. We also embrace type checking and safe runtimes to provide isolation in software, rather than rely on hardware isolation. NetBricks provides the same memory isolation as containers and VMs, without incurring the same performance penalties. To improve I O efficiency, we introduce a novel technique called zero-copy software isolation.", "Click is a new software architecture for building flexible and configurable routers. A Click router is assembled from packet processing modules called elements. Individual elements implement simple router functions like packet classification, queueing, scheduling, and interfacing with network devices. Complete configurations are built by connecting elements into a graph; packets flow along the graph's edges. Several features make individual elements more powerful and complex configurations easier to write, including pull processing, which models packet flow driven by transmitting interfaces, and flow-based router context, which helps an element locate other interesting elements.We demonstrate several working configurations, including an IP router and an Ethernet bridge. 
These configurations are modular---the IP router has 16 elements on the forwarding path---and easy to extend by adding additional elements, which we demonstrate with augmented configurations. On commodity PC hardware running Linux, the Click IP router can forward 64-byte packets at 73,000 packets per second, just 10% slower than Linux alone." ] }
1611.02853
2564904618
An effective packet processing abstraction that leverages software or hardware acceleration techniques can simplify the implementation of high-performance virtual network functions. In this paper, we explore the suitability of SDN switches' stateful forwarding abstractions to model accelerated functions in both software and hardware accelerators, such as optimized software switches and FPGA-based NICs. In particular, we select an Extended Finite State Machine abstraction and demonstrate its suitability by implementing the Linux's iptables interface. By doing so, we provide the acceleration of functions such as stateful firewalls, load balancers and dynamic NATs. We find that supporting a flow-level programming consistency model is an important feature of a programming abstraction in this context. Furthermore, we demonstrate that such a model simplifies the scaling of the system when implemented in software, enabling efficient multi-core processing without harming state consistency.
FlexNIC @cite_48 envisions support for RMT in future NICs, providing a way to execute RMT-based processing while exchanging packets between the NIC and the host's memory. We consider this work orthogonal to our contribution, since the NIC could use an OPP-like processing model instead of a P4-based one.
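The RMT-style processing FlexNIC envisions boils down to match-action stages: match on header fields, then apply an action such as steering the packet to a host receive queue. The sketch below is a minimal illustration; the class name, field names, and queue-steering actions are assumptions, not FlexNIC's interface:

```python
# Hedged sketch: one RMT-style match-action stage of the kind FlexNIC
# envisions running on the NIC. Field names and actions are illustrative.

class MatchActionTable:
    def __init__(self, default_action):
        self.rules = []                    # ordered (match_fields, action) pairs
        self.default_action = default_action

    def add_rule(self, match_fields, action):
        self.rules.append((match_fields, action))

    def apply(self, pkt):
        # First matching rule wins; fall back to the default action.
        for match_fields, action in self.rules:
            if all(pkt.get(k) == v for k, v in match_fields.items()):
                return action(pkt)
        return self.default_action(pkt)

# Steer DNS traffic (UDP port 53) to receive queue 1, everything else to queue 0.
table = MatchActionTable(default_action=lambda pkt: 0)
table.add_rule({"proto": "udp", "dport": 53}, lambda pkt: 1)
```

An OPP-like model would extend each rule with per-flow state, which is exactly the substitution suggested above.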
{ "cite_N": [ "@cite_48" ], "mid": [ "2225993331" ], "abstract": [ "We propose FlexNIC, a flexible network DMA interface that can be used by operating systems and applications alike to reduce packet processing overheads. The recent surge of network I/O performance has put enormous pressure on memory and software I/O processing subsystems. Yet even at high speeds, flexibility in packet handling is still important for security, performance isolation, and virtualization. Thus, our proposal moves some of the packet processing traditionally done in software to the NIC DMA controller, where it can be done flexibly and at high speed. We show how FlexNIC can benefit widely used data center server applications, such as key-value stores." ] }
1611.02853
2564904618
An effective packet processing abstraction that leverages software or hardware acceleration techniques can simplify the implementation of high-performance virtual network functions. In this paper, we explore the suitability of SDN switches' stateful forwarding abstractions to model accelerated functions in both software and hardware accelerators, such as optimized software switches and FPGA-based NICs. In particular, we select an Extended Finite State Machine abstraction and demonstrate its suitability by implementing the Linux's iptables interface. By doing so, we provide the acceleration of functions such as stateful firewalls, load balancers and dynamic NATs. We find that supporting a flow-level programming consistency model is an important feature of a programming abstraction in this context. Furthermore, we demonstrate that such a model simplifies the scaling of the system when implemented in software, enabling efficient multi-core processing without harming state consistency.
An extensive comparison of software-accelerated capturing techniques can be found in @cite_0 @cite_42 @cite_1 . Relevant software-accelerated engines are PF_RING @cite_9 , PF_RING ZC (Zero Copy) @cite_52 , Netmap @cite_21 , DPDK @cite_25 and PFQ @cite_32 . PF_RING ZC, Netmap and DPDK bypass the operating system by memory-mapping the NICs' ring descriptors into user space, allowing even a single CPU to receive 64-byte packets at full 10 Gbps line speed. In addition, DPDK provides a set of libraries for fast packet processing on multicore Linux architectures. Netmap and DPDK have been successfully used to accelerate software switches, as in the case of the VALE switch @cite_24 and mSwitch @cite_49 (Netmap), and CuckooSwitch @cite_60 and DPDK vSwitch @cite_12 (DPDK). Netmap was also used to accelerate packet forwarding in Click @cite_6 . PFQ, instead, relies on vanilla device drivers and leverages different levels of parallelism to accelerate packet I/O. In addition, PFQ is equipped with a native functional language to program in-kernel early-stage packet processing @cite_3 .
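The ring-descriptor model that these kernel-bypass engines share can be sketched with a user-space analogue: the NIC fills descriptor slots, and the application polls a batch of packets with no per-packet system call. A real engine memory-maps the NIC's rings; here a plain list stands in for the mapped memory, and all names are illustrative assumptions:

```python
# Hedged sketch: the shared ring-descriptor model behind kernel-bypass
# engines such as netmap and DPDK, reduced to a user-space analogue.
# RxRing and its methods are illustrative, not any engine's real API.

class RxRing:
    def __init__(self, size):
        self.slots = [None] * size
        self.head = 0          # next slot the "NIC" fills
        self.tail = 0          # next slot the application consumes
        self.size = size

    def nic_fill(self, pkt):
        """Simulate the NIC writing a packet into the next free descriptor."""
        if (self.head + 1) % self.size == self.tail:
            return False       # ring full: the packet is dropped
        self.slots[self.head] = pkt
        self.head = (self.head + 1) % self.size
        return True

    def poll_batch(self, budget):
        """Consume up to `budget` packets without a syscall per packet."""
        batch = []
        while self.tail != self.head and len(batch) < budget:
            batch.append(self.slots[self.tail])
            self.tail = (self.tail + 1) % self.size
        return batch
```

Batched polling over a mapped ring is what lets a single core sustain line rate: the per-packet cost reduces to a couple of index updates.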
{ "cite_N": [ "@cite_60", "@cite_9", "@cite_42", "@cite_1", "@cite_52", "@cite_21", "@cite_32", "@cite_0", "@cite_24", "@cite_6", "@cite_3", "@cite_49", "@cite_25", "@cite_12" ], "mid": [ "1989728020", "", "1496406348", "2044119008", "", "1633306738", "", "2106061258", "2146314716", "2032228686", "2104367435", "2013342629", "", "" ], "abstract": [ "Several emerging network trends and new architectural ideas are placing increasing demand on forwarding table sizes. From massive-scale datacenter networks running millions of virtual machines to flow-based software-defined networking, many intriguing design options require FIBs that can scale well beyond the thousands or tens of thousands possible using today's commodity switching chips. This paper presents CuckooSwitch, a software-based Ethernet switch design built around a memory-efficient, high-performance, and highly-concurrent hash table for compact and fast FIB lookup. We show that CuckooSwitch can process 92.22 million minimum-sized packets per second on a commodity server equipped with eight 10 Gbps Ethernet interfaces while maintaining a forwarding table of one billion forwarding entries. This rate is the maximum packets per second achievable across the underlying hardware's PCI buses.", "", "Users' demands have dramatically increased due to widespread availability of broadband access and new Internet avenues for accessing, sharing and working with information. In response, operators have upgraded their infrastructures to survive in a market as mature as the current Internet. This has meant that most network processing tasks (e.g., routing, anomaly detection, monitoring) must deal with challenging rates, challenges traditionally accomplished by specialized hardware—e.g., FPGA. However, such approaches lack either flexibility or extensibility—or both. 
As an alternative, the research community has proposed the utilization of commodity hardware providing flexible and extensible cost-aware solutions, thus entailing lower operational and capital expenditure investments. In this scenario, we explain how the arrival of commodity packet engines has revolutionized the development of traffic processing tasks. Thanks to the optimization of both NIC drivers and standard network stacks and by exploiting concepts such as parallelism and memory affinity, impressive packet capture rates can be achieved in hardware valued at a few thousand dollars. This tutorial explains the foundation of this new paradigm, i.e., the knowledge required to capture packets at multi-Gb/s rates on commodity hardware. Furthermore, we thoroughly explain and empirically compare current proposals, and importantly explain how to apply such proposals with a number of code examples. Finally, we review successful use cases of applications developed over these novel engines.", "Network stacks currently implemented in operating systems can no longer cope with the packet rates offered by 10 Gbit Ethernet. Thus, frameworks were developed claiming to offer a faster alternative for this demand. These frameworks enable arbitrary packet processing systems to be built from commodity hardware handling a traffic rate of several 10 Gbit interfaces, entering a domain previously only available to custom-built hardware. In this paper, we survey various frameworks for high-performance packet I/O. We analyze the performance of the most prominent frameworks based on representative measurements in packet forwarding scenarios. Therefore, we quantify the effects of caching and look at the tradeoff between throughput and latency. 
Moreover, we introduce a model to estimate and assess the performance of these packet processing frameworks.", "", "", "", "Capturing network traffic with commodity hardware has become a feasible task: Advances in hardware as well as software have boosted off-the-shelf hardware to performance levels that some years ago were the domain of expensive special-purpose hardware. However, the capturing hardware still needs to be driven by a well-performing software stack in order to minimise or avoid packet loss. Improving the capturing stack of Linux and FreeBSD has been an extensively covered research topic in the past years. Although the majority of the proposed enhancements have been backed by evaluations, these have mostly been conducted on different hardware platforms and software versions, which renders a comparative assessment of the various approaches difficult, if not impossible. This paper summarises and evaluates the performance of current packet capturing solutions based on commodity hardware. We identify bottlenecks and pitfalls within the capturing stack of FreeBSD and Linux, and give explanations for the observed effects. Based on our experiments, we provide guidelines for users on how to configure their capturing systems for optimal performance and we also give hints on debugging bad performance. Furthermore, we propose improvements to the operating system's capturing processes that reduce packet loss, and evaluate their impact on capturing performance.
Our architecture, called VALE, implements a Virtual Local Ethernet that can be used by virtual machines, such as QEMU, KVM and others, as well as by regular processes. VALE achieves a throughput of over 17 million packets per second (Mpps) between host processes, and over 2 Mpps between QEMU instances, without any hardware assistance. VALE is available for both FreeBSD and Linux hosts, and is implemented as a kernel module that extends our recently proposed netmap framework, and uses similar techniques to achieve high packet rates.", "Software packet forwarding has been used for a long time in general purpose operating systems. While interesting for prototyping or on slow links, it is not considered a viable solution at very high packet rates, where various sources of overhead (particularly, the packet I/O mechanisms) get in the way of achieving good performance.", "Today's rapidly evolving network ecosystem, characterized by increasing traffic volumes, service heterogeneity and mutating cyber-threats, calls for new approaches to packet processing to address key issues such as scalability, flexibility, programmability and fast deployment. To this aim, this paper explores a new direction to packet processing by pushing forward functional programming principles in the definition of a ``software defined networking'' paradigm. This result is achieved by introducing PFQ-Lang, an extensible functional language which can be used to process, analyze and forward packets captured on modern multi-queue NICs (for example, it allows to quickly develop the early stage of monitoring applications). An implementation of PFQ-Lang, embedded into high level programming languages as an eDSL (embedded Domain Specific Language) is also presented. The proposed approach allows an easy development by leveraging the intuitive functional composition and, at the same time, allows to exploit multi-queue NICs and multi-core architectures to process high-speed network traffic. 
Experimental results are provided to prove that the presented implementation reaches line rate performance on a 10Gb line card. To demonstrate the effectiveness and expressiveness of PFQ-Lang, the paper also presents a few use-cases ranging from forwarding, firewalling and monitoring of real traffic.", "In recent years software network switches have regained eminence as a result of a number of growing trends, including the prominence of software-defined networks, as well as their use as back-ends to virtualization technologies, to name a few. Consequently, a number of high performance switches have been recently proposed in the literature, though none of these simultaneously provide (1) high packet rates, (2) high throughput, (3) low CPU usage, (4) high port density and (5) a flexible data plane. This is not by chance: these features conflict, and while achieving one or a few of them is (now) a solved problem, addressing the combination requires significant new design effort. In this paper we fill the gap by presenting mSwitch. To prove the flexibility and performance of our approach, we use mSwitch to build four distinct modules: a learning bridge consisting of 45 lines of code that outperforms FreeBSD's bridge by up to 8 times; an accelerated Open vSwitch module requiring small changes to the code and boosting performance by 2.6--3 times; a protocol demultiplexer for userspace protocol stacks; and a filtering module that can direct packets to virtualized middleboxes.", "", "" ] }