aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1412.1123 | 2949192504 | Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then use them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection. We achieve this goal by means of a multi-scale deep network that consists of five convolutional layers and a bifurcated fully-connected sub-network. The section from the input layer to the fifth convolutional layer is fixed and directly lifted from a pre-trained network optimized over a large-scale object classification task. This section of the network is applied to four different scales of the image input. These four parallel and identical streams are then attached to a bifurcated sub-network consisting of two independently-trained branches. One branch learns to predict the contour likelihood (with a classification objective) whereas the other branch is trained to learn the fraction of human labelers agreeing about the contour presence at a given point (with a regression criterion). We show that without any feature engineering our multi-scale deep learning approach achieves state-of-the-art results in contour detection. | In recent years, deep convolutional networks have achieved remarkable results in a wide array of computer vision tasks @cite_9 @cite_13 @cite_32 @cite_2 . However, thus far, applications of convolutional networks have focused on high-level vision tasks such as face recognition, image classification, pose estimation or scene labeling @cite_9 @cite_13 @cite_32 @cite_2 . 
Excellent results in these tasks raise the question of whether convolutional networks could perform equally well in lower-level vision tasks such as contour detection. In this paper, we present a convolutional architecture that achieves state-of-the-art results in a contour detection task, thus demonstrating that convolutional networks can be applied successfully to lower-level vision tasks as well. | {
"cite_N": [
"@cite_9",
"@cite_13",
"@cite_32",
"@cite_2"
],
"mid": [
"2145287260",
"2951277909",
"2113325037",
""
],
"abstract": [
"In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance.",
"Scene parsing is a technique that consist on giving a label to all pixels in an image according to the class they belong to. To ensure a good visual coherence and a high class accuracy, it is essential for a scene parser to capture image long range dependencies. In a feed-forward architecture, this can be simply achieved by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach consisting of a recurrent convolutional neural network which allows us to consider a large input context, while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation methods, nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.",
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-art or better performance on four academic benchmarks of diverse real-world images.",
""
]
} |
1412.1123 | 2949192504 | Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then use them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection. We achieve this goal by means of a multi-scale deep network that consists of five convolutional layers and a bifurcated fully-connected sub-network. The section from the input layer to the fifth convolutional layer is fixed and directly lifted from a pre-trained network optimized over a large-scale object classification task. This section of the network is applied to four different scales of the image input. These four parallel and identical streams are then attached to a bifurcated sub-network consisting of two independently-trained branches. One branch learns to predict the contour likelihood (with a classification objective) whereas the other branch is trained to learn the fraction of human labelers agreeing about the contour presence at a given point (with a regression criterion). We show that without any feature engineering our multi-scale deep learning approach achieves state-of-the-art results in contour detection. | Most contour detection methods can be divided into two branches: local and global methods. Local methods perform contour detection by reasoning about small patches inside the image. Some recent local methods include sketch tokens @cite_20 and structured edges @cite_11 . Both of these methods are trained in a supervised fashion using a random forest classifier. 
Sketch tokens @cite_20 pose contour detection as a multi-class classification task and predict a label for each pixel individually. Structured edges @cite_11 , on the other hand, attempt to predict the labels of multiple pixels simultaneously. | {
"cite_N": [
"@cite_20",
"@cite_11"
],
"mid": [
"2151049637",
"2129587342"
],
"abstract": [
"We propose a novel approach to both learning and detecting local contour-based representations for mid-level features. Our features, called sketch tokens, are learned using supervised mid-level information in the form of hand drawn contours in images. Patches of human generated contours are clustered to form sketch token classes and a random forest classifier is used for efficient detection in novel images. We demonstrate our approach on both top-down and bottom-up tasks. We show state-of-the-art results on the top-down task of contour detection while being over 200x faster than competing methods. We also achieve large improvements in detection accuracy for the bottom-up tasks of pedestrian and object detection as measured on INRIA and PASCAL, respectively. These gains are due to the complementary information provided by sketch tokens to low-level features such as gradient histograms.",
"Edge detection is a critical component of many vision systems, including object detectors and image segmentation algorithms. Patches of edges exhibit well-known forms of local structure, such as straight lines or T-junctions. In this paper we take advantage of the structure present in local image patches to learn both an accurate and computationally efficient edge detector. We formulate the problem of predicting local edge masks in a structured learning framework applied to random decision forests. Our novel approach to learning decision trees robustly maps the structured labels to a discrete space on which standard information gain measures may be evaluated. The result is an approach that obtains real time performance that is orders of magnitude faster than many competing state-of-the-art approaches, while also achieving state-of-the-art edge detection results on the BSDS500 Segmentation dataset and NYU Depth dataset. Finally, we show the potential of our approach as a general purpose edge detector by showing our learned edge models generalize well across datasets."
]
} |
1412.1123 | 2949192504 | Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then use them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection. We achieve this goal by means of a multi-scale deep network that consists of five convolutional layers and a bifurcated fully-connected sub-network. The section from the input layer to the fifth convolutional layer is fixed and directly lifted from a pre-trained network optimized over a large-scale object classification task. This section of the network is applied to four different scales of the image input. These four parallel and identical streams are then attached to a bifurcated sub-network consisting of two independently-trained branches. One branch learns to predict the contour likelihood (with a classification objective) whereas the other branch is trained to learn the fraction of human labelers agreeing about the contour presence at a given point (with a regression criterion). We show that without any feature engineering our multi-scale deep learning approach achieves state-of-the-art results in contour detection. | Global methods predict contours based on information from the full image. Some of the most successful approaches in this category are the MCG detector @cite_7 , the gPb detector @cite_1 , and sparse code gradients @cite_0 . While sparse code gradients use supervised SVM learning @cite_18 , both gPb and MCG rely on some form of spectral method. 
Other spectral-based methods include Normalized Cuts @cite_22 and PMI @cite_33 . | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_33",
"@cite_1",
"@cite_0"
],
"mid": [
"2139212933",
"2121947440",
"1991367009",
"105270443",
"2110158442",
"2165140157"
],
"abstract": [
"The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.",
"We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.",
"Detecting boundaries between semantically meaningful objects in visual scenes is an important component of many vision algorithms. In this paper, we propose a novel method for detecting such boundaries based on a simple underlying principle: pixels belonging to the same object exhibit higher statistical dependencies than pixels belonging to different objects. We show how to derive an affinity measure based on this principle using pointwise mutual information, and we show that this measure is indeed a good predictor of whether or not two pixels reside on the same object. Using this affinity with spectral clustering, we can find object boundaries in the image – achieving state-of-the-art results on the BSDS500 dataset. Our method produces pixel-level accurate boundaries while requiring minimal feature engineering.",
"This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.",
"Finding contours in natural images is a fundamental problem that serves as the basis of many tasks such as image segmentation and object recognition. At the core of contour detection technologies are a set of hand-designed gradient features, used by most approaches including the state-of-the-art Global Pb (gPb) operator. In this work, we show that contour detection accuracy can be significantly improved by computing Sparse Code Gradients (SCG), which measure contrast using patch representations automatically learned through sparse coding. We use K-SVD for dictionary learning and Orthogonal Matching Pursuit for computing sparse codes on oriented local neighborhoods, and apply multi-scale pooling and power transforms before classifying them with linear SVMs. By extracting rich representations from pixels and avoiding collapsing them prematurely, Sparse Code Gradients effectively learn how to measure local contrasts and find contours. We improve the F-measure metric on the BSDS500 benchmark to 0.74 (up from 0.71 of gPb contours). Moreover, our learning approach can easily adapt to novel sensor data such as Kinect-style RGB-D cameras: Sparse Code Gradients on depth maps and surface normals lead to promising contour detection using depth and depth+color, as verified on the NYU Depth Dataset."
]
} |
1412.1123 | 2949192504 | Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then use them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection. We achieve this goal by means of a multi-scale deep network that consists of five convolutional layers and a bifurcated fully-connected sub-network. The section from the input layer to the fifth convolutional layer is fixed and directly lifted from a pre-trained network optimized over a large-scale object classification task. This section of the network is applied to four different scales of the image input. These four parallel and identical streams are then attached to a bifurcated sub-network consisting of two independently-trained branches. One branch learns to predict the contour likelihood (with a classification objective) whereas the other branch is trained to learn the fraction of human labelers agreeing about the contour presence at a given point (with a regression criterion). We show that without any feature engineering our multi-scale deep learning approach achieves state-of-the-art results in contour detection. | Recently, there have also been attempts to apply deep learning methods to the task of contour detection. While SCT @cite_12 is a sparse coding approach, both @math fields @cite_17 and DeepNet @cite_30 use Convolutional Neural Networks (CNNs) to predict contours. 
@math fields rely on dictionary learning and the use of the Nearest Neighbor algorithm within a CNN framework, while DeepNet uses a traditional CNN architecture to predict contours. | {
"cite_N": [
"@cite_30",
"@cite_12",
"@cite_17"
],
"mid": [
"2172014587",
"97134437",
""
],
"abstract": [
"This paper investigates visual boundary detection, i.e. prediction of the presence of a boundary at a given image location. We develop a novel neurally-inspired deep architecture for the task. Notable aspects of our work are (i) the use of features\" [Ranzato and Hinton, 2010] which depend on the squared response of a filter to the input image, and (ii) the integration of image information from multiple scales and semantic levels via multiple streams of interlinked, layered, and non-linear \" processing. Our results on the Berkeley Segmentation Data Set 500 (BSDS500) show comparable or better performance to the top-performing methods [, 2011, Ren and Bo, 2012, , 2013, Dollár and Zitnick, 2013] with effective inference times. We also propose novel quantitative assessment techniques for improved method understanding and comparison. We carefully dissect the performance of our architecture, feature-types used and training methods, providing clear signals for model understanding and development.",
"We frame the task of predicting a semantic labeling as a sparse reconstruction procedure that applies a target-specific learned transfer function to a generic deep sparse code representation of an image. This strategy partitions training into two distinct stages. First, in an unsupervised manner, we learn a set of dictionaries optimized for sparse coding of image patches. These generic dictionaries minimize error with respect to representing image appearance and are independent of any particular target task. We train a multilayer representation via recursive sparse dictionary learning on pooled codes output by earlier layers. Second, we encode all training images with the generic dictionaries and learn a transfer function that optimizes reconstruction of patches extracted from annotated ground-truth given the sparse codes of their corresponding image patches. At test time, we encode a novel image using the generic dictionaries and then reconstruct using the transfer function. The output reconstruction is a semantic labeling of the test image.",
""
]
} |
1412.1454 | 2145543707 | We present a novel family of language model (LM) estimation techniques named Sparse Non-negative Matrix (SNM) estimation. A first set of experiments empirically evaluating it on the One Billion Word Benchmark [, 2013] shows that SNM n-gram LMs perform almost as well as the well-established Kneser-Ney (KN) models. When using skip-gram features the models are able to match the state-of-the-art recurrent neural network (RNN) LMs; combining the two modeling techniques yields the best known result on the benchmark. The computational advantages of SNM over both maximum entropy and RNN LM estimation are probably its main strength, promising an approach that has the same flexibility in combining arbitrary features effectively and yet should scale to very large amounts of data as gracefully as n-gram LMs do. | SNM estimation is closely related to all @math -gram LM smoothing techniques that rely on mixing relative frequencies at various orders. Unlike most of those, it combines the predictors at various orders without relying on a hierarchical nesting of the contexts, setting it closer to the family of maximum entropy (ME) @cite_13 , or exponential, models. | {
"cite_N": [
"@cite_13"
],
"mid": [
"1590952807"
],
"abstract": [
"Abstract : Language modeling is the attempt to characterize, capture and exploit regularities in natural language. In statistical language modeling, large amounts of text are used to automatically determine the model's parameters. Language modeling is useful in automatic speech recognition, machine translation, and any other application that processes natural language with incomplete knowledge. In this thesis, I view language as an information source which emits a stream of symbols from a finite alphabet (the vocabulary). The goal of language modeling is then to identify and exploit sources of information in the language stream, so as to minimize its perceived entropy. Most existing statistical language models exploit the immediate past only. To extract information from further back in the document's history, I use trigger pairs as the basic information bearing elements. This allows the model to adapt its expectations to the topic of discourse. Next, statistical evidence from many sources must be combined. Traditionally, linear interpolation and its variants have been used, but these are shown here to be seriously deficient. Instead, I apply the principle of Maximum Entropy (ME). Each information source gives rise to a set of constraints, to be imposed on the combined estimate. The intersection of these constraints is the set of probability functions which are consistent with all the information sources. The function with the highest entropy within that set is the ME solution. Language modeling, Adaptive language modeling, Statistical language modeling, Maximum entropy, Speech recognition."
]
} |
1412.1454 | 2145543707 | We present a novel family of language model (LM) estimation techniques named Sparse Non-negative Matrix (SNM) estimation. A first set of experiments empirically evaluating it on the One Billion Word Benchmark [, 2013] shows that SNM n-gram LMs perform almost as well as the well-established Kneser-Ney (KN) models. When using skip-gram features the models are able to match the state-of-the-art recurrent neural network (RNN) LMs; combining the two modeling techniques yields the best known result on the benchmark. The computational advantages of SNM over both maximum entropy and RNN LM estimation are probably its main strength, promising an approach that has the same flexibility in combining arbitrary features effectively and yet should scale to very large amounts of data as gracefully as n-gram LMs do. | We are not the first to highlight the effectiveness of skip @math -grams at capturing dependencies across longer contexts, similar to RNN LMs; previous such results were reported in @cite_1 . | {
"cite_N": [
"@cite_1"
],
"mid": [
"2078549297"
],
"abstract": [
"Recurrent neural networks (RNNs) are a very recent technique to model long range dependencies in natural languages. They have clearly outperformed trigrams and other more advanced language modeling techniques by using non-linearly modeling long range dependencies. An alternative is to use log-linear interpolation of skip models (i.e. skip bigrams and skip trigrams). The method as such has been published earlier. In this paper we investigate the impact of different smoothing techniques on the skip models as a measure of their overall performance. One option is to use automatically trained distance clusters (both hard and soft) to increase robustness and to combat sparseness in the skip model. We also investigate alternative smoothing techniques on word level. For skip bigrams when skipping a small number of words Kneser-Ney smoothing (KN) is advantageous. For a larger number of words being skipped Dirichlet smoothing performs better. In order to exploit the advantages of both KN and Dirichlet smoothing we propose a new unified smoothing technique. Experiments are performed on four Babel languages: Cantonese, Pashto, Tagalog and Turkish. RNNs and log-linearly interpolated skip models are on par if the skip models are trained with standard smoothing techniques. Using the improved smoothing of the skip models along with distance clusters, we can clearly outperform RNNs by about 8-11% in perplexity across all four languages."
]
} |
1412.1454 | 2145543707 | We present a novel family of language model (LM) estimation techniques named Sparse Non-negative Matrix (SNM) estimation. A first set of experiments empirically evaluating it on the One Billion Word Benchmark [, 2013] shows that SNM n-gram LMs perform almost as well as the well-established Kneser-Ney (KN) models. When using skip-gram features the models are able to match the state-of-the-art recurrent neural network (RNN) LMs; combining the two modeling techniques yields the best known result on the benchmark. The computational advantages of SNM over both maximum entropy and RNN LM estimation are probably its main strength, promising an approach that has the same flexibility in combining arbitrary features effectively and yet should scale to very large amounts of data as gracefully as n-gram LMs do. | @cite_9 attempts to capture long range dependencies in language where the skip @math -grams are identified using a left-to-right syntactic parser. Approaches such as @cite_3 leverage latent semantic information, whereas @cite_2 integrates both syntactic and topic-based modeling in a unified approach. | {
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_2"
],
"mid": [
"1989705153",
"2118714763",
"2024592335"
],
"abstract": [
"This paper presents an attempt at using the syntactic structure in natural language for improved language models for speech recognition. The structured language model merges techniques in automatic parsing and language modeling using an original probabilistic parameterization of a shift-reduce parser. A maximum likelihood re-estimation procedure belonging to the class of expectation-maximization algorithms is employed for training the model. Experiments on the Wall Street Journal and Switchboard corpora show improvement in both perplexity and word error rate?word lattice rescoring?over the standard 3-gram language model.",
"Statistical language models used in large-vocabulary speech recognition must properly encapsulate the various constraints, both local and global, present in the language. While local constraints are readily captured through n-gram modeling, global constraints, such as long-term semantic dependencies, have been more difficult to handle within a data-driven formalism. This paper focuses on the use of latent semantic analysis, a paradigm that automatically uncovers the salient semantic relationships between words and documents in a given corpus. In this approach, (discrete) words and documents are mapped onto a (continuous) semantic vector space, in which familiar clustering techniques can be applied. This leads to the specification of a powerful framework for automatic semantic classification, as well as the derivation of several language model families with various smoothing properties. Because of their large-span nature, these language models are well suited to complement conventional n-grams. An integrative formulation is proposed for harnessing this synergy, in which the latent semantic information is used to adjust the standard n-gram probability. Such hybrid language modeling compares favorably with the corresponding n-gram baseline: experiments conducted on the Wall Street Journal domain show a reduction in average word error rate of over 20%. This paper concludes with a discussion of intrinsic tradeoffs, such as the influence of training data selection on the resulting performance.",
"This paper presents an attempt at building a large scale distributed composite language model that is formed by seamlessly integrating an n-gram model, a structured language model, and probabilistic latent semantic analysis under a directed Markov random field paradigm to simultaneously account for local word lexical information, mid-range sentence syntactic structure, and long-span document semantic content. The composite language model has been trained by performing a convergent N-best list approximate EM algorithm and a follow-up EM algorithm to improve word prediction power on corpora with up to a billion tokens and stored on a supercomputer. The large scale distributed composite language model gives drastic perplexity reduction over n-grams and achieves significantly better translation quality measured by the Bleu score and \"readability\" of translations when applied to the task of re-ranking the N-best list from a state-of-the-art parsing-based machine translation system."
]
} |
1412.1454 | 2145543707 | We present a novel family of language model (LM) estimation techniques named Sparse Non-negative Matrix (SNM) estimation. A first set of experiments empirically evaluating it on the One Billion Word Benchmark [, 2013] shows that SNM n-gram LMs perform almost as well as the well-established Kneser-Ney (KN) models. When using skip-gram features the models are able to match the state-of-the-art recurrent neural network (RNN) LMs; combining the two modeling techniques yields the best known result on the benchmark. The computational advantages of SNM over both maximum entropy and RNN LM estimation are probably its main strength, promising an approach that has the same flexibility in combining arbitrary features effectively and yet should scale to very large amounts of data as gracefully as n-gram LMs do. | The speed-ups to ME and RNN LM training provided by hierarchically predicting words at the output layer @cite_14 and by subsampling @cite_10 still require updates that are linear in the vocabulary size times the number of words in the training data, whereas the SNM updates in Eq. ) for the much smaller adjustment function eliminate the dependency on the vocabulary size. Scaling up RNN LM training is described in @cite_6 and @cite_11 . | {
"cite_N": [
"@cite_14",
"@cite_10",
"@cite_6",
"@cite_11"
],
"mid": [
"2100714283",
"115367774",
"1539309091",
"2951793508"
],
"abstract": [
"Maximum entropy models are considered by many to be one of the most promising avenues of language modeling research. Unfortunately, long training times make maximum entropy research difficult. We present a speedup technique: we change the form of the model to use classes. Our speedup works by creating two maximum entropy models, the first of which predicts the class of each word, and the second of which predicts the word itself. This factoring of the model leads to fewer nonzero indicator functions, and faster normalization, achieving speedups of up to a factor of 35 over one of the best previous techniques. It also results in typically slightly lower perplexities. The same trick can be used to speed training of other machine learning techniques, e.g. neural networks, applied to any problem with a large number of outputs, such as language modeling.",
"We propose an efficient way to train maximum entropy language models (MELM) and neural network language models (NNLM). The advantage of the proposed method comes from a more robust and efficient subsampling technique. The original multi-class language modeling problem is transformed into a set of binary problems where each binary classifier predicts whether or not a particular word will occur. We show that the binarized model is as powerful as the standard model and allows us to aggressively subsample negative training examples without sacrificing predictive performance. Empirical results show that we can train MELM and NNLM at 1/5 of the standard complexity with no loss in performance.",
"We propose a new benchmark corpus to be used for measuring progress in statistical language modeling. With almost one billion words of training data, we hope this benchmark will be useful to quickly evaluate novel language modeling techniques, and to compare their contribution when combined with other advanced techniques. We show performance of several well-known types of language models, with the best results achieved with a recurrent neural network based language model. The baseline unpruned Kneser-Ney 5-gram model achieves perplexity 67.6. A combination of techniques leads to 35% reduction in perplexity, or 10% reduction in cross-entropy (bits), over that baseline. The benchmark is available as a code.google.com project; besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of the baseline n-gram models.",
"This paper investigates the scaling properties of Recurrent Neural Network Language Models (RNNLMs). We discuss how to train very large RNNs on GPUs and address the questions of how RNNLMs scale with respect to model size, training-set size, computational costs and memory. Our analysis shows that despite being more costly to train, RNNLMs obtain much lower perplexities on standard benchmarks than n-gram models. We train the largest known RNNs and present relative word error rate gains of 18% on an ASR task. We also present the new lowest perplexities on the recently released billion word language modelling benchmark, 1 BLEU point gain on machine translation and a 17% relative hit rate gain in word prediction."
]
} |
1412.1526 | 2952435733 | This paper presents an approach to parsing humans when there is significant occlusion. We model humans using a graphical model which has a tree structure building on recent work [32, 6] and exploit the connectivity prior that, even in presence of occlusion, the visible nodes form a connected subtree of the graphical model. We call each connected subtree a flexible composition of object parts. This involves a novel method for learning occlusion cues. During inference we need to search over a mixture of different flexible models. By exploiting part sharing, we show that this inference can be done extremely efficiently requiring only twice as many computations as searching for the entire object (i.e., not modeling occlusion). We evaluate our model on the standard benchmarked "We Are Family" Stickmen dataset and obtain significant performance improvements over the best alternative algorithms. | Graphical models of objects have a long history @cite_5 @cite_29 . Our work is most closely related to the recent work of Yang and Ramanan @cite_38 , Chen and Yuille @cite_33 , which we use as our base model and will compare to. Other relevant work includes @cite_6 @cite_27 @cite_37 @cite_13 . | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_33",
"@cite_29",
"@cite_6",
"@cite_27",
"@cite_5",
"@cite_13"
],
"mid": [
"2013640163",
"",
"2155394491",
"2030536784",
"2097151019",
"",
"2045798786",
""
],
"abstract": [
"We describe a method for articulated human detection and human pose estimation in static images based on a new representation of deformable part models. Rather than modeling articulation using a family of warped (rotated and foreshortened) templates, we use a mixture of small, nonoriented parts. We describe a general, flexible mixture model that jointly captures spatial relations between part locations and co-occurrence relations between part mixtures, augmenting standard pictorial structure models that encode just spatial relations. Our models have several notable properties: 1) They efficiently model articulation by sharing computation across similar warps, 2) they efficiently model an exponentially large set of global mixtures through composition of local mixtures, and 3) they capture the dependency of global geometry on local appearance (parts look different at different locations). When relations are tree structured, our models can be efficiently optimized with dynamic programming. We learn all parameters, including local appearances, spatial relations, and co-occurrence relations (which encode local rigidity) with a structured SVM solver. Because our model is efficient enough to be used as a detector that searches over scales and image locations, we introduce novel criteria for evaluating pose estimation and human detection, both separately and jointly. We show that currently used evaluation criteria may conflate these two issues. Most previous approaches model limbs with rigid and articulated templates that are trained independently of each other, while we present an extensive diagnostic evaluation that suggests that flexible structure and joint training are crucial for strong performance. We present experimental results on standard benchmarks that suggest our approach is the state-of-the-art system for pose estimation, improving past work on the challenging Parse and Buffy datasets while being orders of magnitude faster.",
"",
"We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact that local image measurements can be used both to detect parts (or joints) and also to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms the state of the art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training.",
"In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.",
"In this paper we consider the challenging problem of articulated human pose estimation in still images. We observe that despite high variability of the body articulations, human motions and activities often simultaneously constrain the positions of multiple body parts. Modelling such higher order part dependencies seemingly comes at a cost of more expensive inference, which resulted in their limited use in state-of-the-art methods. In this paper we propose a model that incorporates higher order part dependencies while remaining efficient. We achieve this by defining a conditional model in which all body parts are connected a-priori, but which becomes a tractable tree-structured pictorial structures model once the image observations are available. In order to derive a set of conditioning variables we rely on the poselet-based features that have been shown to be effective for people detection but have so far found limited application for articulated human pose estimation. We demonstrate the effectiveness of our approach on three publicly available pose estimation benchmarks improving or being on-par with state of the art in each case.",
"",
"The primary problem dealt with in this paper is the following. Given some description of a visual object, find that object in an actual photograph. Part of the solution to this problem is the specification of a descriptive scheme, and a metric on which to base the decision of \"goodness\" of matching or detection.",
""
]
} |
1412.1526 | 2952435733 | This paper presents an approach to parsing humans when there is significant occlusion. We model humans using a graphical model which has a tree structure building on recent work [32, 6] and exploit the connectivity prior that, even in presence of occlusion, the visible nodes form a connected subtree of the graphical model. We call each connected subtree a flexible composition of object parts. This involves a novel method for learning occlusion cues. During inference we need to search over a mixture of different flexible models. By exploiting part sharing, we show that this inference can be done extremely efficiently requiring only twice as many computations as searching for the entire object (i.e., not modeling occlusion). We evaluate our model on the standard benchmarked "We Are Family" Stickmen dataset and obtain significant performance improvements over the best alternative algorithms. | Occlusion modeling also has a long history @cite_25 @cite_17 . Psychophysical studies (e.g., Kanizsa @cite_30 ) show that T-junctions are a useful cue for occlusion. But there has been little attempt to model the spatial patterns of occlusions for parsing objects. Instead it is more common to design models so that they are robust in the presence of occlusion, so that the model is not penalized very much if an object part is missing. Girshick et al. @cite_10 and Supervised-DPM @cite_18 model the occluded part (background) using extra templates. And they rely on a root part (i.e., the holistic object) that never takes the status of "occluded". When there is significant occlusion, modeling the root part itself is difficult. Ghiasi et al. @cite_3 advocate modeling the occlusion area (background) using more templates (mixture of templates), and localize every body part. It may be plausible to "guess" the occluded keypoints of a face (e.g., @cite_8 @cite_9 ), but this seems impossible for the body parts of people, due to highly flexible human poses.
Eichner and Ferrari @cite_7 handle occlusion by modeling interactions between people, which assumes the occlusion is due to other people. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_10",
"@cite_25",
"@cite_17"
],
"mid": [
"1545423298",
"2005264304",
"1585462596",
"2111372597",
"",
"2071300943",
"2153185908",
"2000723188",
""
],
"abstract": [
"",
"The presence of occluders significantly impacts performance of systems for object recognition. However, occlusion is typically treated as an unstructured source of noise and explicit models for occluders have lagged behind those for object appearance and shape. In this paper we describe a hierarchical deformable part model for face detection and keypoint localization that explicitly models occlusions of parts. The proposed model structure makes it possible to augment positive training data with large numbers of synthetically occluded instances. This allows us to easily incorporate the statistics of occlusion patterns in a discriminatively trained model. We test the model on several benchmarks for keypoint localization including challenging sets featuring significant occlusion. We find that the addition of an explicit model of occlusion yields a system that outperforms existing approaches in keypoint localization accuracy.",
"We present a novel multi-person pose estimation framework, which extends pictorial structures (PS) to explicitly model interactions between people and to estimate their poses jointly. Interactions are modeled as occlusions between people. First, we propose an occlusion probability predictor, based on the location of persons automatically detected in the image, and incorporate the predictions as occlusion priors into our multi-person PS model. Moreover, our model includes an inter-people exclusion penalty, preventing body parts from different people from occupying the same image region. Thanks to these elements, our model has a global view of the scene, resulting in better pose estimates in group photos, where several persons stand nearby and occlude each other. In a comprehensive evaluation on a new, challenging group photo dataset, we demonstrate the benefits of our multi-person model over a state-of-the-art single-person pose estimator which treats each person independently.",
"Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR) which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with 80/40% precision/recall.",
"",
"Occlusion poses a significant difficulty for object recognition due to the combinatorial diversity of possible occlusion patterns. We take a strongly supervised, non-parametric approach to modeling occlusion by learning deformable models with many local part mixture templates using large quantities of synthetically generated training data. This allows the model to learn the appearance of different occlusion patterns including figure-ground cues such as the shapes of occluding contours as well as the co-occurrence statistics of occlusion between neighboring parts. The underlying part mixture-structure also allows the model to capture coherence of object support masks between neighboring parts and make compelling predictions of figure-ground-occluder segmentations. We test the resulting model on human pose estimation under heavy occlusion and find it produces improved localization accuracy.",
"Compositional models provide an elegant formalism for representing the visual appearance of highly variable objects. While such models are appealing from a theoretical point of view, it has been difficult to demonstrate that they lead to performance advantages on challenging datasets. Here we develop a grammar model for person detection and show that it outperforms previous high-performance systems on the PASCAL benchmark. Our model represents people using a hierarchy of deformable parts, variable structure and an explicit model of occlusion for partially visible objects. To train the model, we introduce a new discriminative framework for learning structured prediction models from weakly-labeled data.",
"We present a unified occlusion model for object instance detection under arbitrary viewpoint. Whereas previous approaches primarily modeled local coherency of occlusions or attempted to learn the structure of occlusions from data, we propose to explicitly model occlusions by reasoning about 3D interactions of objects. Our approach accurately represents occlusions under arbitrary viewpoint without requiring additional training data, which can often be difficult to obtain. We validate our model by incorporating occlusion reasoning with the state-of-the-art LINE2D and Gradient Network methods for object instance detection and demonstrate significant improvement in recognizing texture-less objects under severe occlusions.",
""
]
} |
1412.1526 | 2952435733 | This paper presents an approach to parsing humans when there is significant occlusion. We model humans using a graphical model which has a tree structure building on recent work [32, 6] and exploit the connectivity prior that, even in presence of occlusion, the visible nodes form a connected subtree of the graphical model. We call each connected subtree a flexible composition of object parts. This involves a novel method for learning occlusion cues. During inference we need to search over a mixture of different flexible models. By exploiting part sharing, we show that this inference can be done extremely efficiently requiring only twice as many computations as searching for the entire object (i.e., not modeling occlusion). We evaluate our model on the standard benchmarked "We Are Family" Stickmen dataset and obtain significant performance improvements over the best alternative algorithms. | Our approach models object occlusion effectively using a mixture of models to deal with different occlusion patterns. There is considerable work which models objects using mixtures to deal with different configurations; see Poselets @cite_19 which uses many mixtures to deal with different object configurations, and deformable part models (DPMs) @cite_11 where mixtures are used to deal with different viewpoints. | {
"cite_N": [
"@cite_19",
"@cite_11"
],
"mid": [
"2535410496",
"2168356304"
],
"abstract": [
"We address the classic problems of detection, segmentation and pose estimation of people in images with a novel definition of a part, a poselet. We postulate two criteria (1) It should be easy to find a poselet given an input image (2) it should be easy to localize the 3D configuration of the person conditioned on the detection of a poselet. To permit this we have built a new dataset, H3D, of annotations of humans in 2D photographs with 3D joint information, inferred using anthropometric constraints. This enables us to implement a data-driven search procedure for finding poselets that are tightly clustered in both 3D joint configuration space as well as 2D image appearance. The algorithm discovers poselets that correspond to frontal and profile faces, pedestrians, head and shoulder views, among others. Each poselet provides examples for training a linear SVM classifier which can then be run over the image in a multiscale scanning mode. The outputs of these poselet detectors can be thought of as an intermediate layer of nodes, on top of which one can run a second layer of classification or regression. We show how this permits detection and localization of torsos or keypoints such as left shoulder, nose, etc. Experimental results show that we obtain state of the art performance on people detection in the PASCAL VOC 2007 challenge, among other datasets. We are making publicly available both the H3D dataset as well as the poselet parameters for use by other researchers.",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function."
]
} |
1412.1526 | 2952435733 | This paper presents an approach to parsing humans when there is significant occlusion. We model humans using a graphical model which has a tree structure building on recent work [32, 6] and exploit the connectivity prior that, even in presence of occlusion, the visible nodes form a connected subtree of the graphical model. We call each connected subtree a flexible composition of object parts. This involves a novel method for learning occlusion cues. During inference we need to search over a mixture of different flexible models. By exploiting part sharing, we show that this inference can be done extremely efficiently requiring only twice as many computations as searching for the entire object (i.e., not modeling occlusion). We evaluate our model on the standard benchmarked "We Are Family" Stickmen dataset and obtain significant performance improvements over the best alternative algorithms. | To ensure efficient inference, we exploit the fact that parts are shared between different flexible compositions. This sharing of parts has been used in other work, e.g., @cite_36 . Other work that exploits part sharing includes compositional models @cite_21 and AND-OR graphs @cite_32 @cite_34 . | {
"cite_N": [
"@cite_36",
"@cite_21",
"@cite_32",
"@cite_34"
],
"mid": [
"2104408738",
"2004876626",
"347936517",
""
],
"abstract": [
"Detecting objects becomes difficult when we need to deal with large shape deformation, occlusion and low resolution. We propose a novel approach to i) handle large deformations and partial occlusions in animals (as examples of highly deformable objects), ii) describe them in terms of body parts, and iii) detect them when their body parts are hard to detect (e.g., animals depicted at low resolution). We represent the holistic object and body parts separately and use a fully connected model to arrange templates for the holistic object and body parts. Our model automatically decouples the holistic object or body parts from the model when they are hard to detect. This enables us to represent a large number of holistic object and body part combinations to better deal with different \"detectability\" patterns caused by deformations, occlusion and/or low resolution. We apply our method to the six animal categories in the PASCAL VOC dataset and show that our method significantly improves state-of-the-art (by 4.1% AP) and provides a richer representation for objects. During training we use annotations for body parts (e.g., head, torso, etc.), making use of a new dataset of fully annotated object parts for PASCAL VOC 2010, which provides a mask for each part.",
"We propose Recursive Compositional Models (RCMs) for simultaneous multi-view multi-object detection and parsing (e.g. view estimation and determining the positions of the object subparts). We represent the set of objects by a family of RCMs where each RCM is a probability distribution defined over a hierarchical graph which corresponds to a specific object and viewpoint. An RCM is constructed from a hierarchy of subparts subgraphs which are learnt from training data. Part-sharing is used so that different RCMs are encouraged to share subparts subgraphs which yields a compact representation for the set of objects and which enables efficient inference and learning from a limited number of training samples. In addition, we use appearance-sharing so that RCMs for the same object, but different viewpoints, share similar appearance cues which also helps efficient learning. RCMs lead to a multi-view multi-object detection system. We illustrate RCMs on four public datasets and achieve state-of-the-art performance.",
"We present a novel structure learning method, Max Margin AND/OR Graph (MM-AOG), for parsing the human body into parts and recovering their poses. Our method represents the human body and its parts by an AND/OR graph, which is a multi-level mixture of Markov Random Fields (MRFs). Max margin learning, which is a generalization of the training algorithm for support vector machines (SVMs), is used to learn the parameters of the AND/OR graph model discriminatively. There are four advantages from this combination of AND/OR graphs and max-margin learning. Firstly, the AND/OR graph allows us to handle enormous articulated poses with a compact graphical model. Secondly, max-margin learning has more discriminative power than the traditional maximum likelihood approach. Thirdly, the parameters of the AND/OR graph model are optimized globally. In particular, the weights of the appearance model for individual nodes and the relative importance of spatial relationships between nodes are learnt simultaneously. Finally, the kernel trick can be used to handle high dimensional features and to enable complex similarity measure of shapes. We perform comparison experiments on the baseball datasets, showing significant improvements over state of the art methods.",
""
]
} |
1412.1424 | 1987876819 | People regularly share items using online social media. However, people's decisions around sharing---who shares what to whom and why---are not well understood. We present a user study involving 87 pairs of Facebook users to understand how people make their sharing decisions. We find that even when sharing to a specific individual, people's own preference for an item (individuation) dominates over the recipient's preferences (altruism). People's open-ended responses about how they share, however, indicate that they do try to personalize shares based on the recipient. To explain these contrasting results, we propose a novel process model of sharing that takes into account people's preferences and the salience of an item. We also present encouraging results for a sharing prediction model that incorporates both the senders' and the recipients' preferences. These results suggest improvements to both algorithms that support sharing in social media and to information diffusion models. | When communicating with others, people customize their message based on their estimation of the audience's knowledge or attitudes @cite_26 . At the same time, much word-of-mouth sharing is driven by people's desire to share items that closely align with or enhance their self-image @cite_7 @cite_37 . Our first question addresses how people balance customization with self-expression. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_7"
],
"mid": [
"1994324267",
"2123535355",
"1988987806"
],
"abstract": [
"This research examined the relation between self-relevance and word-of-mouth (WOM). The results of two studies suggest consumers are more likely to provide WOM for products that are relevant to self-concept than for more utilitarian products. There was also some indication that WOM was biased, in the sense that consumers exaggerated the benefits of self-relevant products compared to utilitarian products. Finally, self-relevance had a greater impact on WOM in individualist cultures than collectivist cultures, consistent with differences in the way self-concept is typically construed by these groups. Implications for marketing strategies concerning WOM are discussed.",
"We review several studies examining perspective-taking in communication. One set of studies indicates that speakers exploit the common ground they share with their addressees in creating referring expressions and that such perspective-taking improves the listener's comprehension. A second set of studies examines an element of the perspective-taking process itself: the accuracy of people's assessments of others' knowledge. We find that such estimates are both fairly accurate and biased in the direction of the perceiver's own knowledge. However, the extent of their influence on message formulation depends on the availability of feedback. We conclude that perspective-taking in communication combines prior theories about what others know with information drawn from such conversational resources as verbal and nonverbal feedback.",
"Marketers have long understood that consumers’ self-concepts influence the products they purchase; conversely, products purchased influence people’s self-concepts. Might the same self-enhancement framework apply to shared online advertisements? Using the symbolic interactionist perspective of identity theory, this study empirically tests the proposition that online consumers use electronic word of mouth, and specifically the sharing of online advertising, to construct and express their self-concepts. The results suggest that self-brand congruity, entertainment value, and product category involvement increase the self-expressiveness of online ads, which then increase the likelihood of sharing those ads. These findings have both theoretical and managerial implications."
]
} |
1412.1424 | 1987876819 | People regularly share items using online social media. However, people's decisions around sharing---who shares what to whom and why---are not well understood. We present a user study involving 87 pairs of Facebook users to understand how people make their sharing decisions. We find that even when sharing to a specific individual, people's own preference for an item (individuation) dominates over the recipient's preferences (altruism). People's open-ended responses about how they share, however, indicate that they do try to personalize shares based on the recipient. To explain these contrasting results, we propose a novel process model of sharing that takes into account people's preferences and the salience of an item. We also present encouraging results for a sharing prediction model that incorporates both the senders' and the recipients' preferences. These results suggest improvements to both algorithms that support sharing in social media and to information diffusion models. | This dual motivation between individuation and altruism has also been found to drive sharing activities in online contexts. Research in online word-of-mouth referrals has shown individuation and altruism as two dominant motivations for sharing @cite_17 @cite_27 @cite_30 . Studies of knowledge sharing in online professional communities reveal a similar pattern: people share knowledge to enhance their professional reputation or when they enjoy helping others @cite_36 . When sharing items such as movies, these two motivations of individuation and altruism can be mapped to sharing based on one's own interests or the audience's. These, in turn, can be estimated from the rich preference data available online, then used to study the relative influence of these factors. | {
"cite_N": [
"@cite_30",
"@cite_27",
"@cite_36",
"@cite_17"
],
"mid": [
"2417628925",
"1550522855",
"1565831494",
"2166033352"
],
"abstract": [
"",
"The Internet makes it possible for consumers to obtain electronic word of mouth from other consumers. Customer comments articulated via the Internet are available to a vast number of other customers, and therefore can be expected to have a significant impact on the success of goods and services. This paper derives several motives that explain why customers retrieve other customers' on-line articulations from Web-based consumer opinion platforms. The relevance of these motives and their impact on consumer buying and communication behavior are tested in a large-scale empirical study. The results illustrate that consumers read on-line articulations mainly to save decision-making time and make better buying decisions. Structural equation modeling shows that their motives for retrieving on-line articulations strongly influence their behavior.",
"Electronic networks of practice are computer-mediated discussion forums focused on problems of practice that enable individuals to exchange advice and ideas with others based on common interests. However, why individuals help strangers in these electronic networks is not well understood: there is no immediate benefit to the contributor, and free-riders are able to acquire the same knowledge as everyone else. To understand this paradox, we apply theories of collective action to examine how individual motivations and social capital influence knowledge contribution in electronic networks. This study reports on the activities of one electronic network supporting a professional legal association. Using archival, network, survey, and content analysis data, we empirically test a model of knowledge contribution. We find that people contribute their knowledge when they perceive that it enhances their professional reputations, when they have the experience to share, and when they are structurally embedded in the network. Surprisingly, contributions occur without regard to expectations of reciprocity from others or high levels of commitment to the network.",
"Despite the increasing popularity of viral marketing, factors critical to such a new communication medium remain largely unknown. This paper examines one of the critical factors, namely Internet users' motivations to pass along online content. Conceptualizing the act of forwarding online content as a special case of a more general communication behavior, we identify four potential motivations: (1) the need to be part of a group, (2) the need to be individualistic, (3) the need to be altruistic, and (4) the need for personal growth. Using a survey of young adults, we examine the relationship between these motivations and the frequency of passing along online content. We also investigate if high trait curiosity can indirectly lead to more forwarding by increasing the amount of online content consumed. Results show that Internet users, who are more individualistic and/or more altruistic, tend to forward more online content than others."
]
} |
1412.1424 | 1987876819 | People regularly share items using online social media. However, people's decisions around sharing---who shares what to whom and why---are not well understood. We present a user study involving 87 pairs of Facebook users to understand how people make their sharing decisions. We find that even when sharing to a specific individual, people's own preference for an item (individuation) dominates over the recipient's preferences (altruism). People's open-ended responses about how they share, however, indicate that they do try to personalize shares based on the recipient. To explain these contrasting results, we propose a novel process model of sharing that takes into account people's preferences and the salience of an item. We also present encouraging results for a sharing prediction model that incorporates both the senders' and the recipients' preferences. These results suggest improvements to both algorithms that support sharing in social media and to information diffusion models. | Studying sharing behavior also allows us to ask how well people can recommend content for others; such directed suggestions can provide a useful complement to algorithmically-generated recommendations @cite_29 . Most studies in this space have focused on the question of influence, using recipients' acceptance of recommendations as a proxy for how influential the sender is. For example, network influence @cite_6 @cite_16 , the relationship with the sender @cite_18 @cite_28 , the explanation accompanying the content @cite_38 @cite_10 @cite_15 , and the susceptibility of an individual towards shared items @cite_9 have all been shown to affect people's likelihood of accepting suggestions. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_28",
"@cite_29",
"@cite_9",
"@cite_6",
"@cite_15",
"@cite_16",
"@cite_10"
],
"mid": [
"1750205245",
"2033198212",
"2027135291",
"2156939823",
"2003741412",
"2042123098",
"2124489423",
"",
"2129971321"
],
"abstract": [
"Recommender systems associated with social networks often use social explanations (e.g. \"X, Y and 2 friends like this\") to support the recommendations. We present a study of the effects of these social explanations in a music recommendation context. We start with an experiment with 237 users, in which we show explanations with varying levels of social information and analyze their effect on users' decisions. We distinguish between two key decisions: the likelihood of checking out the recommended artist, and the actual rating of the artist based on listening to several songs. We find that while the explanations do have some influence on the likelihood, there is little correlation between the likelihood and actual (listening) rating for the same artist. Based on these insights, we present a generative probabilistic model that explains the interplay between explanations and background information on music preferences, and how that leads to a final likelihood rating for an artist. Acknowledging the impact of explanations, we discuss a general recommendation framework that models external informational elements in the recommendation interface, in addition to inherent preferences of users.",
"Online social networks are everywhere. They must be influencing the way society is developing, but hard evidence is scarce. For instance, the relative effectiveness of online friendships and face-to-face friendships as drivers of social change is not known. In what may be the largest experiment ever conducted with human subjects, James Fowler and colleagues randomly assigned messages to 61 million Facebook users on Election Day in the United States in 2010, and tracked their behaviour both online and offline, using publicly available records. The results show that the messages influenced the political communication, information-seeking and voting behaviour of millions of people. Social messages had more impact than informational messages and 'weak ties' were much less likely than 'strong ties' to spread behaviour via the social network. Thus online mobilization works primarily through strong-tie networks that may exist offline but have an online representation.",
"Online social networking technologies enable individuals to simultaneously share information with any number of peers. Quantifying the causal effect of these mediums on the dissemination of information requires not only identification of who influences whom, but also of whether individuals would still propagate information in the absence of social signals about that information. We examine the role of social networks in online information diffusion with a large-scale field experiment that randomizes exposure to signals about friends' information sharing among 253 million subjects in situ. Those who are exposed are significantly more likely to spread information, and do so sooner than those who are not exposed. We further examine the relative role of strong and weak ties in information propagation. We show that, although stronger ties are individually more influential, it is the more abundant weak ties who are responsible for the propagation of novel information. This suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed.",
"To find interesting, personally relevant web content, people rely on friends and colleagues to pass links along as they encounter them. In this paper, we study and augment link-sharing via e-mail, the most popular means of sharing web content today. Armed with survey data indicating that active sharers of novel web content are often those that actively seek it out, we developed FeedMe, a plug-in for Google Reader that makes directed sharing of content a more salient part of the user experience. FeedMe recommends friends who may be interested in seeing content that the user is viewing, provides information on what the recipient has seen and how many emails they have received recently, and gives recipients the opportunity to provide lightweight feedback when they appreciate shared content. FeedMe introduces a novel design space within mixed-initiative social recommenders: friends who know the user voluntarily vet the material on the user's behalf. We performed a two-week field experiment (N=60) and found that FeedMe made it easier and more enjoyable to share content that recipients appreciated and would not have found otherwise.",
"Methods, systems, and apparatuses, including computer programs encoded on computer readable media, for generating a message associated with a user, wherein the user is associated with a plurality of peers in a social network. A subset of peers is randomly chosen from the plurality of peers. The message is sent to the subset of peers. Data pertaining to one or more behaviors from one or more peers of the plurality of peers is collected. A time for a target behavior is evaluated as a function of who received the message and who did not receive the message. From the evaluation, particular members of the social network are identified.",
"One of the major applications of data mining is in helping companies determine which potential customers to market to. If the expected profit from a customer is greater than the cost of marketing to her, the marketing action for that customer is executed. So far, work in this area has considered only the intrinsic value of the customer (i.e, the expected profit from sales to her). We propose to model also the customer's network value: the expected profit from sales to other customers she may influence to buy, the customers those may influence, and so on recursively. Instead of viewing a market as a set of independent entities, we view it as a social network and model it as a Markov random field. We show the advantages of this approach using a social network mined from a collaborative filtering database. Marketing that exploits the network value of customers---also known as viral marketing---can be extremely effective, but is still a black art. Our work can be viewed as a step towards providing a more solid foundation for it, taking advantage of the availability of large relevant databases.",
"As news reading becomes more social, how do different types of annotations affect people's selection of news articles? This paper reports on results from two experiments looking at social annotations in two different news reading contexts. The first experiment simulates a logged-out experience with annotations from strangers, a computer agent, and a branded company. Results indicate that, perhaps unsurprisingly, annotations by strangers have no persuasive effects. However, surprisingly, unknown branded companies still had a persuasive effect. The second experiment simulates a logged-in experience with annotations from friends, finding that friend annotations are both persuasive and improve user satisfaction over their article selections. In post-experiment interviews, we found that this increased satisfaction is due partly because of the context that annotations add. That is, friend annotations both help people decide what to read, and provide social context that improves engagement. Interviews also suggest subtle expertise effects. We discuss implications for design of social annotation systems and suggestions for future research.",
"",
"Most models of social contagion take peer exposure to be a corollary of adoption, yet in many settings, the visibility of one's adoption behavior happens through a separate decision process. In online systems, product designers can define how peer exposure mechanisms work: adoption behaviors can be shared in a passive, automatic fashion, or occur through explicit, active sharing. The consequences of these mechanisms are of substantial practical and theoretical interest: passive sharing may increase total peer exposure but active sharing may expose higher quality products to peers who are more likely to adopt. We examine selection effects in online sharing through a large-scale field experiment on Facebook that randomizes whether or not adopters share Offers (coupons) in a passive manner. We derive and estimate a joint discrete choice model of adopters' sharing decisions and their peers' adoption decisions. Our results show that active sharing enables a selection effect that exposes peers who are more likely to adopt than the population exposed under passive sharing. We decompose the selection effect into two distinct mechanisms: active sharers expose peers to higher quality products, and the peers they share with are more likely to adopt independently of product quality. Simulation results show that the user-level mechanism comprises the bulk of the selection effect. The study's findings are among the first to address downstream peer effects induced by online sharing mechanisms, and can inform design in settings where a surplus of sharing could be viewed as costly."
]
} |
1412.1424 | 1987876819 | People regularly share items using online social media. However, people's decisions around sharing---who shares what to whom and why---are not well understood. We present a user study involving 87 pairs of Facebook users to understand how people make their sharing decisions. We find that even when sharing to a specific individual, people's own preference for an item (individuation) dominates over the recipient's preferences (altruism). People's open-ended responses about how they share, however, indicate that they do try to personalize shares based on the recipient. To explain these contrasting results, we propose a novel process model of sharing that takes into account people's preferences and the salience of an item. We also present encouraging results for a sharing prediction model that incorporates both the senders' and the recipients' preferences. These results suggest improvements to both algorithms that support sharing in social media and to information diffusion models. | However, little is known about whether people make suggestions that receivers would actually like. The study most related to this question compared people's ability to predict a stranger's movie ratings based on part of that person's rating profile to predictions from a standard collaborative filtering algorithm @cite_5 . On balance, people were not as accurate---and, interestingly, got worse as the profile became more similar to their own. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2075329139"
],
"abstract": [
"Algorithmic recommender systems attempt to predict which items a target user will like based on information about the user's prior preferences and the preferences of a larger community. After more than a decade of widespread use, researchers and system users still debate whether such \"impersonal\" recommender systems actually perform as well as human recommenders. We compare the performance of MovieLens algorithmic predictions with the recommendations made, based on the same user profiles, by active MovieLens users. We found that algorithmic collaborative filtering outperformed humans on average, though some individuals outperformed the system substantially and humans on average outperformed the system on certain prediction tasks."
]
} |
1412.1424 | 1987876819 | People regularly share items using online social media. However, people's decisions around sharing---who shares what to whom and why---are not well understood. We present a user study involving 87 pairs of Facebook users to understand how people make their sharing decisions. We find that even when sharing to a specific individual, people's own preference for an item (individuation) dominates over the recipient's preferences (altruism). People's open-ended responses about how they share, however, indicate that they do try to personalize shares based on the recipient. To explain these contrasting results, we propose a novel process model of sharing that takes into account people's preferences and the salience of an item. We also present encouraging results for a sharing prediction model that incorporates both the senders' and the recipients' preferences. These results suggest improvements to both algorithms that support sharing in social media and to information diffusion models. | In this paper, we study how well people's suggestions match the recipient's interests when they share items with known friends and compare their results with algorithmic recommendations. Instead of making inferences from profile information @cite_5 , people rely on their own knowledge about a friend to choose which items to share, which we see as a more natural recommendation scenario. For consistency in terminology, we use recommendations to refer to algorithmic recommendations and shares to refer to human-generated directed recommendations in the rest of the paper. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2075329139"
],
"abstract": [
"Algorithmic recommender systems attempt to predict which items a target user will like based on information about the user's prior preferences and the preferences of a larger community. After more than a decade of widespread use, researchers and system users still debate whether such \"impersonal\" recommender systems actually perform as well as human recommenders. We compare the performance of MovieLens algorithmic predictions with the recommendations made, based on the same user profiles, by active MovieLens users. We found that algorithmic collaborative filtering outperformed humans on average, though some individuals outperformed the system substantially and humans on average outperformed the system on certain prediction tasks."
]
} |
1412.1424 | 1987876819 | People regularly share items using online social media. However, people's decisions around sharing---who shares what to whom and why---are not well understood. We present a user study involving 87 pairs of Facebook users to understand how people make their sharing decisions. We find that even when sharing to a specific individual, people's own preference for an item (individuation) dominates over the recipient's preferences (altruism). People's open-ended responses about how they share, however, indicate that they do try to personalize shares based on the recipient. To explain these contrasting results, we propose a novel process model of sharing that takes into account people's preferences and the salience of an item. We also present encouraging results for a sharing prediction model that incorporates both the senders' and the recipients' preferences. These results suggest improvements to both algorithms that support sharing in social media and to information diffusion models. | In computer science, sharing is most commonly studied as a component of information diffusion models @cite_31 @cite_22 @cite_21 @cite_25 @cite_33 . These models simulate the spread of items in a network, where people adopt an item through either probabilistic transfer between connected people or based on a threshold number of adoptions in a person's neighborhood. However, these models don't actually explain most adoption in social media @cite_20 because the viral analogy breaks down. Sharing is a process shaped by social forces such as people's willingness to diffuse @cite_24 , attention to targets' needs @cite_29 , and relations between sharer and target including tie strength @cite_32 and homophily. | {
"cite_N": [
"@cite_22",
"@cite_33",
"@cite_29",
"@cite_21",
"@cite_32",
"@cite_24",
"@cite_31",
"@cite_25",
"@cite_20"
],
"mid": [
"2061820396",
"1964869462",
"2156939823",
"",
"1971526329",
"1963343385",
"2114696370",
"2113889316",
"2028055861"
],
"abstract": [
"Models for the processes by which ideas and influence propagate through a social network have been studied in a number of domains, including the diffusion of medical and technological innovations, the sudden and widespread adoption of various strategies in game-theoretic settings, and the effects of \"word of mouth\" in the promotion of new products. Recently, motivated by the design of viral marketing strategies, Domingos and Richardson posed a fundamental algorithmic problem for such social network processes: if we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target? We consider this problem in several of the most widely studied models in social network analysis. The optimization problem of selecting the most influential nodes is NP-hard here, and we provide the first provable approximation guarantees for efficient algorithms. Using an analysis framework based on submodular functions, we show that a natural greedy strategy obtains a solution that is provably within 63% of optimal for several classes of models; our framework suggests a general approach for reasoning about the performance guarantees of algorithms for these types of influence problems in social networks. We also provide computational experiments on large collaboration networks, showing that in addition to their provable guarantees, our approximation algorithms significantly out-perform node-selection heuristics based on the well-studied notions of degree centrality and distance centrality from the field of social networks.",
"Online social networking sites like MySpace, Facebook, and Flickr have become a popular way to share and disseminate content. Their massive popularity has led to viral marketing techniques that attempt to spread content, products, and ideas on these sites. However, there is little data publicly available on viral propagation in the real world and few studies have characterized how information spreads over current online social networks. In this paper, we collect and analyze large-scale traces of information dissemination in the Flickr social network. Our analysis, based on crawls of the favorite markings of 2.5 million users on 11 million photos, aims at answering three key questions: (a) how widely does information propagate in the social network? (b) how quickly does information propagate? and (c) what is the role of word-of-mouth exchanges between friends in the overall propagation of information in the network? Contrary to \"viral marketing intuition,\" we find that (a) even popular photos do not spread widely throughout the network, (b) even popular photos spread slowly through the network, and (c) information exchanged between friends is likely to account for over 50% of all favorite-markings, but with a significant delay at each hop.",
"To find interesting, personally relevant web content, people rely on friends and colleagues to pass links along as they encounter them. In this paper, we study and augment link-sharing via e-mail, the most popular means of sharing web content today. Armed with survey data indicating that active sharers of novel web content are often those that actively seek it out, we developed FeedMe, a plug-in for Google Reader that makes directed sharing of content a more salient part of the user experience. FeedMe recommends friends who may be interested in seeing content that the user is viewing, provides information on what the recipient has seen and how many emails they have received recently, and gives recipients the opportunity to provide lightweight feedback when they appreciate shared content. FeedMe introduces a novel design space within mixed-initiative social recommenders: friends who know the user voluntarily vet the material on the user's behalf. We performed a two-week field experiment (N=60) and found that FeedMe made it easier and more enjoyable to share content that recipients appreciated and would not have found otherwise.",
"",
"This article presents a network analysis of word-of-mouth referral behavior in a natural environment. The relational properties of tie strength and homophily were employed to examine referral behavior at micro and macro levels of inquiry. The study demonstrates different roles played by weak and strong social ties. At the macro level, weak ties displayed an important bridging function, allowing information to travel from one distinct subgroup of referral actors to another subgroup in the broader social system. At the micro level, strong and homophilous ties were more likely to be activated for the flow of referral information. Strong ties were also perceived as more influential than weak ties, and they were more likely to be utilized as sources of information for related goods.",
"Predicting the diffusion of information on social networks is a key problem for applications like Opinion Leader Detection, Buzz Detection or Viral Marketing. Many recent diffusion models are direct extensions of the Cascade and Threshold models, initially proposed for epidemiology and social studies. In such models, the diffusion process is based on the dynamics of interactions between neighbor nodes in the network (the social pressure), and largely ignores important dimensions as the content of the piece of information diffused. We propose here a new family of probabilistic models that aims at predicting how a content diffuses in a network by making use of additional dimensions: the content of the piece of information diffused, user's profile and willingness to diffuse. These models are illustrated and compared with other approaches on two blog datasets. The experimental results obtained on these datasets show that taking into account the content of the piece of information diffused is important to accurately model the diffusion process.",
"The origin of large but rare cascades that are triggered by small initial shocks is a phenomenon that manifests itself as diversely as cultural fads, collective action, the diffusion of norms and innovations, and cascading failures in infrastructure and organizational networks. This paper presents a possible explanation of this phenomenon in terms of a sparse, random network of interacting agents whose decisions are determined by the actions of their neighbors according to a simple threshold rule. Two regimes are identified in which the network is susceptible to very large cascades—herein called global cascades—that occur very rarely. When cascade propagation is limited by the connectivity of the network, a power law distribution of cascade sizes is observed, analogous to the cluster size distribution in standard percolation theory and avalanches in self-organized criticality. But when the network is highly connected, cascade propagation is limited instead by the local stability of the nodes themselves, and the size distribution of cascades is bimodal, implying a more extreme kind of instability that is correspondingly harder to anticipate. In the first regime, where the distribution of network neighbors is highly skewed, it is found that the most connected nodes are far more likely than average nodes to trigger cascades, but not in the second regime. Finally, it is shown that heterogeneity plays an ambiguous role in determining a system's stability: increasingly heterogeneous thresholds make the system more vulnerable to global cascades; but an increasingly heterogeneous degree distribution makes it less vulnerable.",
"We study the dynamics of information propagation in environments of low-overhead personal publishing, using a large collection of WebLogs over time as our example domain. We characterize and model this collection at two levels. First, we present a macroscopic characterization of topic propagation through our corpus, formalizing the notion of long-running \"chatter\" topics consisting recursively of \"spike\" topics generated by outside world events, or more rarely, by resonances within the community. Second, we present a microscopic characterization of propagation from individual to individual, drawing on the theory of infectious diseases to model the flow. We propose, validate, and employ an algorithm to induce the underlying propagation network from a sequence of posts, and report on the results.",
"Models of networked diffusion that are motivated by analogy with the spread of infectious disease have been applied to a wide range of social and economic adoption processes, including those related to new products, ideas, norms and behaviors. However, it is unknown how accurately these models account for the empirical structure of diffusion over networks. Here we describe the diffusion patterns arising from seven online domains, ranging from communications platforms to networked games to microblogging services, each involving distinct types of content and modes of sharing. We find strikingly similar patterns across all domains. In particular, the vast majority of cascades are small, and are described by a handful of simple tree structures that terminate within one degree of an initial adopting \"seed.\" In addition we find that structures other than these account for only a tiny fraction of total adoptions; that is, adoptions resulting from chains of referrals are extremely rare. Finally, even for the largest cascades that we observe, we find that the bulk of adoptions often takes place within one degree of a few dominant individuals. Together, these observations suggest new directions for modeling of online adoption processes."
]
} |
1412.1205 | 170290551 | In this paper, we present a novel yet simple homotopy proximal mapping algorithm for compressive sensing. The algorithm adopts a simple proximal mapping of the @math norm at each iteration and gradually reduces the regularization parameter for the @math norm. We prove a global linear convergence of the proposed homotopy proximal mapping (HPM) algorithm for solving compressive sensing under three different settings (i) sparse signal recovery under noiseless measurements, (ii) sparse signal recovery under noisy measurements, and (iii) nearly-sparse signal recovery under sub-gaussian noisy measurements. In particular, we show that when the measurement matrix satisfies Restricted Isometric Properties (RIP), our theoretical results in settings (i) and (ii) almost recover the best condition on the RIP constants for compressive sensing. In addition, in setting (iii), our results for sparse signal recovery are better than the previous results, and furthermore our analysis explicitly exhibits that more observations lead to not only more accurate recovery but also faster convergence. Compared with previous studies on linear convergence for sparse signal recovery, our algorithm is simple and efficient, and our results are better and provide more insights. Finally our empirical studies provide further support for the proposed homotopy proximal mapping algorithm and verify the theoretical results. | It is worth mentioning that there exists a battery of studies on establishing sharper conditions on the RIP constants for exact or accurate recovery (see and references therein). @cite_4 established the sharpest condition on the RIP constant @math for @math. In particular, they show that @math for @math is sufficient for exact recovery under noiseless measurements and accurate recovery under noisy measurements. Nevertheless, we make no attempt to sharpen the condition on RIP constants but rather focus on the optimization algorithms and their recovery properties.
| {
"cite_N": [
"@cite_4"
],
"mid": [
"2000150201"
],
"abstract": [
"This paper considers compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool, which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary while yielding sharp results. It is shown that for any given constant t ≥ 4/3, in compressed sensing, δtk(A) < √((t-1)/t) guarantees exact recovery, while for any ε > 0, δtk(A) < √((t-1)/t) + ε is not sufficient to guarantee the exact recovery of all k-sparse signals for large k. Similar results also hold for matrix recovery. In addition, the conditions δtk(A) < √((t-1)/t) and δtr(M) < √((t-1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case."
]
} |
1412.1205 | 170290551 | In this paper, we present a novel yet simple homotopy proximal mapping algorithm for compressive sensing. The algorithm adopts a simple proximal mapping of the @math norm at each iteration and gradually reduces the regularization parameter for the @math norm. We prove a global linear convergence of the proposed homotopy proximal mapping (HPM) algorithm for solving compressive sensing under three different settings (i) sparse signal recovery under noiseless measurements, (ii) sparse signal recovery under noisy measurements, and (iii) nearly-sparse signal recovery under sub-gaussian noisy measurements. In particular, we show that when the measurement matrix satisfies Restricted Isometric Properties (RIP), our theoretical results in settings (i) and (ii) almost recover the best condition on the RIP constants for compressive sensing. In addition, in setting (iii), our results for sparse signal recovery are better than the previous results, and furthermore our analysis explicitly exhibits that more observations lead to not only more accurate recovery but also faster convergence. Compared with previous studies on linear convergence for sparse signal recovery, our algorithm is simple and efficient, and our results are better and provide more insights. Finally our empirical studies provide further support for the proposed homotopy proximal mapping algorithm and verify the theoretical results. | Recently, several algorithms exhibit global linear convergence for the BPDN problem. @cite_2 studied an optimization problem for statistical recovery. They used a different update where @math , and @math is a parameter related to the restricted smoothness of the loss function. They proved a global linear convergence of the above update with @math for finding a solution up to the statistical tolerance. @cite_0 studied a proximal-gradient homotopy method for solving the problem. 
They iteratively solve the problem by proximal gradient descent with a decreasing regularization parameter @math and increasing accuracy at each stage, and use the solution obtained at each stage to warm start the next stage. A global linear convergence was also established. | {
"cite_N": [
"@cite_0",
"@cite_2"
],
"mid": [
"2161227280",
"1874232560"
],
"abstract": [
"We consider solving the @math -regularized least-squares ( @math -LS) problem in the context of sparse recovery for applications such as compressed sensing. The standard proximal gradient method, also known as iterative soft-thresholding when applied to this problem, has low computational cost per iteration but a rather slow convergence rate. Nevertheless, when the solution is sparse, it often exhibits fast linear convergence in the final stage. We exploit the local linear convergence using a homotopy continuation strategy, i.e., we solve the @math -LS problem for a sequence of decreasing values of the regularization parameter, and use an approximate solution at the end of each stage to warm start the next stage. Although similar strategies have been studied in the literature, there have been no theoretical analysis of their global iteration complexity. This paper shows that under suitable assumptions for sparse recovery, the proposed homotopy strategy ensures that all iterates along the homotopy sol...",
"Many statistical M-estimators are based on convex optimization problems formed by the weighted sum of a loss function with a norm-based regularizes We analyze the convergence rates of first-order gradient methods for solving such problems within a high-dimensional framework that allows the data dimension d to grow with (and possibly exceed) the sample size n. This high-dimensional structure precludes the usual global assumptions— namely, strong convexity and smoothness conditions—that underlie classical optimization analysis. We define appropriately restricted versions of these conditions, and show that they are satisfied with high probability for various statistical models. Under these conditions, our theory guarantees that Nesterov's first-order method [12] has a globally geometric rate of convergence up to the statistical precision of the model, meaning the typical Euclidean distance between the true unknown parameter θ* and the optimal solution ^θ. This globally linear rate is substantially faster than previous analyses of global convergence for specific methods that yielded only sublinear rates. Our analysis applies to a wide range of M-estimators and statistical models, including sparse linear regression using Lasso (l1-regularized regression), group Lasso, block sparsity, and low-rank matrix recovery using nuclear norm regularization. Overall, this result reveals an interesting connection between statistical precision and computational efficiency in high-dimensional estimation."
]
} |
1412.1265 | 2952304308 | This paper designs a high-performance deep convolutional network (DeepID2+) for face recognition. It is learned with the identification-verification supervisory signal. By increasing the dimension of hidden representations and adding supervision to early convolutional layers, DeepID2+ achieves new state-of-the-art on LFW and YouTube Faces benchmarks. Through empirical studies, we have discovered three properties of its deep neural activations critical for the high performance: sparsity, selectiveness and robustness. (1) It is observed that neural activations are moderately sparse. Moderate sparsity maximizes the discriminative power of the deep net as well as the distance between images. It is surprising that DeepID2+ still can achieve high recognition accuracy even after the neural responses are binarized. (2) Its neurons in higher layers are highly selective to identities and identity-related attributes. We can identify different subsets of neurons which are either constantly excited or inhibited when different identities or attributes are present. Although DeepID2+ is not taught to distinguish attributes during training, it has implicitly learned such high-level concepts. (3) It is much more robust to occlusions, although occlusion patterns are not included in the training set. | Sparse representation-based classification @cite_36 @cite_10 @cite_31 @cite_35 @cite_5 @cite_1 was extensively studied for face recognition with occlusions. Tang et al. @cite_23 proposed the Robust Boltzmann Machine to distinguish corrupted pixels and learn latent representations. These methods designed components explicitly handling occlusions, while we show that features learned by DeepID2+ have implicitly encoded invariance to occlusions. This is naturally achieved without adding regularization to models or artificial occlusion patterns to training data. | {
"cite_N": [
"@cite_35",
"@cite_36",
"@cite_1",
"@cite_23",
"@cite_5",
"@cite_31",
"@cite_10"
],
"mid": [
"2132467081",
"2129812935",
"1510982829",
"2054814877",
"",
"2033241812",
""
],
"abstract": [
"As a recently proposed technique, sparse representation based classification (SRC) has been widely used for face recognition (FR). SRC first codes a testing sample as a sparse linear combination of all the training samples, and then classifies the testing sample by evaluating which class leads to the minimum representation error. While the importance of sparsity is much emphasized in SRC and many related works, the use of collaborative representation (CR) in SRC is ignored by most literature. However, is it really the l 1 -norm sparsity that improves the FR accuracy? This paper devotes to analyze the working mechanism of SRC, and indicates that it is the CR but not the l1-norm sparsity that makes SRC powerful for face classification. Consequently, we propose a very simple yet much more efficient face classification scheme, namely CR based classification with regularized least square (CRC_RLS). The extensive experiments clearly show that CRC_RLS has very competitive classification results, while it has significantly less complexity than SRC.",
"We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by C1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.",
"Sparse representation based classification (SRC) methods have recently drawn much attention in face recognition, due to their good performance and robustness against misalignment, illumination variation, and occlusion. They assume the errors caused by image variations can be modeled as pixel-wisely sparse. However, in many practical scenarios these errors are not truly pixel-wisely sparse but rather sparsely distributed with structures, i.e., they constitute contiguous regions distributed at different face positions. In this paper, we introduce a class of structured sparsity-inducing norms into the SRC framework, to model various corruptions in face images caused by misalignment, shadow (due to illumination change), and occlusion. For practical face recognition, we develop an automatic face alignment method based on minimizing the structured sparsity norm. Experiments on benchmark face datasets show improved performance over SRC and other alternative methods.",
"While Boltzmann Machines have been successful at unsupervised learning and density modeling of images and speech data, they can be very sensitive to noise in the data. In this paper, we introduce a novel model, the Robust Boltzmann Machine (RoBM), which allows Boltzmann Machines to be robust to corruptions. In the domain of visual recognition, the RoBM is able to accurately deal with occlusions and noise by using multiplicative gating to induce a scale mixture of Gaussians over pixels. Image denoising and in-painting correspond to posterior inference in the RoBM. Our model is trained in an unsupervised fashion with unlabeled noisy data and can learn the spatial structure of the occluders. Compared to standard algorithms, the RoBM is significantly better at recognition and denoising on several face databases.",
"",
"It has been of great interest to find sparse and or nonnegative representations in computer vision literature. In this paper we propose a novel method to such a purpose and refer to it as nonnegative curds and whey (NNCW). The NNCW procedure consists of two stages. In the first stage we consider a set of sparse and nonnegative representations of a test image, each of which is a linear combination of the images within a certain class, by solving a set of regressiontype nonnegative matrix factorization problems. In the second stage we incorporate these representations into a new sparse and nonnegative representation by using the group nonnegative garrote. This procedure is particularly appropriate for discriminant analysis owing to its supervised and nonnegativity nature in sparsity pursuing. Experiments on several benchmark face databases and Caltech 101 image dataset demonstrate the efficiency and effectiveness of our nonnegative curds and whey method.",
""
]
} |
1412.1395 | 2401518422 | Collisions are a main cause of throughput degradation in wireless local area networks. The current contention mechanism used in the IEEE 802.11 networks is called carrier sense multiple access with collision avoidance (CSMA CA). It uses a binary exponential backoff technique to randomize each contender attempt of transmitting, effectively reducing the collision probability. Nevertheless, CSMA CA relies on a random backoff that while effective and fully decentralized, in principle is unable to completely eliminate collisions, therefore degrading the network throughput as more contenders attempt to share the channel. To overcome these situations, carrier sense multiple access with enhanced collision avoidance (CSMA ECA) is able to create a collision-free schedule in a fully decentralized manner using a deterministic backoff after successful transmissions. Hysteresis and fair share are two extensions of CSMA ECA to support a large number of contenders in a collision-free schedule. CSMA ECA offers better throughput than CSMA CA and short-term throughput fairness. This paper describes CSMA ECA and its extensions. In addition, it provides the first evaluation results of CSMA ECA with non-saturated traffic, channel errors, and its performance when coexisting with CSMA CA nodes. Furthermore, it describes the effects of imperfect clocks over CSMA ECA and presents a mechanism to leverage the impact of channel errors and the addition withdrawal of nodes over collision-free schedules. Finally, the experimental results on throughput and lost frames from a CSMA ECA implementation using commercial hardware and open-source firmware are presented. | Performing time slot reservation for each transmission is a well-known technique for increasing the throughput and maintaining Quality of Service (QoS) in TDMA schemes, like LTE @cite_11 . Applying the same concept to CSMA networks by modifying DCF's random backoff procedure provides similar benefits @cite_18 . 
The following are MAC protocols for WLANs, decentralised and capable of attaining greater throughput than CSMA CA by constructing collision-free schedules using reservation techniques. A survey of collision-free MAC protocols for WLANs is presented in @cite_30 . In this paper we only overview those that are similar to CSMA ECA. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_11"
],
"mid": [
"",
"2143747785",
"1506432011"
],
"abstract": [
"",
"This paper proposes a semi-random backoff (SRB) method that enables resource reservation in contention-based wireless LANs. The proposed SRB is fundamentally different from traditional random backoff methods because it provides an easy migration path from random backoffs to deterministic slot assignments. The central idea of the SRB is for the wireless station to set its backoff counter to a deterministic value upon a successful packet transmission. This deterministic value will allow the station to reuse the time-slot in consecutive backoff cycles. When multiple stations with successful packet transmissions reuse their respective time-slots, the collision probability is reduced, and the channel achieves the equivalence of resource reservation. In case of a failed packet transmission, a station will revert to the standard random backoff method and probe for a new available time-slot. The proposed SRB method can be readily applied to both 802.11 DCF and 802.11e EDCA networks with minimum modification to the existing DCF EDCA implementations. Theoretical analysis and simulation results validate the superior performance of the SRB for small-scale and heavily loaded wireless LANs. When combined with an adaptive mechanism and a persistent backoff process, SRB can also be effective for large-scale and lightly loaded wireless networks.",
"The use of the unlicensed spectrum by LTE networks (LTE-U or LAA-LTE) is being considered by mobile operators in order to satisfy increasing traffic demands and to make better use of the licensed spectrum. However, coexistence issues arise when LTE-U coverage overlaps with other technologies currently operating in unlicensed bands, in particular WiFi. Since LTE uses a TDMA OFDMA scheduled approach, coexisting WiFi networks may face starvation if the channel is fully occupied by LTE-U transmissions. In this paper we derive a novel proportional fair allocation scheme that ensures fair coexistence between LTE-U and WiFi. Importantly, we find that the proportional fair allocation is qualitatively different from previously consideredWiFi-only settings and that since the resulting allocation requires only quite limited knowledge of network parameters it is potentially easy to implement in practice, without the need for message-passing between heterogeneous networks."
]
} |
1412.1395 | 2401518422 | Collisions are a main cause of throughput degradation in wireless local area networks. The current contention mechanism used in the IEEE 802.11 networks is called carrier sense multiple access with collision avoidance (CSMA CA). It uses a binary exponential backoff technique to randomize each contender attempt of transmitting, effectively reducing the collision probability. Nevertheless, CSMA CA relies on a random backoff that while effective and fully decentralized, in principle is unable to completely eliminate collisions, therefore degrading the network throughput as more contenders attempt to share the channel. To overcome these situations, carrier sense multiple access with enhanced collision avoidance (CSMA ECA) is able to create a collision-free schedule in a fully decentralized manner using a deterministic backoff after successful transmissions. Hysteresis and fair share are two extensions of CSMA ECA to support a large number of contenders in a collision-free schedule. CSMA ECA offers better throughput than CSMA CA and short-term throughput fairness. This paper describes CSMA ECA and its extensions. In addition, it provides the first evaluation results of CSMA ECA with non-saturated traffic, channel errors, and its performance when coexisting with CSMA CA nodes. Furthermore, it describes the effects of imperfect clocks over CSMA ECA and presents a mechanism to leverage the impact of channel errors and the addition withdrawal of nodes over collision-free schedules. Finally, the experimental results on throughput and lost frames from a CSMA ECA implementation using commercial hardware and open-source firmware are presented. | Zero Collision MAC (ZC-MAC) @cite_31 achieves a zero collision schedule for WLANs in a fully decentralised way. It does so by allowing contenders to reserve one empty slot from a predefined virtual schedule of @math -slots in length. Backlogged stations pick a slot in the virtual cycle to attempt transmission. 
If two or more stations pick the same slot in the cycle, their transmissions will eventually collide. This forces the involved contenders to randomly and uniformly select another empty slot from those detected as empty in the previous cycle, plus the slot where they collided. When all @math stations reserve a different slot, a collision-free schedule is achieved. | {
"cite_N": [
"@cite_31"
],
"mid": [
"1484545291"
],
"abstract": [
"This paper proposes and analyzes a distributed MAC protocol that achieves zero collision with no control message exchange nor synchronization. ZC (ZeroCollision) is neither reservation-based nor dynamic TDMA; the protocol supports variable-length packets and does not lose efficiency when some of the stations do not transmit. At the same time, ZC is not a CSMA; in its steady state, it is completely collision-free. The stations transmit repeatedly in a round-robin order once the convergence state is reached. If some stations skip their turn, their transmissions are replaced by idle @math -second mini-slots that enable the other stations to keep track of their order. Because of its short medium access delay and its efficiency, the protocol supports both real-time and elastic applications. The protocol allows for nodes leaving and joining the network; it can allocate more throughput to specific nodes (such as an access point). The protocol is robust against carrier sensing errors or clock drift. While collision avoidance is guaranteed in a single collision domain, it is not the case in a multiple collision one. However, experiments show ZC supports a comparable amount of goodput to CSMA in a multiple collision domain environment. The paper presents an analysis and extensive simulations of the protocol, confirming that ZC outperforms both CSMA and TDMA at high and low load."
]
} |
1412.1395 | 2401518422 | Collisions are a main cause of throughput degradation in wireless local area networks. The current contention mechanism used in the IEEE 802.11 networks is called carrier sense multiple access with collision avoidance (CSMA CA). It uses a binary exponential backoff technique to randomize each contender attempt of transmitting, effectively reducing the collision probability. Nevertheless, CSMA CA relies on a random backoff that while effective and fully decentralized, in principle is unable to completely eliminate collisions, therefore degrading the network throughput as more contenders attempt to share the channel. To overcome these situations, carrier sense multiple access with enhanced collision avoidance (CSMA ECA) is able to create a collision-free schedule in a fully decentralized manner using a deterministic backoff after successful transmissions. Hysteresis and fair share are two extensions of CSMA ECA to support a large number of contenders in a collision-free schedule. CSMA ECA offers better throughput than CSMA CA and short-term throughput fairness. This paper describes CSMA ECA and its extensions. In addition, it provides the first evaluation results of CSMA ECA with non-saturated traffic, channel errors, and its performance when coexisting with CSMA CA nodes. Furthermore, it describes the effects of imperfect clocks over CSMA ECA and presents a mechanism to leverage the impact of channel errors and the addition withdrawal of nodes over collision-free schedules. Finally, the experimental results on throughput and lost frames from a CSMA ECA implementation using commercial hardware and open-source firmware are presented. | L-MAC is able to achieve higher throughput than CSMA CA with a very fast convergence speed. 
Nevertheless, the choice of @math presupposes prior knowledge of the number of empty slots ( @math , where @math is the number of contenders), which is not easily available in CSMA CA or may require a centralised entity @cite_33 . | {
"cite_N": [
"@cite_33"
],
"mid": [
"1600358513"
],
"abstract": [
"Carrier sense multiple access with enhanced collision avoidance (CSMA ECA) is a distributed MAC protocol that allows collision-free access to the medium in WLANs. The only difference between CSMA ECA and the well-known CSMA CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This paper shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov chain theory is used to describe the behaviour of the system in the transient state and to predict the expected number of slots to reach the collision-free operation. The paper also introduces CSMA E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA E2CA converges quicker to collision-free operation and delivers higher performance than CSMA ECA, specially in harsh wireless scenarios with high frame-error rates. The last part of the paper addresses scenarios with a large number of contenders. We suggest dynamic parameter adjustment techniques to accommodate a varying (and potentially high) number of contenders. The effectiveness of these adjustments in preventing collisions is validated by means of simulation."
]
} |
1412.1060 | 1562692307 | We prove a new upper bound on the number of @math -rich lines (lines with at least @math points) in a 'truly' @math -dimensional configuration of points @math . More formally, we show that, if the number of @math -rich lines is significantly larger than @math then there must exist a large subset of the points contained in a hyperplane. We conjecture that the factor @math can be replaced with a tight @math . If true, this would generalize the classic Szemerédi-Trotter theorem which gives a bound of @math on the number of @math -rich lines in a planar configuration. This conjecture was shown to hold in @math in the seminal work of Guth and Katz GK10 and was also recently proved over @math (under some additional restrictions) SS14 . For the special case of arithmetic progressions ( @math collinear points that are evenly distanced) we give a bound that is tight up to low order terms, showing that a @math -dimensional grid achieves the largest number of @math -term progressions. The main ingredient in the proof is a new method to find a low degree polynomial that vanishes on many of the rich lines. Unlike previous applications of the polynomial method, we do not find this polynomial by interpolation. The starting observation is that the degree @math Veronese embedding takes @math -collinear points to @math linearly dependent images. Hence, each collinear @math -tuple of points gives us a dependent @math -tuple of images. We then use the design-matrix method of BDWY12 to convert these 'local' linear dependencies into a global one, showing that all the images lie in a hyperplane. This then translates into a low degree polynomial vanishing on the original set. | Similarly, using the results in @cite_10 , we can prove the following theorem from which a slightly weaker version of Conjecture in @math trivially follows (see Appendix ). 
We are not aware of any examples where points arranged on a quadric hypersurface in @math result in significantly more rich lines than in a four dimensional grid. It is, however, possible that one needs to weaken Conjecture so that for some @math , an @math -dimensional hypersurface of constant degree (possibly depending on @math ) contains @math points. | {
"cite_N": [
"@cite_10"
],
"mid": [
"326952175"
],
"abstract": [
"We prove an incidence theorem for points and planes in the projective space @math over any field @math , whose characteristic @math . An incidence is viewed as an intersection along a line of a pair of two-planes from two canonical rulings of the Klein quadric. The Klein quadric can be traversed by a generic hyperplane, yielding a line-line incidence problem in a three-quadric, the Klein image of a regular line complex. This hyperplane can be chosen so that at most two lines meet. Hence, one can apply an algebraic theorem of Guth and Katz, with a constraint involving @math if @math . This yields a bound on the number of incidences between @math points and @math planes in @math , with @math as @math where @math is the maximum number of collinear planes, provided that @math if @math . Examples show that this bound cannot be improved without additional assumptions. This gives one a vehicle to establish geometric incidence estimates when @math . For a non-collinear point set @math and a non-degenerate symmetric or skew-symmetric bilinear form @math , the number of distinct values of @math on pairs of points of @math is @math . This is also the best known bound over @math , where it follows from the Szemerédi-Trotter theorem. Also, a set @math , not supported in a single semi-isotropic plane contains a point, from which @math distinct distances to other points of @math are attained."
]
} |
1412.0100 | 67171679 | State-of-the-art visual recognition and detection systems increasingly rely on large amounts of training data and complex classifiers. Therefore it becomes increasingly expensive both to manually annotate datasets and to keep running times at levels acceptable for practical applications. In this paper, we propose two solutions to address these issues. First, we introduce a weakly supervised, segmentation-based approach to learn accurate detectors and image classifiers from weak supervisory signals that provide only approximate constraints on target localization. We illustrate our system on the problem of action detection in static images (Pascal VOC Actions 2012), using human visual search patterns as our training signal. Second, inspired from the saccade-and-fixate operating principle of the human visual system, we use reinforcement learning techniques to train efficient search models for detection. Our sequential method is weakly supervised and general (it does not require eye movements), finds optimal search strategies for any given detection confidence function and achieves performance similar to exhaustive sliding window search at a fraction of its computational cost. | Many methods have been proposed to accelerate detectors. Prominent techniques are based on branch-and-bound heuristics @cite_24 @cite_15 , hierarchies of classifiers @cite_4 or methods that reuse computation between neighboring regions @cite_20 . In turn, different features have been used in the design of the detector response functions. Deep convolutional neural networks have surpassed methods based on support vector machines on many computer vision problems, such as image classification @cite_28 , object classification @cite_1 , object detection @cite_25 , action classification @cite_1 and pose prediction @cite_22 . 
Multiple instance learning @cite_12 formulations can be seen as a generalization of supervised learning, in which class labels are assigned to sets of training examples. Many algorithmic solutions have been proposed, based on SVMs @cite_2 @cite_16 , CRFs @cite_27 or boosted classifiers @cite_21 (see @cite_10 for a review). | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_28",
"@cite_21",
"@cite_1",
"@cite_24",
"@cite_27",
"@cite_2",
"@cite_12",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_25",
"@cite_20"
],
"mid": [
"2538008885",
"2113325037",
"",
"",
"2161381512",
"",
"2026581312",
"2108745803",
"2110119381",
"2113201641",
"2106848050",
"",
"2102605133",
""
],
"abstract": [
"Our objective is to obtain a state-of-the art object category detector by employing a state-of-the-art image classifier to search for the object in all possible image sub-windows. We use multiple kernel learning of Varma and Ray (ICCV 2007) to learn an optimal combination of exponential χ2 kernels, each of which captures a different feature channel. Our features include the distribution of edges, dense and sparse visual words, and feature descriptors at different levels of spatial organization.",
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-art or better performance on four academic benchmarks of diverse real-world images.",
"",
"",
"Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large-scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.",
"",
"We address the problem of weakly supervised semantic segmentation. The training images are labeled only by the classes they contain, not by their location in the image. On test images instead, the method must predict a class label for every pixel. Our goal is to enable segmentation algorithms to use multiple visual cues in this weakly supervised setting, analogous to what is achieved by fully supervised methods. However, it is difficult to assess the relative usefulness of different visual cues from weakly supervised training data. We define a parametric family of structured models, where each model weights visual cues in a different way. We propose a Maximum Expected Agreement model selection principle that evaluates the quality of a model from the family without looking at superpixel labels. Searching for the best model is a hard optimization problem, which has no analytic gradient and multiple local optima. We cast it as a Bayesian optimization problem and propose an algorithm based on Gaussian processes to efficiently solve it. Our second contribution is an Extremely Randomized Hashing Forest that represents diverse superpixel features as a sparse binary vector. It enables using appearance models of visual classes that are fast at training and testing and yet accurate. Experiments on the SIFT-flow dataset show a significant improvement over previous weakly supervised methods and even over some fully supervised methods.",
"This paper presents two new formulations of multiple-instance learning as a maximum margin problem. The proposed extensions of the Support Vector Machine (SVM) learning approach lead to mixed integer quadratic programs that can be solved heuristically. Our generalization of SVMs makes a state-of-the-art classification technique, including non-linear classification via kernels, available to an area that up to now has been largely dominated by special purpose methods. We present experimental results on a pharmaceutical data set and on applications in automated image indexing and document categorization.",
"The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89% correct predictions on a musk odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms.",
"Most successful object recognition systems rely on binary classification, deciding only if an object is present or not, but not providing information on the actual object location. To perform localization, one can take a sliding window approach, but this strongly increases the computational cost, because the classifier function has to be evaluated over a large set of candidate subwindows. In this paper, we propose a simple yet powerful branch-and-bound scheme that allows efficient maximization of a large class of classifier functions over all possible subimages. It converges to a globally optimal solution typically in sublinear time. We show how our method is applicable to different object detection and retrieval scenarios. The achieved speedup allows the use of classifiers for localization that formerly were considered too slow for this task, such as SVMs with a spatial pyramid kernel or nearest neighbor classifiers based on the chi2-distance. We demonstrate state-of-the-art performance of the resulting systems on the UIUC Cars dataset, the PASCAL VOC 2006 dataset and in the PASCAL VOC 2007 competition.",
"We present a new approach to multiple instance learning (MIL) that is particularly effective when the positive bags are sparse (i.e. contain few positive instances). Unlike other SVM-based MIL methods, our approach more directly enforces the desired constraint that at least one of the instances in a positive bag is positive. Using both artificial and real-world data, we experimentally demonstrate that our approach achieves greater accuracy than state-of-the-art MIL methods when positive bags are sparse, and performs competitively when they are not. In particular, our approach is the best performing method for image region classification.",
"",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
""
]
} |
1412.0060 | 1477796738 | We tackle the problem of estimating the 3D pose of an individual's upper limbs (arms+hands) from a chest mounted depth-camera. Importantly, we consider pose estimation during everyday interactions with objects. Past work shows that strong pose+viewpoint priors and depth-based features are crucial for robust performance. In egocentric views, hands and arms are observable within a well defined volume in front of the camera. We call this volume an egocentric workspace. A notable property is that hand appearance correlates with workspace location. To exploit this correlation, we classify arm+hand configurations in a global egocentric coordinate frame, rather than a local scanning window. This greatly simplifies the architecture and improves performance. We propose an efficient pipeline which 1) generates synthetic workspace exemplars for training using a virtual chest-mounted camera whose intrinsic parameters match our physical camera, 2) computes perspective-aware depth features on this entire volume and 3) recognizes discrete arm+hand pose classes through a sparse multi-class SVM. Our method provides state-of-the-art hand pose recognition performance from egocentric RGB-D images in real-time. | Hand-object pose estimation: While there is a large body of work on hand-tracking @cite_21 @cite_5 @cite_32 @cite_15 @cite_9 @cite_10 @cite_1 , we focus on hand pose estimation during object manipulations. Object interactions not only complicate analysis due to additional occlusions, but also provide additional contextual constraints (hands cannot penetrate object geometry, for example). @cite_4 describe an articulated tracker with soft anti-penetration constraints, increasing robustness to occlusion. Hamer et al. describe contextual priors for hands in relation to objects @cite_17 , and demonstrate their effectiveness for increasing tracking accuracy. Objects are easier to animate than hands because they have fewer joint parameters.
With this intuition, object motion can be used as an input signal for estimating hand motions @cite_2 . @cite_30 use a large synthetic dataset of hands manipulating objects, similar to us. We differ in our focus on single-image and egocentric analysis. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"",
"2543872873",
"2124419806",
"2098514451",
"2114663654",
"1580997725",
"2036427417",
"",
"2162254475",
"",
"1995905186"
],
"abstract": [
"",
"We present a method for tracking a hand while it is interacting with an object. This setting is arguably the one where hand-tracking has most practical relevance, but poses significant additional challenges: strong occlusions by the object as well as self-occlusions are the norm, and classical anatomical constraints need to be softened due to the external forces between hand and object. To achieve robustness to partial occlusions, we use an individual local tracker for each segment of the articulated structure. The segments are connected in a pairwise Markov random field, which enforces the anatomical hand structure through soft constraints on the joints between adjacent segments. The most likely hand configuration is found with belief propagation. Both range and color data are used as input. Experiments are presented for synthetic data with ground truth and for real data of people manipulating objects.",
"We propose a method that relies on markerless visual observations to track the full articulation of two hands that interact with each other in a complex, unconstrained manner. We formulate this as an optimization problem whose 54-dimensional parameter space represents all possible configurations of two hands, each represented as a kinematic structure with 26 Degrees of Freedom (DoFs). To solve this problem, we employ Particle Swarm Optimization (PSO), an evolutionary, stochastic optimization method with the objective of finding the two-hands configuration that best explains observations provided by an RGB-D sensor. To the best of our knowledge, the proposed method is the first to attempt and achieve the articulated motion tracking of two strongly interacting hands. Extensive quantitative and qualitative experiments with simulated and real world image sequences demonstrate that an accurate and efficient solution of this problem is indeed feasible.",
"This paper describes a Functionally-Distributed (FD) hand tracking method for hand-gesture-based wearable visual interfaces. The method is an extension of the Distributed Monte Carlo (DMC) tracking method which we have developed. The method provides coarse but rapid hand tracking results with the lowest possible number of samples on the wearable side, and can reduce latency which causes a decline in usability and performance of gesture-based interfaces. The method also provides the adaptive tracking mechanism by using the sufficient number of samples and the hand-color modeling on the infrastructure side. This paper also describes three promising applications of the hand-gesture-based wearable visual interfaces implemented on our wearable systems.",
"Articulated hand-tracking systems have been widely used in virtual reality but are rarely deployed in consumer applications due to their price and complexity. In this paper, we propose an easy-to-use and inexpensive system that facilitates 3-D articulated user-input using the hands. Our approach uses a single camera to track a hand wearing an ordinary cloth glove that is imprinted with a custom pattern. The pattern is designed to simplify the pose estimation problem, allowing us to employ a nearest-neighbor approach to track hands at interactive rates. We describe several proof-of-concept applications enabled by our system that we hope will provide a foundation for new interactions in modeling, animation control and augmented reality.",
"Reliable hand detection and tracking in passive 2D video still remains a challenge. Yet the consumer market for gesture-based interaction is expanding rapidly and surveillance systems that can deduce fine-grained human activities involving hand and arm postures are in high demand. In this paper, we present a hand tracking method that does not require reliable detection. We built it on top of “Flocks of Features” which combines grey-level optical flow, a “flocking” constraint, and a learned foreground color distribution. By adding probabilistic (instead of binary classified) detections based on grey-level appearance as an additional image cue, we show improved tracking performance despite rapid hand movements and posture changes. This helps overcome tracking difficulties in texture-rich and skin-colored environments, improving performance on a 10-minute collection of video clips from 75% to 86% (see examples on our website).",
"Animating hand-object interactions is a frequent task in applications such as the production of 3d movies. Unfortunately this task is difficult due to the hand's many degrees of freedom and the constraints on the hand motion imposed by the geometry of the object. However, the causality between the object state and the hand's pose can be exploited in order to simplify the animation process. In this paper, we present a method that takes an animation of an object as input and automatically generates the corresponding hand motion. This approach is based on the simple observation that objects are easier to animate than hands, since they usually have fewer degrees of freedom. The method is data-driven; sequences of hands manipulating an object are captured semi-automatically with a structured-light setup. The training data is then combined with a new animation of the object in order to generate a plausible animation featuring the hand-object interaction.",
"",
"A method is proposed that can generate a ranked list of plausible three-dimensional hand configurations that best match an input image. Hand pose estimation is formulated as an image database indexing problem, where the closest matches for an input hand image are retrieved from a large database of synthetic hand images. In contrast to previous approaches, the system can function in the presence of clutter, thanks to two novel clutter-tolerant indexing methods. First, a computationally efficient approximation of the image-to-model chamfer distance is obtained by embedding binary edge images into a high-dimensional Euclidean space. Second, a general-purpose, probabilistic line matching method identifies those line segment correspondences between model and input images that are the least likely to have occurred by chance. The performance of this clutter tolerant approach is demonstrated in quantitative experiments with hundreds of real hand images.",
"",
"In this paper, we propose a prior for hand pose estimation that integrates the direct relation between a manipulating hand and a 3d object. This is of particular interest for a variety of applications since many tasks performed by humans require hand-object interaction. Inspired by the ability of humans to learn the handling of an object from a single example, our focus lies on very sparse training data. We express estimated hand poses in local object coordinates and extract for each individual hand segment, the relative position and orientation as well as contact points on the object. The prior is then modeled as a spatial distribution conditioned to the object. Given a new object of the same object class and new hand dimensions, we can transfer the prior by a procedure involving a geometric warp. In our experiments, we demonstrate that the prior may be used to improve the robustness of a 3d hand tracker and to synthesize a new hand grasping a new object. For this, we integrate the prior into a unified belief propagation framework for tracking and synthesis."
]
} |
1412.0060 | 1477796738 | We tackle the problem of estimating the 3D pose of an individual's upper limbs (arms+hands) from a chest mounted depth-camera. Importantly, we consider pose estimation during everyday interactions with objects. Past work shows that strong pose+viewpoint priors and depth-based features are crucial for robust performance. In egocentric views, hands and arms are observable within a well defined volume in front of the camera. We call this volume an egocentric workspace. A notable property is that hand appearance correlates with workspace location. To exploit this correlation, we classify arm+hand configurations in a global egocentric coordinate frame, rather than a local scanning window. This greatly simplifies the architecture and improves performance. We propose an efficient pipeline which 1) generates synthetic workspace exemplars for training using a virtual chest-mounted camera whose intrinsic parameters match our physical camera, 2) computes perspective-aware depth features on this entire volume and 3) recognizes discrete arm+hand pose classes through a sparse multi-class SVM. Our method provides state-of-the-art hand pose recognition performance from egocentric RGB-D images in real-time. | Non-parametric recognition: Our work is inspired by non-parametric techniques that make use of synthetic training data @cite_30 @cite_14 @cite_4 @cite_6 @cite_11 . @cite_14 make use of pose-sensitive hashing techniques for efficient matching of synthetic RGB images rendered with Poser. We generate synthetic depth images, mimicking capture conditions of our actual camera.
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_4",
"@cite_6",
"@cite_11"
],
"mid": [
"",
"2152926413",
"2543872873",
"2110619642",
"196115574"
],
"abstract": [
"",
"Example-based methods are effective for parameter estimation problems when the underlying system is simple or the dimensionality of the input is low. For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. We introduce a new algorithm that learns a set of hashing functions that efficiently index examples relevant to a particular estimation task. Our algorithm extends locality-sensitive hashing, a recently developed method to find approximate neighbors in time sublinear in the number of examples. This method depends critically on the choice of hash functions that are optimally relevant to a particular estimation problem. Experiments demonstrate that the resulting algorithm, which we call parameter-sensitive hashing, can rapidly and accurately estimate the articulated pose of human figures from a large database of example images.",
"We present a method for tracking a hand while it is interacting with an object. This setting is arguably the one where hand-tracking has most practical relevance, but poses significant additional challenges: strong occlusions by the object as well as self-occlusions are the norm, and classical anatomical constraints need to be softened due to the external forces between hand and object. To achieve robustness to partial occlusions, we use an individual local tracker for each segment of the articulated structure. The segments are connected in a pairwise Markov random field, which enforces the anatomical hand structure through soft constraints on the joints between adjacent segments. The most likely hand configuration is found with belief propagation. Both range and color data are used as input. Experiments are presented for synthetic data with ground truth and for real data of people manipulating objects.",
"This paper presents the first semi-supervised transductive algorithm for real-time articulated hand pose estimation. Noisy data and occlusions are the major challenges of articulated hand pose estimation. In addition, the discrepancies among realistic and synthetic pose data undermine the performances of existing approaches that use synthetic data extensively in training. We therefore propose the Semi-supervised Transductive Regression (STR) forest which learns the relationship between a small, sparsely labelled realistic dataset and a large synthetic dataset. We also design a novel data-driven, pseudo-kinematic technique to refine noisy or occluded joints. Our contributions include: (i) capturing the benefits of both realistic and synthetic data via transductive learning, (ii) showing accuracies can be improved by considering unlabelled data, and (iii) introducing a pseudo-kinematic technique to refine articulations efficiently. Experimental results show not only the promising performance of our method with respect to noise and occlusions, but also its superiority over state-of-the-arts in accuracy, robustness and speed.",
"Benchmarking methods for 3d hand tracking is still an open problem due to the difficulty of acquiring ground truth data. We introduce a new dataset and benchmarking protocol that is insensitive to the accumulative error of other protocols. To this end, we create testing frame pairs of increasing difficulty and measure the pose estimation error separately for each of them. This approach gives new insights and allows to accurately study the performance of each feature or method without employing a full tracking pipeline. Following this protocol, we evaluate various directional distances in the context of silhouette-based 3d hand tracking, expressed as special cases of a generalized Chamfer distance form. An appropriate parameter setup is proposed for each of them, and a comparative study reveals the best performing method in this context."
]
} |
1412.0426 | 2090228888 | An undergraduate compilers course poses significant challenges to students, in both the conceptual richness of the major components and in the programming effort necessary to implement them. In this paper, I argue that a related architecture, the interpreter, serves as an effective conceptual framework in which to teach some of the later stages of the compiler pipeline. This framework can serve both to unify some of the major concepts that are taught in a typical undergraduate course and to structure the implementation of a semester-long compiler project. | From a theoretical perspective, it is well known that both semantic analysis and code generation are closely related to a language's interpreter , the former a simulation of program execution on abstract value domains @cite_6 and the latter a specialization of the interpreter with respect to a program's source code @cite_9 . It is therefore unsurprising that the implementation of an interpreter shares many structural characteristics with the implementation of both the semantic analysis and code generation phases. However, the details of these correspondences are likely inaccessible to the typical undergraduate student, and moreover, they are a distraction. An undergraduate course in compiler construction generally focuses on a narrow range of language features, emphasizing instead many real-world concerns such as efficient symbol table construction, separation of front and back ends through an intermediate representation, call stack frames, and (time permitting) various code-improving transformation techniques. As a consequence, interpreters, if they are included at all, generally serve as a foil for the superior performance of compiled code @cite_13 @cite_1 , or else as material for a more breadth-based course on general language implementation @cite_8 @cite_5 . | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_5",
"@cite_13"
],
"mid": [
"",
"1556604985",
"570548980",
"1963705166",
"109232922",
"2230980491"
],
"abstract": [
"",
"Functions, types and expressions programming languages and their operational semantics compilation partial evaluation of a flow chart language partial evaluation of a first-order functional language the view from Olympus partial evaluation of the Lambda calculus partial evaluation of prolog aspects of Similix - a partial evaluator for a subset of scheme partial evaluation of C applications of partial evaluation termination of partial evaluation program analysis more general program transformation guide to the literature the self-applicable scheme specializer.",
"1. Introduction 2. Language Processors 3. Compilation 4. Syntactic Analysis 5. Contextual Analysis 6. Run-time Organization 7. Code Generation 8. Interpretation 9. Conclusion",
"Starting from a denotational semantics of the eager untyped lambda-calculus with explicit runtime errors, the standard collecting semantics is defined as specifying the strongest program properties. By a first abstraction, a new sound type collecting semantics is derived in compositional fix-point form. Then by successive (semi-dual) Galois connection based abstractions, type systems and or type inference algorithms are designed as abstract semantics or abstract interpreters approximating the type collecting semantics. This leads to a hierarchy of type systems, which is part of the lattice of abstract interpretations of the untyped lambda-calculus. This hierarchy includes two new à la Church/Curry polytype systems. Abstractions of this polytype semantics lead to classical Milner/Mycroft and Damas/Milner polymorphic type schemes, Church/Curry monotypes and Hindley principal typing algorithm. This shows that types are abstract interpretations.",
"As a textbook suitable for the classroom or self-study, Michael Scott's Programming Language Pragmatics provides a worthy tour of the theory and practice of how programming languages are run on today's computers. Clearly organized and filled with a wide-ranging perspective on over 40 different languages, this book will be appreciated for its depth and breadth of coverage on an essential topic in computer science. With references to dozens of programming languages, from Ada to Turing and everything in between (including C, C++, Java, and Perl), this book is a truly in-depth guide to how code is compiled (or interpreted) and executed on computer hardware. Early chapters tend to be slightly more theoretical (with coverage of regular expressions and context-free grammars) and will be most valuable to the computer science student, but much of this book is accessible to anyone seeking to widen their knowledge (especially since recent standards surrounding XML make use of some of the same vocabulary presented here). The book has a comprehensive discussion of compilation and linking, as well as how data types are implemented in memory. Sections on functional and logical programming (illustrated with Scheme and Prolog, which are often used in AI research) can expand your understanding of how programming languages work. Final sections on the advantages--and complexities--of concurrent processing, plus a nice treatment of code optimization techniques, round out the text here. Each chapter provides numerous exercises, so you can try out the ideas on your own. Students will benefit from the practical examples here, drawn from a wide range of languages. If you are a self-taught developer, the very approachable tutorial can give you perspective on the formal definitions of many computer languages, which can help you master new ones more effectively. --Richard Dragan Topics covered: A survey of today's programming languages, compilation vs. 
interpretation, the compilation process, regular expression and context-free grammars, scanners and parsers, names, scopes and bindings, scope rules, overloading, semantic analysis, introduction to computer architecture, representing data, instruction sets, 680x0 and MIPs architectures, control flow and expression evaluation, iteration and recursion, data types, type checking, records, arrays, strings, sets, pointers, lists, file I O, subroutines, calling sequences and parameter passing, exception handling, coroutines, compile back-end processing, code generation, linking, object-oriented programming basics, encapsulation and inheritance, late binding, multiple inheritance, functional and logical languages, Scheme and Prolog, programming with concurrency, shared memory and message passing, and code optimization techniques.",
"Long-awaited revision to a unique guide that covers both compilers and interpreters Revised, updated, and now focusing on Java instead of C++, this long-awaited, latest edition of this popular book teaches programmers and software engineering students how to write compilers and interpreters using Java. You'll write compilers and interpreters as case studies, generating general assembly code for a Java Virtual Machine that takes advantage of the Java Collections Framework to shorten and simplify the code. In addition, coverage includes Java Collections Framework, UML modeling, object-oriented programming with design patterns, working with XML intermediate code, and more."
]
} |
1412.0426 | 2090228888 | An undergraduate compilers course poses significant challenges to students, in both the conceptual richness of the major components and in the programming effort necessary to implement them. In this paper, I argue that a related architecture, the interpreter, serves as an effective conceptual framework in which to teach some of the later stages of the compiler pipeline. This framework can serve both to unify some of the major concepts that are taught in a typical undergraduate course and to structure the implementation of a semester-long compiler project. | On the other hand, the construction of interpreters plays a prominent role in many courses on programming language design. The simplicity of an interpreter's core structure and the close correspondence to a language's semantics makes this a natural teaching tool, both for conceptual organization and for prototypical implementation of various language features @cite_7 @cite_12 . Abelson and Sussman's classic CS1 text @cite_11 even uses this structure to introduce the structure of a simple compiler, though the correspondence between the two is quickly buried in the details of code generation, and their compiler lacks many real-world features such as a semantic analysis phase. | {
"cite_N": [
"@cite_12",
"@cite_7",
"@cite_11"
],
"mid": [
"",
"1583092647",
"2089674328"
],
"abstract": [
"",
"From the Publisher: Designed for the upper division Programming Languages course offered in computer science departments, this text focuses on the principles of the design and implementation of programming languages. The language SCHEME, a dialect of LISP, is used to demonstrate abstraction and representation.",
"From the Publisher: With an analytical and rigorous approach to problem solving and programming techniques,this book is oriented toward engineering. Structure and Interpretation of Computer Programs emphasizes the central role played by different approaches to dealing with time in computational models. Its unique approach makes it appropriate for an introduction to computer science courses,as well as programming languages and program design."
]
} |
1412.0426 | 2090228888 | An undergraduate compilers course poses significant challenges to students, in both the conceptual richness of the major components and in the programming effort necessary to implement them. In this paper, I argue that a related architecture, the interpreter, serves as an effective conceptual framework in which to teach some of the later stages of the compiler pipeline. This framework can serve both to unify some of the major concepts that are taught in a typical undergraduate course and to structure the implementation of a semester-long compiler project. | Several undergraduate-level texts on programming language theory make the formal connection between a language's type system and its concrete semantics explicit (for example, Harper @cite_2 ). The correspondence between an interpreter and a type-checker is an easy consequence of this. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1760139041"
],
"abstract": [
"Types are the central organizing principle of the theory of programming languages. In this innovative book, Professor Robert Harper offers a fresh perspective on the fundamentals of these languages through the use of type theory. Whereas most textbooks on the subject emphasize taxonomy, Harper instead emphasizes genetics, examining the building blocks from which all programming languages are constructed. Language features are manifestations of type structure. The syntax of a language is governed by the constructs that define its types, and its semantics is determined by the interactions among those constructs. The soundness of a language design - the absence of ill-defined programs - follows naturally. Professor Harper's presentation is simultaneously rigorous and intuitive, relying on only elementary mathematics. The framework he outlines scales easily to a rich variety of language concepts and is directly applicable to their implementation. The result is a lucid introduction to programming theory that is both accessible and practical."
]
} |
1412.0426 | 2090228888 | An undergraduate compilers course poses significant challenges to students, in both the conceptual richness of the major components and in the programming effort necessary to implement them. In this paper, I argue that a related architecture, the interpreter, serves as an effective conceptual framework in which to teach some of the later stages of the compiler pipeline. This framework can serve both to unify some of the major concepts that are taught in a typical undergraduate course and to structure the implementation of a semester-long compiler project. | Pagan proposes the inclusion of material on partial evaluation to derive a code generator from a language's interpreter @cite_14 . However, his work in that paper is more focused on the specialization of an interpreter for a program with respect to a file of known input values, and the way in which this can be used to generate a more efficient intermediate representation (Pascal source code). He does not address the correspondence between interpretation and semantic analysis. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2029065308"
],
"abstract": [
"A brief account of the concept of partial computation is given in the context of the Pascal language. The manual conversion of programs into generating extensions is explained using examples of gradually increasing complexity. This culminates in a readily applicable but too-little known technique of converting interpreters into compilers without dealing directly with machine language. Students taking courses in language processing should be taught this technique and perhaps also the general principles underlying it. A simple example of the application of the technique is presented."
]
} |
1412.0265 | 2103096501 | In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. | In this paper, we focus on the Riemannian manifold of SPD matrices and on the Grassmann manifold. SPD matrices find a variety of applications in computer vision @cite_19 . For instance, covariance region descriptors are used in object detection @cite_28 , texture classification @cite_35 @cite_2 , object tracking, action recognition and human recognition @cite_13 @cite_57 . 
Diffusion Tensor Imaging (DTI) was one of the pioneering fields for the development of non-linear algorithms on SPD matrices @cite_14 @cite_41 . In optical flow estimation and motion segmentation, structure tensors are often employed to encode important image features, such as texture and motion @cite_46 @cite_30 . Structure tensors have also been used in single image segmentation @cite_22 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_14",
"@cite_22",
"@cite_28",
"@cite_41",
"@cite_57",
"@cite_19",
"@cite_2",
"@cite_46",
"@cite_13"
],
"mid": [
"",
"2585165747",
"1983496390",
"2114816128",
"2116022929",
"",
"",
"78159342",
"",
"2096484739",
"2584085909"
],
"abstract": [
"",
"We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix.",
"Tensors are nowadays a common source of geometric information. In this paper, we propose to endow the tensor space with an affine-invariant Riemannian metric. We demonstrate that it leads to strong theoretical properties: the cone of positive definite symmetric matrices is replaced by a regular and complete manifold without boundaries (null eigenvalues are at the infinity), the geodesic between two tensors and the mean of a set of tensors are uniquely defined, etc. We have previously shown that the Riemannian metric provides a powerful framework for generalizing statistics to manifolds. In this paper, we show that it is also possible to generalize to tensor fields many important geometric data processing algorithms such as interpolation, filtering, diffusion and restoration of missing data. For instance, most interpolation and Gaussian filtering schemes can be tackled efficiently through a weighted mean computation. Linear and anisotropic diffusion schemes can be adapted to our Riemannian framework, through partial differential evolution equations, provided that the metric of the tensor space is taken into account. For that purpose, we provide intrinsic numerical schemes to compute the gradient and Laplace-Beltrami operators. Finally, to enforce the fidelity to the data (either sparsely distributed tensors or complete tensors fields) we propose least-squares criteria based on our invariant Riemannian distance which are particularly simple and efficient to solve.",
"This paper proposes a novel method to apply the standard graph cut technique to segmenting multimodal tensor valued images. The Riemannian nature of the tensor space is explicitly taken into account by first mapping the data to a Euclidean space where non-parametric kernel density estimates of the regional distributions may be calculated from user initialized regions. These distributions are then used as regional priors in calculating graph edge weights. Hence this approach utilizes the true variation of the tensor data by respecting its Riemannian structure in calculating distances when forming probability distributions. Further, the non-parametric model generalizes to arbitrary tensor distribution unlike the Gaussian assumption made in previous works. Casting the segmentation problem in a graph cut framework yields a segmentation robust with respect to initialization on the data tested.",
"We present a new algorithm to detect pedestrian in still images utilizing covariance matrices as object descriptors. Since the descriptors do not form a vector space, well known machine learning techniques are not well suited to learn the classifiers. The space of d-dimensional nonsingular covariance matrices can be represented as a connected Riemannian manifold. The main contribution of the paper is a novel approach for classifying points lying on a connected Riemannian manifold using the geometry of the space. The algorithm is tested on INRIA and DaimlerChrysler pedestrian datasets where superior detection rates are observed over the previous approaches.",
"",
"",
"Feature extraction, coding and pooling, are important components on many contemporary object recognition paradigms. In this paper we explore novel pooling techniques that encode the second-order statistics of local descriptors inside a region. To achieve this effect, we introduce multiplicative second-order analogues of average and max-pooling that together with appropriate non-linearities lead to state-of-the-art performance on free-form region recognition, without any type of feature coding. Instead of coding, we found that enriching local descriptors with additional image information leads to large performance gains, especially in conjunction with the proposed pooling methodology. We show that second-order pooling over free-form regions produces results superior to those of the winning systems in the Pascal VOC 2011 semantic segmentation challenge, with models that are 20,000 times faster.",
"",
"We propose a novel algorithm for clustering data sampled from multiple submanifolds of a Riemannian manifold. First, we learn a representation of the data using generalizations of local nonlinear dimensionality reduction algorithms from Euclidean to Riemannian spaces. Such generalizations exploit geometric properties of the Riemannian space, particularly its Riemannian metric. Then, assuming that the data points from different groups are separated, we show that the null space of a matrix built from the local representation gives the segmentation of the data. Our method is computationally simple and performs automatic segmentation without requiring user initialization. We present results on 2-D motion segmentation and diffusion tensor imaging segmentation.",
""
]
} |
1412.0265 | 2103096501 | In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. | In recent years, several optimization algorithms have been proposed for Riemannian manifolds. In particular, LogitBoost for classification on Riemannian manifolds was introduced in @cite_28 . This algorithm has the drawbacks of approximating the manifold by tangent spaces and not scaling with the number of training samples, due to the heavy use of exponential and logarithmic maps to transition between the manifold and the tangent space, as well as of gradient descent based Karcher mean calculation.
Here, our positive definite kernels enable us to use more efficient and accurate classification algorithms on manifolds without requiring tangent space approximations. Furthermore, as shown in @cite_5 @cite_56 , extending existing kernel-free, manifold-based binary classifiers to the multi-class case is not straightforward. In contrast, the kernel-based classifiers on manifolds described in this paper can readily be used in multi-class scenarios. | {
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_56"
],
"mid": [
"2116022929",
"1558610543",
""
],
"abstract": [
"We present a new algorithm to detect pedestrian in still images utilizing covariance matrices as object descriptors. Since the descriptors do not form a vector space, well known machine learning techniques are not well suited to learn the classifiers. The space of d-dimensional nonsingular covariance matrices can be represented as a connected Riemannian manifold. The main contribution of the paper is a novel approach for classifying points lying on a connected Riemannian manifold using the geometry of the space. The algorithm is tested on INRIA and DaimlerChrysler pedestrian datasets where superior detection rates are observed over the previous approaches.",
"In video surveillance, classification of visual data can be very hard, due to the scarce resolution and the noise characterizing the sensors' data. In this paper, we propose a novel feature, the ARray of COvariances (ARCO), and a multi-class classification framework operating on Riemannian manifolds. ARCO is composed by a structure of covariance matrices of image features, able to extract information from data at prohibitive low resolutions. The proposed classification framework consists in instantiating a new multi-class boosting method, working on the manifold Symd+ of symmetric positive definite d × d (covariance) matrices. As practical applications, we consider different surveillance tasks, such as head pose classification and pedestrian detection, providing novel state-of-the-art performances on standard datasets.",
""
]
} |
1412.0265 | 2103096501 | In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. | @cite_46 , dimensionality reduction and clustering methods were extended to manifolds by designing Riemannian versions of Laplacian Eigenmaps (LE), Locally Linear Embedding (LLE) and Hessian LLE (HLLE). Clustering was performed after mapping to a low dimensional space which does not necessarily preserve all the information in the original data. Instead, we use our kernels to perform clustering in a higher dimensional RKHS that embeds the manifold of interest. | {
"cite_N": [
"@cite_46"
],
"mid": [
"2096484739"
],
"abstract": [
"We propose a novel algorithm for clustering data sampled from multiple submanifolds of a Riemannian manifold. First, we learn a representation of the data using generalizations of local nonlinear dimensionality reduction algorithms from Euclidean to Riemannian spaces. Such generalizations exploit geometric properties of the Riemannian space, particularly its Riemannian metric. Then, assuming that the data points from different groups are separated, we show that the null space of a matrix built from the local representation gives the segmentation of the data. Our method is computationally simple and performs automatic segmentation without requiring user initialization. We present results on 2-D motion segmentation and diffusion tensor imaging segmentation."
]
} |
1412.0265 | 2103096501 | In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. | The use of kernels on SPD matrices has previously been advocated for locality preserving projections @cite_53 and sparse coding @cite_13 . In the first case, the kernel, derived from the affine-invariant distance, is not positive definite in general @cite_53 . In the second case, the kernel is positive definite only for some values of the Gaussian bandwidth parameter @math @cite_13 . 
For all kernel methods, the optimal choice of @math largely depends on the data distribution and hence constraints on @math are not desirable. Moreover, many popular automatic model selection methods require @math to be continuously variable @cite_25 . | {
"cite_N": [
"@cite_53",
"@cite_13",
"@cite_25"
],
"mid": [
"2030605635",
"2584085909",
"2158001550"
],
"abstract": [
"A convenient way of analysing Riemannian manifolds is to embed them in Euclidean spaces, with the embedding typically obtained by flattening the manifold via tangent spaces. This general approach is not free of drawbacks. For example, only distances between points to the tangent pole are equal to true geodesic distances. This is restrictive and may lead to inaccurate modelling. Instead of using tangent spaces, we propose embedding into the Reproducing Kernel Hilbert Space by introducing a Riemannian pseudo kernel. We furthermore propose to recast a locality preserving projection technique from Euclidean spaces to Riemannian manifolds, in order to demonstrate the benefits of the embedding. Experiments on several visual classification tasks (gesture recognition, person re-identification and texture classification) show that in comparison to tangent-based processing and state-of-the-art methods (such as tensor canonical correlation analysis), the proposed approach obtains considerable improvements in discrimination accuracy.",
"",
"The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVMs) is considered. This is done by minimizing some estimates of the generalization error of SVMs using a gradient descent algorithm over the set of parameters. Usual methods for choosing parameters, based on exhaustive search become intractable as soon as the number of parameters exceeds two. Some experimental results assess the feasibility of our approach for a large number of parameters (more than 100) and demonstrate an improvement of generalization performance."
]
} |
1412.0265 | 2103096501 | In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. | Recently, mean-shift clustering with the heat kernel on Riemannian manifolds was introduced @cite_50 . However, due to the mathematical complexity of the kernel function, computing the exact kernel is not tractable and hence only an approximation of the true kernel was used. Parallel to our work, kernels on SPD matrices and on Grassmann manifolds were used in @cite_12 , albeit without explicit proof of their positive definiteness. 
In contrast, in this paper, we introduce a unified framework for analyzing the positive definiteness of the Gaussian kernel defined on any manifold and use this framework to identify provably positive definite kernels on the manifold of SPD matrices and on the Grassmann manifold. | {
"cite_N": [
"@cite_12",
"@cite_50"
],
"mid": [
"2141830256",
"157436748"
],
"abstract": [
"In computer vision applications, features often lie on Riemannian manifolds with known geometry. Popular learning algorithms such as discriminant analysis, partial least squares, support vector machines, etc., are not directly applicable to such features due to the non-Euclidean nature of the underlying spaces. Hence, classification is often performed in an extrinsic manner by mapping the manifolds to Euclidean spaces using kernels. However, for kernel based approaches, poor choice of kernel often results in reduced performance. In this paper, we address the issue of kernel selection for the classification of features that lie on Riemannian manifolds using the kernel learning approach. We propose two criteria for jointly learning the kernel and the classifier using a single optimization problem. Specifically, for the SVM classifier, we formulate the problem of learning a good kernel-classifier combination as a convex optimization problem and solve it efficiently following the multiple kernel learning approach. Experimental results on image set-based classification and activity recognition clearly demonstrate the superiority of the proposed approach over existing methods for classification of manifold features.",
"The original mean shift algorithm [1] on Euclidean spaces (MS) was extended in [2] to operate on general Riemannian manifolds. This extension is extrinsic (Ext-MS) since the mode seeking is performed on the tangent spaces [3], where the underlying curvature is not fully considered (tangent spaces are only valid in a small neighborhood). In [3] was proposed an intrinsic mean shift designed to operate on two particular Riemannian manifolds (IntGS-MS), i.e. Grassmann and Stiefel manifolds (using manifold-dedicated density kernels). It is then natural to ask whether mean shift could be intrinsically extended to work on a large class of manifolds. We propose a novel paradigm to intrinsically reformulate the mean shift on general Riemannian manifolds. This is accomplished by embedding the Riemannian manifold into a Reproducing Kernel Hilbert Space (RKHS) by using a general and mathematically well-founded Riemannian kernel function, i.e. heat kernel [5]. The key issue is that when the data is implicitly mapped to the Hilbert space, the curvature of the manifold is taken into account (i.e. exploits the underlying information of the data). The inherent optimization is then performed on the embedded space. Theoretic analysis and experimental results demonstrate the promise and effectiveness of this novel paradigm."
]
} |
1412.0265 | 2103096501 | In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. | It is important to note the difference between this work and manifold-learning methods such as @cite_51 . We work with data sampled from a manifold whose geometry is well known. In contrast, manifold-learning methods attempt to learn the structure of an underlying unknown manifold from data samples. Furthermore, those methods often assume that noise-free data samples lie on a manifold from which noise push them away. 
In our study, data points, regardless of their noise content, always lie on the mathematically well-defined manifold. | {
"cite_N": [
"@cite_51"
],
"mid": [
"1966949944"
],
"abstract": [
"We consider the general problem of utilizing both labeled and unlabeled data to improve classification accuracy. Under the assumption that the data lie on a submanifold in a high dimensional space, we develop an algorithmic framework to classify a partially labeled data set in a principled manner. The central idea of our approach is that classification functions are naturally defined only on the submanifold in question rather than the total ambient space. Using the Laplace-Beltrami operator one produces a basis (the Laplacian Eigenmaps) for a Hilbert space of square integrable functions on the submanifold. To recover such a basis, only unlabeled examples are required. Once such a basis is obtained, training can be performed using the labeled data set. Our algorithm models the manifold using the adjacency graph for the data and approximates the Laplace-Beltrami operator by the graph Laplacian. We provide details of the algorithm, its theoretical justification, and several practical applications for image, speech, and text classification."
]
} |
1411.7416 | 2160944939 | Mobile sensing has become a promising paradigm for mobile users to obtain information by task crowdsourcing. However, due to the social preferences of mobile users, the quality of sensing reports may be impacted by the underlying social attributes and selfishness of individuals. Therefore, it is crucial to consider the social impacts and trustworthiness of mobile users when selecting task participants in mobile sensing. In this paper, we propose a Social Aware Crowdsourcing with Reputation Management (SACRM) scheme to select the well-suited participants and allocate the task rewards in mobile sensing. Specifically, we consider the social attributes, task delay and reputation in crowdsourcing and propose a participant selection scheme to choose the well-suited participants for the sensing task under a fixed task budget. A report assessment and rewarding scheme is also introduced to measure the quality of the sensing reports and allocate the task rewards based on the assessed report quality. In addition, we develop a reputation management scheme to evaluate the trustworthiness and cost performance ratio of mobile users for participant selection. Theoretical analysis and extensive simulations demonstrate that SACRM can efficiently improve the crowdsourcing utility and effectively stimulate the participants to improve the quality of their sensing reports. | As an emerging information collection mechanism, crowdsourcing has been extensively studied in mobile sensing. Most of the related works focus on studying the incentive mechanisms to stimulate the participation of mobile users for crowdsourcing @cite_21 @cite_4 @cite_1 @cite_10 @cite_11 @cite_12 . | {
"cite_N": [
"@cite_4",
"@cite_21",
"@cite_1",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2105385568",
"1970756365",
"2058911993",
"2074053300",
"2058210123",
"2122005484"
],
"abstract": [
"Participatory sensing (PS) systems rely on the willingness of mobile users to participate in the collection and reporting of data using a variety of sensors either embedded or integrated in their cellular phones. However, this new data collection paradigm has not been very successful yet mainly because of the lack of incentives for participation. Although several incentive schemes have been proposed to encourage user participation, none has used location information and imposed budget and coverage constraints, which will make the scheme more realistic and efficient. We propose a recurrent reverse auction incentive mechanism with a greedy algorithm that selects a representative subset of the users according to their location given a fixed budget. Compared to existing mechanisms, our incentive scheme improves the area covered by more than 60 percent acquiring a more representative set of samples after every round while maintaining the same number of active users in the system and spending the same budget.",
"Mobile phone sensing is a new paradigm which takes advantage of the pervasive smartphones to collect and analyze data beyond the scale of what was previously possible. In a mobile phone sensing system, the platform recruits smartphone users to provide sensing service. Existing mobile phone sensing applications and systems lack good incentive mechanisms that can attract more user participation. To address this issue, we design incentive mechanisms for mobile phone sensing. We consider two system models: the platform-centric model where the platform provides a reward shared by participating users, and the user-centric model where users have more control over the payment they will receive. For the platform-centric model, we design an incentive mechanism using a Stackelberg game, where the platform is the leader while the users are the followers. We show how to compute the unique Stackelberg Equilibrium, at which the utility of the platform is maximized, and none of the users can improve its utility by unilaterally deviating from its current strategy. For the user-centric model, we design an auction-based incentive mechanism, which is computationally efficient, individually rational, profitable, and truthful. Through extensive simulations, we evaluate the performance and validate the theoretical properties of our incentive mechanisms.",
"User participation is one of the most important elements in participatory sensing application for providing adequate level of service quality. However, incentive mechanism and its economic model for user participation have been less addressed so far in this research domain. This paper studies the economic model of user participation incentive in participatory sensing applications. To stimulate user participation, we design and evaluate a novel reverse auction based dynamic pricing incentive mechanism where users can sell their sensing data to a service provider with users' claimed bid prices. The proposed incentive mechanism focuses on minimizing and stabilizing the incentive cost while maintaining adequate level of participants by preventing users from dropping out of participatory sensing applications. Compared with random selection based fixed pricing incentive mechanism, the proposed mechanism not only reduces the incentive cost for retaining the same number of participants but also improves the fairness of incentive distribution and social welfare. It also helps us to achieve the geographically balanced sensing measurements and, more importantly, can remove the burden of accurate price decision for user data that is the most difficult step in designing incentive mechanism.",
"Participatory sensing has emerged recently as a promising approach to large-scale data collection. However, without incentives for users to regularly contribute good quality data, this method is unlikely to be viable in the long run. In this paper, we link incentive to users' demand for consuming compelling services, as an approach complementary to conventional credit or reputation based approaches. With this demand-based principle, we design two incentive schemes, Incentive with Demand Fairness (IDF) and Iterative Tank Filling (ITF), for maximizing fairness and social welfare, respectively. Our study shows that the IDF scheme is max-min fair and can score close to 1 on the Jain's fairness index, while the ITF scheme maximizes social welfare and achieves a unique Nash equilibrium which is also Pareto and globally optimal. We adopted a game theoretic approach to derive the optimal service demands. Furthermore, to address practical considerations, we use a stochastic programming technique to handle uncertainty that is often encountered in real life situations.",
"The idea of social participatory sensing provides a substrate to benefit from friendship relations in recruiting a critical mass of participants willing to attend in a sensing campaign. However, the selection of suitable participants who are trustable and provide high quality contributions is challenging. In this paper, we propose a recruitment framework for social participatory sensing. Our framework leverages multi-hop friendship relations to identify and select suitable and trustworthy participants among friends or friends of friends, and finds the most trustable paths to them. The framework also includes a suggestion component which provides a cluster of suggested friends along with the path to them, which can be further used for recruitment or friendship establishment. Simulation results demonstrate the efficacy of our proposed recruitment framework in terms of selecting a large number of well-suited participants and providing contributions with high overall trust, in comparison with one-hop recruitment architecture.",
"Fueled by the widespread adoption of sensor-enabled smartphones, mobile crowdsourcing is an area of rapid innovation. Many crowd-powered sensor systems are now part of our daily life -- for example, providing highway congestion information. However, participation in these systems can easily expose users to a significant drain on already limited mobile battery resources. For instance, the energy burden of sampling certain sensors (such as WiFi or GPS) can quickly accumulate to levels users are unwilling to bear. Crowd system designers must minimize the negative energy side-effects of participation if they are to acquire and maintain large-scale user populations. To address this challenge, we propose Piggyback CrowdSensing (PCS), a system for collecting mobile sensor data from smartphones that lowers the energy overhead of user participation. Our approach is to collect sensor data by exploiting Smartphone App Opportunities -- that is, those times when smartphone users place phone calls or use applications. In these situations, the energy needed to sense is lowered because the phone need no longer be woken from an idle sleep state just to collect data. Similar savings are also possible when the phone either performs local sensor computation or uploads the data to the cloud. To efficiently use these sporadic opportunities, PCS builds a lightweight, user-specific prediction model of smartphone app usage. PCS uses this model to drive a decision engine that lets the smartphone locally decide which app opportunities to exploit based on expected energy quality trade-offs. We evaluate PCS by analyzing a large-scale dataset (containing 1,320 smartphone users) and building an end-to-end crowdsourcing application that constructs an indoor WiFi localization database. 
Our findings show that PCS can effectively collect large-scale mobile sensor datasets (e.g., accelerometer, GPS, audio, image) from users while using less energy (up to 90% depending on the scenario) compared to a representative collection of existing approaches."
]
} |
1411.7416 | 2160944939 | Mobile sensing has become a promising paradigm for mobile users to obtain information by task crowdsourcing. However, due to the social preferences of mobile users, the quality of sensing reports may be impacted by the underlying social attributes and selfishness of individuals. Therefore, it is crucial to consider the social impacts and trustworthiness of mobile users when selecting task participants in mobile sensing. In this paper, we propose a Social Aware Crowdsourcing with Reputation Management (SACRM) scheme to select the well-suited participants and allocate the task rewards in mobile sensing. Specifically, we consider the social attributes, task delay and reputation in crowdsourcing and propose a participant selection scheme to choose the well-suited participants for the sensing task under a fixed task budget. A report assessment and rewarding scheme is also introduced to measure the quality of the sensing reports and allocate the task rewards based on the assessed report quality. In addition, we develop a reputation management scheme to evaluate the trustworthiness and cost performance ratio of mobile users for participant selection. Theoretical analysis and extensive simulations demonstrate that SACRM can efficiently improve the crowdsourcing utility and effectively stimulate the participants to improve the quality of their sensing reports. | Dynamic pricing is an effective incentive mechanism widely used in mobile sensing @cite_21 @cite_4 @cite_1 @cite_19 . Yang et al. @cite_21 propose two incentive mechanisms, for platform-centric and user-centric mobile sensing respectively, to stimulate mobile users' participation. For the platform-centric model, they present an incentive mechanism based on a Stackelberg game @cite_20 to maximize the utility of the platform. For the user-centric model, they design an auction-based incentive mechanism that is proved to be computationally efficient, individually rational, profitable and truthful. 
Jaimes et al. @cite_4 propose a recurrent reverse auction incentive mechanism that uses a greedy algorithm to select a representative subset of users according to their locations under a fixed budget. In @cite_1 , the authors develop and evaluate a reverse auction based dynamic pricing incentive mechanism to stimulate mobile users' participation and reduce the incentive cost. Besides dynamic pricing mechanisms, personal demand and social relationships have also been introduced into incentive mechanism design @cite_10 @cite_0 @cite_12 . Luo et al. @cite_10 link the incentive to personal demand for consuming compelling services. Based on this demand principle, two incentive schemes, called Incentive with Demand Fairness (IDF) and Iterative Tank Filling (ITF), are proposed to maximize fairness and social welfare, respectively. | {
"cite_N": [
"@cite_4",
"@cite_21",
"@cite_1",
"@cite_0",
"@cite_19",
"@cite_10",
"@cite_12",
"@cite_20"
],
"mid": [
"2105385568",
"1970756365",
"2058911993",
"2088940490",
"2005406065",
"2074053300",
"2058210123",
"2067064328"
],
"abstract": [
"Participatory sensing (PS) systems rely on the willingness of mobile users to participate in the collection and reporting of data using a variety of sensors either embedded or integrated in their cellular phones. However, this new data collection paradigm has not been very successful yet mainly because of the lack of incentives for participation. Although several incentive schemes have been proposed to encourage user participation, none has used location information and imposed budget and coverage constraints, which will make the scheme more realistic and efficient. We propose a recurrent reverse auction incentive mechanism with a greedy algorithm that selects a representative subset of the users according to their location given a fixed budget. Compared to existing mechanisms, our incentive scheme improves the area covered by more than 60 percent acquiring a more representative set of samples after every round while maintaining the same number of active users in the system and spending the same budget.",
"Mobile phone sensing is a new paradigm which takes advantage of the pervasive smartphones to collect and analyze data beyond the scale of what was previously possible. In a mobile phone sensing system, the platform recruits smartphone users to provide sensing service. Existing mobile phone sensing applications and systems lack good incentive mechanisms that can attract more user participation. To address this issue, we design incentive mechanisms for mobile phone sensing. We consider two system models: the platform-centric model where the platform provides a reward shared by participating users, and the user-centric model where users have more control over the payment they will receive. For the platform-centric model, we design an incentive mechanism using a Stackelberg game, where the platform is the leader while the users are the followers. We show how to compute the unique Stackelberg Equilibrium, at which the utility of the platform is maximized, and none of the users can improve its utility by unilaterally deviating from its current strategy. For the user-centric model, we design an auction-based incentive mechanism, which is computationally efficient, individually rational, profitable, and truthful. Through extensive simulations, we evaluate the performance and validate the theoretical properties of our incentive mechanisms.",
"User participation is one of the most important elements in participatory sensing application for providing adequate level of service quality. However, incentive mechanism and its economic model for user participation have been less addressed so far in this research domain. This paper studies the economic model of user participation incentive in participatory sensing applications. To stimulate user participation, we design and evaluate a novel reverse auction based dynamic pricing incentive mechanism where users can sell their sensing data to a service provider with users' claimed bid prices. The proposed incentive mechanism focuses on minimizing and stabilizing the incentive cost while maintaining adequate level of participants by preventing users from dropping out of participatory sensing applications. Compared with random selection based fixed pricing incentive mechanism, the proposed mechanism not only reduces the incentive cost for retaining the same number of participants but also improves the fairness of incentive distribution and social welfare. It also helps us to achieve the geographically balanced sensing measurements and, more importantly, can remove the burden of accurate price decision for user data that is the most difficult step in designing incentive mechanism.",
"This paper studies economic models of user participation incentive in participatory sensing applications. User participation is the most important element in participatory sensing applications for providing adequate level of service quality. However, incentive mechanism and its economic model for user participation have never been addressed so far in this research domain. In order to stimulate user participation, we design and evaluate a novel Reverse Auction based Dynamic Price (RADP) incentive mechanism, where users can sell their sensing data to a service provider with users' claimed bid prices. The proposed incentive mechanism focuses on minimizing and stabilizing incentive cost while maintaining adequate number of participants by preventing users from dropping out of participatory sensing applications. Compared with a Random Selection with Fixed Price (RSFP) incentive mechanism, the proposed mechanism not only reduces the incentive cost for retaining the same number of participants by more than 60% but also improves the fairness of incentive distribution and social welfare. More importantly, RADP can remove the burden of accurate pricing for user sensing data, the most difficult step in RSFP.",
"Participatory sensing has emerged as a novel paradigm for data collection and collective knowledge formation about a state or condition of interest, sometimes linked to a geographic area. In this paper, we address the problem of incentive mechanism design for data contributors for participatory sensing applications. The service provider receives service queries in an area from service requesters and initiates an auction for user participation. Upon request, each user reports its perceived cost per unit of amount of participation, which essentially maps to a requested amount of compensation for participation. The participation cost quantifies the dissatisfaction caused to user due to participation. This cost is considered to be private information for each device, as it strongly depends on various factors inherent to it, such as the energy cost for sensing, data processing and transmission to the closest point of wireless access, the residual battery level, the number of concurrent jobs at the device processor, the required bandwidth to transmit data and the related charges of the mobile network operator, or even the user discomfort due to manual effort to submit data. Hence, participants have strong motive to mis-report their cost, i.e. declare a higher cost that the actual one, so as to obtain higher payment. We seek a mechanism for user participation level determination and payment allocation which is most viable for the provider, that is, it minimizes the total cost of compensating participants, while delivering a certain quality of experience to service requesters. We cast the problem in the context of optimal reverse auction design, and we show how the different quality of submitted information by participants can be tracked by the service provider and used in the participation level and payment selection procedures. 
We derive a mechanism that optimally solves the problem above, and at the same time it is individually rational (i.e., it motivates users to participate) and incentive-compatible (i.e. it motivates truthful cost reporting by participants). Finally, a representative participatory sensing case study involving parameter estimation is presented, which exemplifies the incentive mechanism above.",
"Participatory sensing has emerged recently as a promising approach to large-scale data collection. However, without incentives for users to regularly contribute good quality data, this method is unlikely to be viable in the long run. In this paper, we link incentive to users' demand for consuming compelling services, as an approach complementary to conventional credit or reputation based approaches. With this demand-based principle, we design two incentive schemes, Incentive with Demand Fairness (IDF) and Iterative Tank Filling (ITF), for maximizing fairness and social welfare, respectively. Our study shows that the IDF scheme is max-min fair and can score close to 1 on the Jain's fairness index, while the ITF scheme maximizes social welfare and achieves a unique Nash equilibrium which is also Pareto and globally optimal. We adopted a game theoretic approach to derive the optimal service demands. Furthermore, to address practical considerations, we use a stochastic programming technique to handle uncertainty that is often encountered in real life situations.",
"The idea of social participatory sensing provides a substrate to benefit from friendship relations in recruiting a critical mass of participants willing to attend in a sensing campaign. However, the selection of suitable participants who are trustable and provide high quality contributions is challenging. In this paper, we propose a recruitment framework for social participatory sensing. Our framework leverages multi-hop friendship relations to identify and select suitable and trustworthy participants among friends or friends of friends, and finds the most trustable paths to them. The framework also includes a suggestion component which provides a cluster of suggested friends along with the path to them, which can be further used for recruitment or friendship establishment. Simulation results demonstrate the efficacy of our proposed recruitment framework in terms of selecting a large number of well-suited participants and providing contributions with high overall trust, in comparison with one-hop recruitment architecture.",
"This survey provides a structured and comprehensive overview of research on security and privacy in computer and communication networks that use game-theoretic approaches. We present a selected set of works to highlight the application of game theory in addressing different forms of security and privacy problems in computer networks and mobile applications. We organize the presented works in six main categories: security of the physical and MAC layers, security of self-organizing networks, intrusion detection systems, anonymity and privacy, economics of network security, and cryptography. In each category, we identify security problems, players, and game models. We summarize the main results of selected works, such as equilibrium analysis and security mechanism designs. In addition, we provide a discussion on the advantages, drawbacks, and future direction of using game theory in this field. In this survey, our goal is to instill in the reader an enhanced understanding of different research approaches in applying game-theoretic methods to network security. This survey can also help researchers from various fields develop game-theoretic solutions to current and emerging security problems in computer networking."
]
} |
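The budget-constrained selection at the heart of the reverse-auction mechanisms above (e.g. the Jaimes et al. scheme) can be illustrated with a small greedy routine: each bidder submits a price and a set of covered locations, and the platform repeatedly recruits the affordable bidder with the best marginal-coverage-per-cost ratio. This is a simplified sketch under assumed data structures, not the exact algorithm from any of the cited papers.

```python
def greedy_select(users, budget):
    """Greedy budgeted coverage: repeatedly pick the affordable user with
    the highest marginal-coverage-to-bid ratio until no candidate remains.
    `users` maps user id -> (bid, set_of_covered_cells)."""
    covered, chosen, spent = set(), [], 0.0
    remaining = dict(users)
    while remaining:
        best, best_ratio = None, 0.0
        for uid, (bid, cells) in remaining.items():
            gain = len(cells - covered)  # cells this user would newly cover
            if spent + bid <= budget and gain > 0 and gain / bid > best_ratio:
                best, best_ratio = uid, gain / bid
        if best is None:
            break  # nobody affordable adds coverage
        bid, cells = remaining.pop(best)
        chosen.append(best)
        covered |= cells
        spent += bid
    return chosen, covered, spent
```

The marginal-gain recomputation on every round is what distinguishes this from a one-shot ranking by price: a cheap user whose cells are already covered contributes nothing and is skipped.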
1411.7416 | 2160944939 | Mobile sensing has become a promising paradigm for mobile users to obtain information by task crowdsourcing. However, due to the social preferences of mobile users, the quality of sensing reports may be impacted by the underlying social attributes and selfishness of individuals. Therefore, it is crucial to consider the social impacts and trustworthiness of mobile users when selecting task participants in mobile sensing. In this paper, we propose a Social Aware Crowdsourcing with Reputation Management (SACRM) scheme to select the well-suited participants and allocate the task rewards in mobile sensing. Specifically, we consider the social attributes, task delay and reputation in crowdsourcing and propose a participant selection scheme to choose the well-suited participants for the sensing task under a fixed task budget. A report assessment and rewarding scheme is also introduced to measure the quality of the sensing reports and allocate the task rewards based on the assessed report quality. In addition, we develop a reputation management scheme to evaluate the trustworthiness and cost performance ratio of mobile users for participant selection. Theoretical analysis and extensive simulations demonstrate that SACRM can efficiently improve the crowdsourcing utility and effectively stimulate the participants to improve the quality of their sensing reports. | Most existing incentive mechanisms are effective in stimulating user participation; however, data assessment and reputation management are also critical for evaluating the trustworthiness of sensing data and mobile users @cite_13 @cite_16 @cite_8 @cite_7 @cite_14 @cite_23 . Zhang et al. @cite_13 propose a robust trajectory estimation strategy, called TrMCD, to alleviate the negative influence of abnormal crowdsourced user trajectories, to identify normal and abnormal users, and to mitigate the impact of spatially unbalanced crowdsourced trajectories. 
Huang et al. @cite_23 employ the Gompertz function @cite_29 to compute a device reputation score and evaluate the trustworthiness of the contributed data. Since reputation scores associated with specific contributions can be used to identify participants, privacy issues are highlighted in the reputation system design of mobile sensing @cite_15 @cite_16 @cite_14 . Wang et al. @cite_15 propose a privacy-preserving reputation framework, based on blind signatures, to evaluate the trustworthiness of the sensing reports and the participants. Christin et al. @cite_16 propose an anonymous reputation framework, called IncogniSense, which generates periodic pseudonyms using blind signatures and transfers reputation between these pseudonyms. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_29",
"@cite_23",
"@cite_15",
"@cite_16",
"@cite_13"
],
"mid": [
"1985702879",
"2035315383",
"2003376982",
"2054883928",
"2052880471",
"1994123014",
"2164146906",
"2166114007"
],
"abstract": [
"Social participatory sensing is a newly proposed paradigm that tries to address the limitations of participatory sensing by leveraging online social networks as an infrastructure. A critical issue in the success of this paradigm is to assure the trustworthiness of contributions provided by participants. In this paper, we propose an application-agnostic reputation framework for social participatory sensing systems. Our framework considers both the quality of contribution and the trustworthiness level of participant within the social network. These two aspects are then combined via a fuzzy inference system to arrive at a final trust rating for a contribution. A reputation score is also calculated for each participant as a resultant of the trust ratings assigned to him. We adopt the utilization of PageRank algorithm as the building block for our reputation module. Extensive simulations demonstrate the efficacy of our framework in achieving high overall trust and assigning accurate reputation scores.",
"Leveraging social networks as an underlying infrastructure for participatory sensing systems provides an effective means to have access to a reasonable number of participants, which is essential for the success of this new and exciting paradigm. Another important issue is assessing the quality of the contributions prepared by the participants, who are, on the other hand, the social network members. In this paper, we propose a trust framework for social participatory sensing systems. Our framework is aimed at quantifying the trustworthiness of contributions by considering the quality of the raw sensor readings contributed and the trustworthiness of the user contributing the sensor data, and combining them via a fuzzy inference engine to arrive at a final trust score for a contribution. It also assigns a reputation score to each user. Extensive simulations demonstrate the efficacy of our framework.",
"Participatory sensing is a revolutionary paradigm in which volunteers collect and share information from their local environment using mobile phones. Different from other participatory sensing application challenges who consider user privacy and data trustworthiness, we consider network trustworthiness problem namely Sybil attacks in participatory sensing. Sybil attacks focus on creating multiple online user identities called Sybil identities and try to achieve malicious results through these identities. In this paper, we proposed a Cloud based Trust Management Scheme (CbTMS) framework for detecting Sybil attacks in participatory sensing network. Our CbTMS was proposed for performing Sybil attack characteristic check and trustworthiness management system to verify coverage nodes in the participatory sensing. To verify the proposed framework, we are currently developing the proposed scheme on OMNeT++ network simulator in multiple scenarios to achieve Sybil identities detection in our simulation environment.",
"Mineralization of 14C-labeled tracers is a common way of studying the environmental fate of xenobiotics, but it can be difficult to extract relevant kinetic parameters from such experiments since complex kinetic functions or several kinetic functions may be needed to adequately describe large data sets. In this study, we suggest using a two-parameter, sigmoid Gompertz function for parametrizing mineralization curves. The function was applied to a data set of 252 normalized mineralization curves that represented the potential for degradation of the herbicide MCPA in three horizons of an agricultural soil. The Gompertz function fitted most of the normalized curves, and trends in the data set could be visualized by a scatter plot of the two Gompertz parameters (rate constant and time delay). For agricultural topsoil, we also tested the effect of the MCPA concentration on the mineralization kinetics. Reduced initial concentrations lead to shortened lag-phases, probably due to reduced need for bacterial growth...",
"Participatory sensing is a revolutionary new paradigm in which volunteers collect and share information from their local environment using mobile phones. The inherent openness of this platform makes it easy to contribute corrupted data. This paper proposes a novel reputation system that employs the Gompertz function for computing device reputation score as a reflection of the trustworthiness of the contributed data. We implement this system in the context of a participatory noise monitoring application and conduct extensive real-world experiments using Apple iPhones. Experimental results demonstrate that our scheme achieves three-fold improvement in comparison with the state-of-the-art Beta reputation scheme.",
"With the proliferation of sensor-embedded mobile computing devices, participatory sensing is becoming popular to collect information from and outsource tasks to participating users. These applications deal with a lot of personal information, e.g., users' identities and locations at a specific time. Therefore, we need to pay a deeper attention to privacy and anonymity. However, from a data consumer's point of view, we want to know the source of the sensing data, i.e., the identity of the sender, in order to evaluate how much the data can be trusted. “Anonymity” and “trust” are two conflicting objectives in participatory sensing networks, and there are no existing research efforts which investigated the possibility of achieving both of them at the same time. In this paper, we propose ARTSense, a framework to solve the problem of “trust without identity” in participatory sensing networks. Our solution consists of a privacy-preserving provenance model, a data trust assessment scheme and an anonymous reputation management protocol. We have shown that ARTSense achieves the anonymity and security requirements. Validations are done to show that we can capture the trust of information and reputation of participants accurately.",
"Reputation systems rate the contributions to participatory sensing campaigns from each user by associating a reputation score. The reputation scores are used to weed out incorrect sensor readings. However, an adversary can deanonmyize the users even when they use pseudonyms by linking the reputation scores associated with multiple contributions. Since the contributed readings are usually annotated with spatiotemporal information, this poses a serious breach of privacy for the users. In this paper, we address this privacy threat by proposing a framework called IncogniSense. Our system utilizes periodic pseudonyms generated using blind signature and relies on reputation transfer between these pseudonyms. The reputation transfer process has an inherent trade-off between anonymity protection and loss in reputation. We investigate by means of extensive simulations several reputation cloaking schemes that address this tradeoff in different ways. Our system is robust against reputation corruption and a prototype implementation demonstrates that the associated overheads are minimal.",
"Crowdsourcing-based mobile applications are becoming more and more prevalent in recent years, as smartphones equipped with various built-in sensors are proliferating rapidly. The large quantity of crowdsourced sensing data stimulates researchers to accomplish some tasks that used to be costly or impossible, yet the quality of the crowdsourced data, which is of great importance, has not received sufficient attention. In reality, the low-quality crowdsourced data are prone to containing outliers that may severely impair the crowdsourcing applications. Thus in this work, we conduct pioneer investigation considering crowdsourced data quality. Specifically, we focus on estimating user motion trajectory information, which plays an essential role in multiple crowdsourcing applications, such as indoor localization, context recognition, indoor navigation, etc. We resort to the family of robust statistics and design a robust trajectory estimation scheme, name TrMCD, which is capable of alleviating the negative influence of abnormal crowdsourced user trajectories, differentiating normal users from abnormal users, and overcoming the challenge brought by spatial unbalance of crowdsourced trajectories. Two real field experiments are conducted and the results show that TrMCD is robust and effective in estimating user motion trajectories and mapping fingerprints to physical locations."
]
} |
1411.7416 | 2160944939 | Mobile sensing has become a promising paradigm for mobile users to obtain information by task crowdsourcing. However, due to the social preferences of mobile users, the quality of sensing reports may be impacted by the underlying social attributes and selfishness of individuals. Therefore, it is crucial to consider the social impacts and trustworthiness of mobile users when selecting task participants in mobile sensing. In this paper, we propose a Social Aware Crowdsourcing with Reputation Management (SACRM) scheme to select the well-suited participants and allocate the task rewards in mobile sensing. Specifically, we consider the social attributes, task delay and reputation in crowdsourcing and propose a participant selection scheme to choose the well-suited participants for the sensing task under a fixed task budget. A report assessment and rewarding scheme is also introduced to measure the quality of the sensing reports and allocate the task rewards based on the assessed report quality. In addition, we develop a reputation management scheme to evaluate the trustworthiness and cost performance ratio of mobile users for participant selection. Theoretical analysis and extensive simulations demonstrate that SACRM can efficiently improve the crowdsourcing utility and effectively stimulate the participants to improve the quality of their sensing reports. | Recently, participant selection has been studied to achieve the optimal crowdsourcing utility @cite_26 @cite_12 . Reddy et al. @cite_26 develop a recruitment framework to enable the data requester to identify well-suited participants for the sensing task based on geographic and temporal availability as well as the participant reputation. The proposed recruitment system approximately maximizes the coverage over a specific area and time period under a limited campaign budget with a greedy algorithm. Amintoosi et al.
@cite_12 propose a recruitment framework for social participatory sensing to identify and select suitable and trustworthy participants in the friend circle, by leveraging multi-hop friendship relations. However, they do not consider the social attributes of mobile users and adaptive rewards allocation, which play a significant role in crowdsourcing design. | {
"cite_N": [
"@cite_26",
"@cite_12"
],
"mid": [
"1553085258",
"2058210123"
],
"abstract": [
"Mobile phones have evolved from devices that are just used for voice and text communication to platforms that are able to capture and transmit a range of data types (image, audio, and location). The adoption of these increasingly capable devices by society has enabled a potentially pervasive sensing paradigm - participatory sensing. A coordinated participatory sensing system engages individuals carrying mobile phones to explore phenomena of interest using in situ data collection. For participatory sensing to succeed, several technical challenges need to be solved. In this paper, we discuss one particular issue: developing a recruitment framework to enable organizers to identify well-suited participants for data collections based on geographic and temporal availability as well as participation habits. This recruitment system is evaluated through a series of pilot data collections where volunteers explored sustainable processes on a university campus.",
"The idea of social participatory sensing provides a substrate to benefit from friendship relations in recruiting a critical mass of participants willing to attend in a sensing campaign. However, the selection of suitable participants who are trustable and provide high quality contributions is challenging. In this paper, we propose a recruitment framework for social participatory sensing. Our framework leverages multi-hop friendship relations to identify and select suitable and trustworthy participants among friends or friends of friends, and finds the most trustable paths to them. The framework also includes a suggestion component which provides a cluster of suggested friends along with the path to them, which can be further used for recruitment or friendship establishment. Simulation results demonstrate the efficacy of our proposed recruitment framework in terms of selecting a large number of well-suited participants and providing contributions with high overall trust, in comparison with one-hop recruitment architecture."
]
} |
1411.7492 | 2951359971 | In this paper we give subexponential size hitting sets for bounded depth multilinear arithmetic formulas. Using the known relation between black-box PIT and lower bounds we obtain lower bounds for these models. For depth-3 multilinear formulas, of size @math , we give a hitting set of size @math . This implies a lower bound of @math for depth-3 multilinear formulas, for some explicit polynomial. For depth-4 multilinear formulas, of size @math , we give a hitting set of size @math . This implies a lower bound of @math for depth-4 multilinear formulas, for some explicit polynomial. A regular formula consists of alternating layers of @math gates, where all gates at layer @math have the same fan-in. We give a hitting set of size (roughly) @math , for regular depth- @math multilinear formulas of size @math , where @math . This result implies a lower bound of roughly @math for such formulas. We note that better lower bounds are known for these models, but also that none of these bounds was achieved via construction of a hitting set. Moreover, no lower bound that implies such PIT results, even in the white-box model, is currently known. Our results are combinatorial in nature and rely on reducing the underlying formula, first to a depth-4 formula, and then to a read-once algebraic branching program (from depth-3 formulas we go straight to read-once algebraic branching programs). | Lower bounds for the multilinear model were first proved by Nisan and Wigderson @cite_28 , who gave exponential lower bounds for depth- @math formulas. Raz first proved quasi-polynomial lower bounds for multilinear formulas computing the Determinant and Permanent polynomials @cite_38 and later gave a separation between multilinear @math and multilinear @math @cite_27 . Raz and Yehudayoff proved a lower bound of @math for depth- @math multilinear formulas.
As in the general case, the depth reduction techniques of @cite_33 @cite_23 @cite_0 @cite_8 also work for multilinear formulas. Thus, proving a lower bound of the form @math for @math multilinear formulas, would imply a super-polynomial lower bound for multilinear circuits. Currently, the best lower bound for syntactic multilinear circuits is @math by Raz, Shpilka and Yehudayoff @cite_26 . | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_33",
"@cite_8",
"@cite_28",
"@cite_0",
"@cite_27",
"@cite_23"
],
"mid": [
"2023541349",
"2018202033",
"2081256023",
"",
"2171403864",
"2016576580",
"2110206454",
"2084050956"
],
"abstract": [
"An arithmetic formula is multilinear if the polynomial computed by each of its subformulas is multilinear. We prove that any multilinear arithmetic formula for the permanent or the determinant of an n × n matrix is of size super-polynomial in n. Previously, super-polynomial lower bounds were not known (for any explicit function) even for the special case of multilinear formulas of constant depth.",
"We construct an explicit polynomial @math , with coefficients in @math , such that the size of any syntactically multilinear arithmetic circuit computing @math is at least @math . The lower bound holds over any field.",
"It is shown that any multivariate polynomial of degree d that can be computed sequentially in C steps can be computed in parallel in @math steps using only @math processors.",
"",
"",
"In their paper on the ''chasm at depth four'', Agrawal and Vinay have shown that polynomials in m variables of degree O(m) which admit arithmetic circuits of size 2^(o(m)) also admit arithmetic circuits of depth four and size 2^(o(m)). This theorem shows that for problems such as arithmetic circuit lower bounds or black-box derandomization of identity testing, the case of depth four circuits is in a certain sense the general case. In this paper we show that smaller depth four circuits can be obtained if we start from polynomial size arithmetic circuits. For instance, we show that if the permanent of n × n matrices has circuits of size polynomial in n, then it also has depth 4 circuits of size n^(O(√n log n)). If the original circuit uses only integer constants of polynomial size, then the same is true for the resulting depth four circuit. These results have potential applications to lower bounds and deterministic identity testing, in particular for sums of products of sparse univariate polynomials. We also use our techniques to reprove two results: (i) the existence of nontrivial boolean circuits of constant depth for languages in LOGCFL; (ii) reduction to polylogarithmic depth for arithmetic circuits of polynomial size and polynomially bounded degree.",
"An arithmetic circuit or formula is multilinear if the polynomial computed at each of its wires is multilinear. We give an explicit polynomial f(x1,..., xn) with coeffi- cients in 0, 1 such that over any field: 1. f can be computed by a polynomial-size multilinear circuit of depth O(log 2 n).",
"We show that, over Q, if an n-variate polynomial of degree d = n^(O(1)) is computable by an arithmetic circuit of size s (respectively by an arithmetic branching program of size s) then it can also be computed by a depth three circuit (i.e. a ΣΠΣ-circuit) of size exp(O(√(d log n log d log s))) (respectively of size exp(O(√(d log n log s)))). In particular this yields a ΣΠΣ circuit of size exp(O(√(d log d))) computing the d × d determinant Det_d. It also means that if we can prove a lower bound of exp(ω(√(d log d))) on the size of any ΣΠΣ-circuit computing the d × d permanent Perm_d then we get super-polynomial lower bounds for the size of any arithmetic branching program computing Perm_d. We then give some further results pertaining to derandomizing polynomial identity testing and circuit lower bounds. The ΣΠΣ circuits that we construct have the property that (some of) the intermediate polynomials have degree much higher than d. Indeed such a counterintuitive construction is unavoidable - it is known that in any ΣΠΣ circuit C computing either Det_d or Perm_d, if every multiplication gate has fanin at most d (or any constant multiple thereof) then C must have size at least exp(Ω(d))."
]
} |
1411.7492 | 2951359971 | In this paper we give subexponential size hitting sets for bounded depth multilinear arithmetic formulas. Using the known relation between black-box PIT and lower bounds we obtain lower bounds for these models. For depth-3 multilinear formulas, of size @math , we give a hitting set of size @math . This implies a lower bound of @math for depth-3 multilinear formulas, for some explicit polynomial. For depth-4 multilinear formulas, of size @math , we give a hitting set of size @math . This implies a lower bound of @math for depth-4 multilinear formulas, for some explicit polynomial. A regular formula consists of alternating layers of @math gates, where all gates at layer @math have the same fan-in. We give a hitting set of size (roughly) @math , for regular depth- @math multilinear formulas of size @math , where @math . This result implies a lower bound of roughly @math for such formulas. We note that better lower bounds are known for these models, but also that none of these bounds was achieved via construction of a hitting set. Moreover, no lower bound that implies such PIT results, even in the white-box model, is currently known. Our results are combinatorial in nature and rely on reducing the underlying formula, first to a depth-4 formula, and then to a read-once algebraic branching program (from depth-3 formulas we go straight to read-once algebraic branching programs). | Kayal, Saha and Saptharishi @cite_2 proved a quasi-polynomial lower bounds for regular formulas that have the additional condition that the syntactic degree of the formula is at most twice the degree of the output polynomial. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2069546133"
],
"abstract": [
"We consider arithmetic formulas consisting of alternating layers of addition (+) and multiplication (×) gates such that the fanin of all the gates in any fixed layer is the same. Such a formula Φ which additionally has the property that its formal syntactic degree is at most twice the (total) degree of its output polynomial, we refer to as a regular formula. As usual, we allow arbitrary constants from the underlying field F on the incoming edges to a + gate so that a + gate can in fact compute an arbitrary F-linear combination of its inputs. We show that there is an (n^2 + 1)-variate polynomial of degree 2n in VNP such that any regular formula computing it must be of size at least n^(Ω(log n)). Along the way, we examine depth four (ΣΠΣΠ) regular formulas wherein all multiplication gates in the layer adjacent to the inputs have fanin a and all multiplication gates in the layer adjacent to the output node have fanin b. We refer to such formulas as ΣΠ[b]ΣΠ[a]-formulas. We show that there exists an n^2-variate polynomial of degree n in VNP such that any ΣΠ[O(√n)]ΣΠ[√n]-formula computing it must have top fan-in at least 2^(Ω(√n·log n)). In comparison, Tavenas [Tav13] has recently shown that every n^(O(1))-variate polynomial of degree n in VP admits a ΣΠ[O(√n)]ΣΠ[√n]-formula of top fan-in 2^(O(√n·log n)). This means that any further asymptotic improvement in our lower bound for such formulas (to say 2^(ω(√n log n))) will imply that VP is different from VNP."
]
} |
1411.7766 | 2949886837 | Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts. | Extracting hand-crafted features at pre-defined landmarks has become a standard step in attribute recognition @cite_30 @cite_6 @cite_24 @cite_13 . Kumar et al. @cite_29 extracted HOG-like features on various face regions to tackle attribute classification and face verification. To improve the discriminativeness of hand-crafted features given a specific task, Bourdev et al. @cite_10 built a three-level SVM system to extract higher-level information.
Deep learning @cite_9 @cite_27 @cite_19 @cite_16 @cite_32 @cite_43 @cite_39 @cite_41 @cite_17 recently achieved great success in attribute prediction, due to its ability to learn compact and discriminative features. Razavian et al. @cite_9 and Donahue et al. @cite_40 demonstrated that off-the-shelf features learned by CNN of ImageNet @cite_2 can be effectively adapted to attribute classification. Zhang et al. @cite_18 showed that better performance can be achieved by ensembling learned features of multiple pose-normalized CNNs. The main drawback of these methods is that they rely on accurate landmark detection and pose estimation in both training and testing steps. Even though a recent work @cite_14 can perform automatic part localization during test, it still requires landmark annotations of the training data. | {
"cite_N": [
"@cite_30",
"@cite_41",
"@cite_29",
"@cite_43",
"@cite_2",
"@cite_10",
"@cite_18",
"@cite_39",
"@cite_17",
"@cite_32",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_40",
"@cite_16",
"@cite_14",
"@cite_9",
"@cite_24",
"@cite_13"
],
"mid": [
"2098411764",
"",
"",
"",
"1994488211",
"",
"2147414309",
"",
"",
"",
"",
"",
"",
"2953360861",
"",
"1899185266",
"2062118960",
"",
""
],
"abstract": [
"We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.",
"",
"",
"",
"Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.",
"",
"We propose a method for inferring human attributes (such as gender, hair style, clothes style, expression, action) from images of people under large variation of viewpoint, pose, appearance, articulation and occlusion. Convolutional Neural Nets (CNN) have been shown to perform very well on large scale object recognition problems. In the context of attribute classification, however, the signal is often subtle and it may cover only a small part of the image, while the image is dominated by the effects of pose and viewpoint. Discounting for pose variation would require training on very large labeled datasets which are not presently available. Part-based models, such as poselets [4] and DPM [12] have been shown to perform well for this problem but they are limited by shallow low-level features. We propose a new method which combines part-based models and deep learning by training pose-normalized CNNs. We show substantial improvement vs. state-of-the-art methods on challenging attribute classification tasks in unconstrained settings. Experiments confirm that our method outperforms both the best part-based methods on this problem and conventional CNNs trained on the full bounding box of the person.",
"",
"",
"",
"",
"",
"",
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"",
"With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.",
"Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.",
"",
""
]
} |
1411.6447 | 2952429010 | Fine-grained classification is challenging because categories can only be discriminated by subtle and local differences. Variances in the pose, scale or rotation usually make the problem more difficult. Most fine-grained classification systems follow the pipeline of finding foreground object or object parts (where) to extract discriminative features (what). In this paper, we propose to apply visual attention to fine-grained classification task using deep neural network. Our pipeline integrates three types of attention: the bottom-up attention that proposes candidate patches, the object-level top-down attention that selects relevant patches to a certain object, and the part-level top-down attention that localizes discriminative parts. We combine these attentions to train domain-specific deep nets, then use it to improve both the what and where aspects. Importantly, we avoid using expensive annotations like bounding box or part information from end-to-end. The weak supervision constraint makes our work easier to generalize. We have verified the effectiveness of the method on the subsets of ILSVRC2012 dataset and CUB200_2011 dataset. Our pipeline delivered significant improvements and achieved the best accuracy under the weakest supervision condition. The performance is competitive against other methods that rely on additional annotations. | Our work is also closely related to the recently proposed object detection method (R-CNN) based on CNN features @cite_18 . R-CNN works by first proposing thousands of candidate bounding boxes for each image via some bottom-up attention model @cite_17 @cite_2 , then selecting the bounding boxes with high classification scores as detection results. Based on R-CNN, Zhang et al. proposed Part-based R-CNN @cite_12 to utilize deep convolutional networks for part detection.
"cite_N": [
"@cite_18",
"@cite_2",
"@cite_12",
"@cite_17"
],
"mid": [
"2102605133",
"2010181071",
"",
"2088049833"
],
"abstract": [
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"Training a generic objectness measure to produce a small set of candidate object windows, has been shown to speed up the classical sliding window object detection paradigm. We observe that generic objects with well-defined closed boundary can be discriminated by looking at the norm of gradients, with a suitable resizing of their corresponding image windows into a small fixed size. Based on this observation and computational reasons, we propose to resize the window to 8 × 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g. ADD, BITWISE SHIFT, etc.). Experiments on the challenging PASCAL VOC 2007 dataset show that our method efficiently (300fps on a single laptop CPU) generates a small set of category-independent, high quality object windows, yielding 96.2% object detection rate (DR) with 1,000 proposals. Increasing the numbers of proposals and color spaces for computing BING features, our performance can be further improved to 99.5% DR.",
"",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )."
]
} |
1411.6370 | 1767300630 | Explosive growth in data and availability of cheap computing resources have sparked increasing interest in Big learning, an emerging subfield that studies scalable machine learning algorithms, systems, and applications with Big Data. Bayesian methods represent one important class of statistic methods for machine learning, with substantial recent developments on adaptive, flexible and scalable Bayesian learning. This article provides a survey of the recent advances in Big learning with Bayesian methods, termed Big Bayesian Learning, including nonparametric Bayesian methods for adaptively inferring model complexity, regularized Bayesian inference for improving the flexibility via posterior regularization, and scalable algorithms and systems based on stochastic subsampling and distributed computing for dealing with large-scale applications. | Sparsity regularization has been very effective in controlling model complexity as well as identifying important factors with parsimonious estimates @cite_110 @cite_224 when learning in high-dimensional spaces. Though such regularized estimates can be viewed as finding the maximum a posteriori (MAP) estimates of a Bayesian model, they are not truly Bayesian. A Bayes estimator optimizes the Bayes risk, which is the expectation of a loss averaged over the posterior distribution. Furthermore, a Bayesian approach takes the uncertainty into consideration by inferring the entire posterior distribution, rather than a single point. Popular sparse Bayesian methods include those using spike-and-slab priors @cite_78 @cite_203 , those using adaptive shrinkage with heavily-tailed priors (e.g., a Laplace prior) @cite_186 , and the methods using model space search. We refer the readers to @cite_106 for a nice review. The recent work @cite_108 presents a Bayesian variable selection method with strong selection consistency. | {
"cite_N": [
"@cite_224",
"@cite_78",
"@cite_108",
"@cite_186",
"@cite_106",
"@cite_110",
"@cite_203"
],
"mid": [
"2952139899",
"1999974018",
"1981299323",
"1982652137",
"2049228615",
"2169103656",
"1969415786"
],
"abstract": [
"Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present from a general perspective optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted @math -penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.",
"Abstract This article is concerned with the selection of subsets of predictor variables in a linear regression model for the prediction of a dependent variable. It is based on a Bayesian approach, intended to be as objective as possible. A probability distribution is first assigned to the dependent variable through the specification of a family of prior distributions for the unknown parameters in the regression model. The method is not fully Bayesian, however, because the ultimate choice of prior distribution from this family is affected by the data. It is assumed that the predictors represent distinct observables; the corresponding regression coefficients are assigned independent prior distributions. For each regression coefficient subject to deletion from the model, the prior distribution is a mixture of a point mass at 0 and a diffuse uniform distribution elsewhere, that is, a “spike and slab” distribution. The random error component is assigned a normal distribution with mean 0 and standard deviation ...",
"We consider a Bayesian approach to variable selection in the presence of high dimensional covariates based on a hierarchical model that places prior distributions on the regression coefficients as well as on the model space. We adopt the well-known spike and slab Gaussian priors with a distinct feature, that is, the prior variances depend on the sample size through which appropriate shrinkage can be achieved. We show the strong selection consistency of the proposed method in the sense that the posterior probability of the true model converges to one even when the number of covariates grows nearly exponentially with the sample size. This is arguably the strongest selection consistency result that has been available in the Bayesian variable selection literature; yet the proposed method can be carried out through posterior sampling with a simple Gibbs sampler. Furthermore, we argue that the proposed method is asymptotically similar to model selection with the @math penalty. We also demonstrate through empirical work the fine performance of the proposed approach relative to some state of the art alternatives.",
"The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (i.e., double-exponential) priors. Gibbs sampling from this posterior is possible using an expanded hierarchy with conjugate normal priors for the regression parameters and independent exponential priors on their variances. A connection with the inverse-Gaussian distribution provides tractable full conditional distributions. The Bayesian Lasso provides interval estimates (Bayesian credible intervals) that can guide variable selection. Moreover, the structure of the hierarchical model provides both Bayesian and likelihood methods for selecting the Lasso parameter. Slight modifications lead to Bayesian versions of other Lasso-related estimation methods, including bridge regression and a robust variant.",
"The selection of variables in regression problems has occupied the minds of many statisticians. Several Bayesian variable selection methods have been developed, and we concentrate on the following methods: Kuo & Mallick, Gibbs Variable Selection (GVS), Stochastic Search Variable Selection (SSVS), adaptive shrinkage with Jefireys' prior or a Laplacian prior, and reversible jump MCMC. We review these methods, in the context of their difierent properties. We then implement the methods in BUGS, using both real and simulated data as examples, and investigate how the difierent methods perform in practice. Our results suggest that SSVS, reversible jump MCMC and adaptive shrinkage methods can all work well, but the choice of which method is better will depend on the priors that are used, and also on how they are implemented.",
"High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of pe- nalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality. They have been widely applied for simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief ac- count of the recent developments of theory, methods, and implementations for high dimensional variable selection. What limits of the dimensionality such methods can handle, what the role of penalty functions is, and what the statistical properties are rapidly drive the advances of the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods.",
"Variable selection in the linear regression model takes many apparent faces from both frequentist and Bayesian standpoints. In this paper we introduce a variable selection method referred to as a rescaled spike and slab model. We study the importance of prior hierarchical specifications and draw connections to frequentist generalized ridge regression estimation. Specifically, we study the usefulness of continuous bimodal priors to model hypervariance parameters, and the effect scaling has on the posterior mean through its relationship to penalization. Several model selection strategies, some frequentist and some Bayesian in nature, are developed and studied theoretically. We demonstrate the importance of selective shrinkage for effective variable selection in terms of risk misclassification, and show this is achieved using the posterior from a rescaled spike and slab model. We also show how to verify a procedure's ability to reduce model uncertainty in finite samples using a specialized forward selection strategy. Using this tool, we illustrate the effectiveness of rescaled spike and slab models in reducing model uncertainty."
]
} |
1411.6370 | 1767300630 | Explosive growth in data and availability of cheap computing resources have sparked increasing interest in Big learning, an emerging subfield that studies scalable machine learning algorithms, systems, and applications with Big Data. Bayesian methods represent one important class of statistic methods for machine learning, with substantial recent developments on adaptive, flexible and scalable Bayesian learning. This article provides a survey of the recent advances in Big learning with Bayesian methods, termed Big Bayesian Learning, including nonparametric Bayesian methods for adaptively inferring model complexity, regularized Bayesian inference for improving the flexibility via posterior regularization, and scalable algorithms and systems based on stochastic subsampling and distributed computing for dealing with large-scale applications. | Bayesian optimization (BO) @cite_54 aims to optimize some objective that may be expensive to evaluate or may not have easily available derivatives, with successful applications in robotics, planning, recommendation, advertising, and automatic algorithm configuration. In the Big Data era, learning models are becoming incredibly huge @cite_43 , and the learning algorithms often have tuning parameters (e.g., the learning rates of SGD algorithms). Manually tuning such hyper-parameters is often prohibitive. Recent progress has been made on practical BO methods that automatically select good parameters based on Gaussian processes, using a multi-core parallel Monte Carlo algorithm @cite_111 or a stochastic variational inference method @cite_120 . To deal with the challenge of learning in high-dimensional spaces, the work @cite_40 presents a random embedding BO algorithm. | {
"cite_N": [
"@cite_54",
"@cite_120",
"@cite_43",
"@cite_40",
"@cite_111"
],
"mid": [
"2950338507",
"1579064899",
"2950789693",
"1871676304",
"2950182411"
],
"abstract": [
"",
"We introduce a means of automating machine learning (ML) for big data tasks, by performing scalable stochastic Bayesian optimisation of ML algorithm parameters and hyper-parameters. More often than not, the critical tuning of ML algorithm parameters has relied on domain expertise from experts, along with laborious hand-tuning, brute search or lengthy sampling runs. Against this background, Bayesian optimisation is finding increasing use in automating parameter tuning, making ML algorithms accessible even to non-experts. However, the state of the art in Bayesian optimisation is incapable of scaling to the large number of evaluations of algorithm performance required to fit realistic models to complex, big data. We here describe a stochastic, sparse, Bayesian optimisation strategy to solve this problem, using many thousands of noisy evaluations of algorithm performance on subsets of data in order to effectively train algorithms for big data. We provide a comprehensive benchmarking of possible sparsification strategies for Bayesian optimisation, concluding that a Nystrom approximation offers the best scaling and performance for real tasks. Our proposed algorithm demonstrates substantial improvement over the state of the art in tuning the parameters of a Gaussian Process time series prediction task on real, big data.",
"We consider the problem of building high- level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 bil- lion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a clus- ter with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental re- sults reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bod- ies. Starting with these learned features, we trained our network to obtain 15.8 accu- racy in recognizing 20,000 object categories from ImageNet, a leap of 70 relative im- provement over the previous state-of-the-art.",
"Bayesian optimization techniques have been successfully applied to robotics, planning, sensor placement, recommendation, advertising, intelligent user interfaces and automatic algorithm configuration. Despite these successes, the approach is restricted to problems of moderate dimension, and several workshops on Bayesian optimization have identified its scaling to high dimensions as one of the holy grails of the field. In this paper, we introduce a novel random embedding idea to attack this problem. The resulting Random EMbedding Bayesian Optimization (REMBO) algorithm is very simple and applies to domains with both categorical and continuous variables. The experiments demonstrate that REMBO can effectively solve high-dimensional problems, including automatic parameter configuration of a popular mixed integer linear programming solver.",
"Machine learning algorithms frequently require careful tuning of model hyperparameters, regularization terms, and optimization parameters. Unfortunately, this tuning is often a \"black art\" that requires expert experience, unwritten rules of thumb, or sometimes brute-force search. Much more appealing is the idea of developing automatic approaches which can optimize the performance of a given learning algorithm to the task at hand. In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). The tractable posterior distribution induced by the GP leads to efficient use of the information gathered by previous experiments, enabling optimal choices about what parameters to try next. Here we show how the effects of the Gaussian process prior and the associated inference procedure can have a large impact on the success or failure of Bayesian optimization. We show that thoughtful choices can lead to results that exceed expert-level performance in tuning machine learning algorithms. We also describe new algorithms that take into account the variable cost (duration) of learning experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization on a diverse set of contemporary algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks."
]
} |
1411.6370 | 1767300630 | Explosive growth in data and availability of cheap computing resources have sparked increasing interest in Big learning, an emerging subfield that studies scalable machine learning algorithms, systems, and applications with Big Data. Bayesian methods represent one important class of statistic methods for machine learning, with substantial recent developments on adaptive, flexible and scalable Bayesian learning. This article provides a survey of the recent advances in Big learning with Bayesian methods, termed Big Bayesian Learning, including nonparametric Bayesian methods for adaptively inferring model complexity, regularized Bayesian inference for improving the flexibility via posterior regularization, and scalable algorithms and systems based on stochastic subsampling and distributed computing for dealing with large-scale applications. | Small variance asymptotics (SVA) conjoins Bayesian nonparametrics with optimization. It sets up conceptual links between probabilistic and non-probabilistic models and derives new algorithms that can be simple and scalable. For example, a Gaussian mixture model (GMM) reduces to K-means when the likelihood variance goes to zero; similarly, probabilistic PCA reduces to standard PCA by letting the covariance of the likelihood in pPCA approach zero @cite_205 . Recent progress has been made on deriving new computational methods by applying SVA analysis to Bayesian nonparametric models. For example, DP-means is an extension of K-means for nonparametric inference, obtained by applying SVA analysis to DP mixtures @cite_175 @cite_240 . Similar analysis has been done for hidden Markov models @cite_241 , latent feature models @cite_18 , and DP mixtures of SVMs @cite_142 @cite_67 , which perform clustering and classification in a joint framework. Note that the progress on small-variance techniques is orthogonal to the advances in big learning. For example, DP-Means has been scaled to deal with massive data using distributed computing @cite_79 . | {
"cite_N": [
"@cite_18",
"@cite_67",
"@cite_241",
"@cite_175",
"@cite_240",
"@cite_205",
"@cite_79",
"@cite_142"
],
"mid": [
"",
"",
"2112321969",
"2951424696",
"2163281959",
"2136111243",
"2953360824",
"2402257362"
],
"abstract": [
"",
"",
"Small-variance asymptotics provide an emerging technique for obtaining scalable combinatorial algorithms from rich probabilistic models. We present a small-variance asymptotic analysis of the Hidden Markov Model and its infinite-state Bayesian nonparametric extension. Starting with the standard HMM, we first derive a \"hard\" inference algorithm analogous to k-means that arises when particular variances in the model tend to zero. This analysis is then extended to the Bayesian nonparametric case, yielding a simple, scalable, and flexible algorithm for discrete-state sequence data with a non-fixed number of states. We also derive the corresponding combinatorial objective functions arising from our analysis, which involve a k-means-like term along with penalties based on state transitions and the number of states. A key property of such algorithms is that— particularly in the nonparametric setting—standard probabilistic inference algorithms lack scalability and are heavily dependent on good initialization. A number of results on synthetic and real data sets demonstrate the advantages of the proposed framework.",
"Bayesian models offer great flexibility for clustering applications---Bayesian nonparametrics can be used for modeling infinite mixtures, and hierarchical Bayesian models can be utilized for sharing clusters across multiple data sets. For the most part, such flexibility is lacking in classical clustering methods such as k-means. In this paper, we revisit the k-means clustering algorithm from a Bayesian nonparametric viewpoint. Inspired by the asymptotic connection between k-means and mixtures of Gaussians, we show that a Gibbs sampling algorithm for the Dirichlet process mixture approaches a hard clustering algorithm in the limit, and further that the resulting algorithm monotonically minimizes an elegant underlying k-means-like clustering objective that includes a penalty for the number of clusters. We generalize this analysis to the case of clustering multiple data sets through a similar asymptotic argument with the hierarchical Dirichlet process. We also discuss further extensions that highlight the benefits of our analysis: i) a spectral relaxation involving thresholded eigenvectors, and ii) a normalized cut graph clustering algorithm that does not fix the number of clusters in the graph.",
"Sampling and variational inference techniques are two standard methods for inference in probabilistic models, but for many problems, neither approach scales effectively to large-scale data. An alternative is to relax the probabilistic model into a non-probabilistic formulation which has a scalable associated algorithm. This can often be fulfilled by performing small-variance asymptotics, i.e., letting the variance of particular distributions in the model go to zero. For instance, in the context of clustering, such an approach yields connections between the k-means and EM algorithms. In this paper, we explore small-variance asymptotics for exponential family Dirichlet process (DP) and hierarchical Dirichlet process (HDP) mixture models. Utilizing connections between exponential family distributions and Bregman divergences, we derive novel clustering algorithms from the asymptotic limit of the DP and HDP mixtures that features the scalability of existing hard clustering methods as well as the flexibility of Bayesian nonparametric models. We focus on special cases of our analysis for discrete-data problems, including topic modeling, and we demonstrate the utility of our results by applying variants of our algorithms to problems arising in vision and document analysis.",
"Summarising a high dimensional data set with a low dimensional embedding is a standard approach for exploring its structure. In this paper we provide an overview of some existing techniques for discovering such embeddings. We then introduce a novel probabilistic interpretation of principal component analysis (PCA) that we term dual probabilistic PCA (DPPCA). The DPPCA model has the additional advantage that the linear mappings from the embedded space can easily be non-linearised through Gaussian processes. We refer to this model as a Gaussian process latent variable model (GP-LVM). Through analysis of the GP-LVM objective function, we relate the model to popular spectral techniques such as kernel PCA and multidimensional scaling. We then review a practical algorithm for GP-LVMs in the context of large data sets and develop it to also handle discrete valued data and missing attributes. We demonstrate the model on a range of real-world and artificially generated data sets.",
"Research on distributed machine learning algorithms has focused primarily on one of two extremes - algorithms that obey strict concurrency constraints or algorithms that obey few or no such constraints. We consider an intermediate alternative in which algorithms optimistically assume that conflicts are unlikely and if conflicts do arise a conflict-resolution protocol is invoked. We view this \"optimistic concurrency control\" paradigm as particularly appropriate for large-scale machine learning algorithms, particularly in the unsupervised setting. We demonstrate our approach in three problem areas: clustering, feature learning and online facility location. We evaluate our methods via large-scale experiments in a cluster computing environment.",
"Infinite SVM (iSVM) is a Dirichlet process (DP) mixture of large-margin classifiers. Though flexible in learning nonlinear classifiers and discovering latent clustering structures, iSVM has a difficult inference task and existing methods could hinder its applicability to large-scale problems. This paper presents a small-variance asymptotic analysis to derive a simple and efficient algorithm, which monotonically optimizes a maxmargin DP-means (M2DPM) problem, an extension of DP-means for both predictive learning and descriptive clustering. Our analysis is built on Gibbs infinite SVMs, an alternative DP mixture of large-margin machines, which admits a partially collapsed Gibbs sampler without truncation by exploring data augmentation techniques. Experimental results show that M2DPM runs much faster than similar algorithms without sacrificing prediction accuracies."
]
} |
1411.6235 | 1839676477 | Clustering is an effective technique in data mining to generate groups that are the matter of interest. Among various clustering approaches, the family of k-means algorithms and min-cut algorithms gain most popularity due to their simplicity and efficacy. The classical k-means algorithm partitions a number of data points into several subsets by iteratively updating the clustering centers and the associated data points. By contrast, a weighted undirected graph is constructed in min-cut algorithms which partition the vertices of the graph into two sets. However, existing clustering algorithms tend to cluster minority of data points into a subset, which shall be avoided when the target dataset is balanced. To achieve more accurate clustering for balanced dataset, we propose to leverage exclusive lasso on k-means and min-cut to regulate the balance degree of the clustering results. By optimizing our objective functions that build atop the exclusive lasso, we can make the clustering result as much balanced as possible. Extensive experiments on several large-scale datasets validate the advantage of the proposed algorithms compared to the state-of-the-art clustering algorithms. | Previous work @cite_11 has shown that @math -orthogonal non-negative matrix factorization (NMF) is equivalent to relaxed @math -means clustering. Thus, @math -means clustering can be reformulated using the clustering indicator as follows: | {
"cite_N": [
"@cite_11"
],
"mid": [
"2103660993"
],
"abstract": [
"K-means, a simple and effective clustering algorithm, is one of the most widely used algorithms in computer vision community. Traditional k-means is an iterative algorithm — in each iteration new cluster centers are computed and each data point is re-assigned to its nearest center. The cluster re-assignment step becomes prohibitively expensive when the number of data points and cluster centers are large. In this paper, we propose a novel approximate k-means algorithm to greatly reduce the computational complexity in the assignment step. Our approach is motivated by the observation that most active points changing their cluster assignments at each iteration are located on or near cluster boundaries. The idea is to efficiently identify those active points by pre-assembling the data into groups of neighboring points using multiple random spatial partition trees, and to use the neighborhood information to construct a closure for each cluster, in such a way only a small number of cluster candidates need to be considered when assigning a data point to its nearest cluster. Using complexity analysis, real data clustering, and applications to image retrieval, we show that our approach out-performs state-of-the-art approximate k-means algorithms in terms of clustering quality and efficiency."
]
} |
1411.6235 | 1839676477 | Clustering is an effective technique in data mining to generate groups that are the matter of interest. Among various clustering approaches, the family of k-means algorithms and min-cut algorithms gain most popularity due to their simplicity and efficacy. The classical k-means algorithm partitions a number of data points into several subsets by iteratively updating the clustering centers and the associated data points. By contrast, a weighted undirected graph is constructed in min-cut algorithms which partition the vertices of the graph into two sets. However, existing clustering algorithms tend to cluster minority of data points into a subset, which shall be avoided when the target dataset is balanced. To achieve more accurate clustering for balanced dataset, we propose to leverage exclusive lasso on k-means and min-cut to regulate the balance degree of the clustering results. By optimizing our objective functions that build atop the exclusive lasso, we can make the clustering result as much balanced as possible. Extensive experiments on several large-scale datasets validate the advantage of the proposed algorithms compared to the state-of-the-art clustering algorithms. | In the literature, the classical @math -means and its variants have been applied to many data mining applications. For example, Mehrdad @cite_11 propose a harmony @math -means (HKM) algorithm based on the harmony search optimization method and apply it to document clustering; HKM can be proved, by means of finite Markov chain theory, to converge to the global optimum. Zhang @cite_2 propose a new neighborhood density method for selecting initial cluster centers for @math -means clustering. Deepak @cite_3 employ quantization schemes to retain the outcome of clustering operations. Although these methods achieve good performance, they have not considered how to produce a balanced clustering result when the given data points are evenly distributed. By contrast, we aim to develop a balanced @math -means clustering algorithm that addresses this issue well. | {
"cite_N": [
"@cite_2",
"@cite_3",
"@cite_11"
],
"mid": [
"1970820654",
"2125510295",
"2103660993"
],
"abstract": [
"In this paper we present a new clustering method based on k-means that have avoided alternative randomness of initial center. This paper focused on K-means algorithm to the initial value of the dependence of k selected from the aspects of the algorithm is improved. First,the initial clustering number is. Second, through the application of the sub-merger strategy the categories were combined.The algorithm does not require the user is given in advance the number of cluster. Experiments on synthetic datasets are presented to have shown significant improvements in clustering accuracy in comparison with the random k-means.",
"This work examines under what conditions compression methodologies can retain the outcome of clustering operations. We focus on the popular k-Means clustering algorithm and we demonstrate how a properly constructed compression scheme based on post-clustering quantization is capable of maintaining the global cluster structure. Our analytical derivations indicate that a 1-bit moment preserving quantizer per cluster is sufficient to retain the original data clusters. Merits of the proposed compression technique include: a) reduced storage requirements with clustering guarantees, b) data privacy on the original values, and c) shape preservation for data visualization purposes. We evaluate quantization scheme on various high-dimensional datasets, including 1-dimensional and 2-dimensional time-series (shape datasets) and demonstrate the cluster preservation property. We also compare with previously proposed simplification techniques in the time-series area and show significant improvements both on the clustering and shape preservation of the compressed datasets.",
"K-means, a simple and effective clustering algorithm, is one of the most widely used algorithms in computer vision community. Traditional k-means is an iterative algorithm — in each iteration new cluster centers are computed and each data point is re-assigned to its nearest center. The cluster re-assignment step becomes prohibitively expensive when the number of data points and cluster centers are large. In this paper, we propose a novel approximate k-means algorithm to greatly reduce the computational complexity in the assignment step. Our approach is motivated by the observation that most active points changing their cluster assignments at each iteration are located on or near cluster boundaries. The idea is to efficiently identify those active points by pre-assembling the data into groups of neighboring points using multiple random spatial partition trees, and to use the neighborhood information to construct a closure for each cluster, in such a way only a small number of cluster candidates need to be considered when assigning a data point to its nearest cluster. Using complexity analysis, real data clustering, and applications to image retrieval, we show that our approach out-performs state-of-the-art approximate k-means algorithms in terms of clustering quality and efficiency."
]
} |
1411.6235 | 1839676477 | Clustering is an effective technique in data mining to generate groups that are the matter of interest. Among various clustering approaches, the family of k-means algorithms and min-cut algorithms gain most popularity due to their simplicity and efficacy. The classical k-means algorithm partitions a number of data points into several subsets by iteratively updating the clustering centers and the associated data points. By contrast, a weighted undirected graph is constructed in min-cut algorithms which partition the vertices of the graph into two sets. However, existing clustering algorithms tend to cluster minority of data points into a subset, which shall be avoided when the target dataset is balanced. To achieve more accurate clustering for balanced dataset, we propose to leverage exclusive lasso on k-means and min-cut to regulate the balance degree of the clustering results. By optimizing our objective functions that build atop the exclusive lasso, we can make the clustering result as much balanced as possible. Extensive experiments on several large-scale datasets validate the advantage of the proposed algorithms compared to the state-of-the-art clustering algorithms. | Zhou propose the exclusive lasso to model the scenario when variables in the same group compete with each other. They apply it to multi-task feature selection and obtain good performance. The exclusive lasso @cite_6 is defined as follows: | {
"cite_N": [
"@cite_6"
],
"mid": [
"2144567071"
],
"abstract": [
"We propose a novel group regularization which we call exclusive lasso. Unlike the group lasso regularizer that assumes covarying variables in groups, the proposed exclusive lasso regularizer models the scenario when variables in the same group compete with each other. Analysis is presented to illustrate the properties of the proposed regularizer. We present a framework of kernel based multi-task feature selection algorithm based on the proposed exclusive lasso regularizer. An efficient algorithm is derived to solve the related optimization problem. Experiments with document categorization show that our approach outperforms state-of-theart algorithms for multi-task feature selection."
]
} |
1411.6235 | 1839676477 | Clustering is an effective technique in data mining to generate groups that are the matter of interest. Among various clustering approaches, the family of k-means algorithms and min-cut algorithms gain most popularity due to their simplicity and efficacy. The classical k-means algorithm partitions a number of data points into several subsets by iteratively updating the clustering centers and the associated data points. By contrast, a weighted undirected graph is constructed in min-cut algorithms which partition the vertices of the graph into two sets. However, existing clustering algorithms tend to cluster minority of data points into a subset, which shall be avoided when the target dataset is balanced. To achieve more accurate clustering for balanced dataset, we propose to leverage exclusive lasso on k-means and min-cut to regulate the balance degree of the clustering results. By optimizing our objective functions that build atop the exclusive lasso, we can make the clustering result as much balanced as possible. Extensive experiments on several large-scale datasets validate the advantage of the proposed algorithms compared to the state-of-the-art clustering algorithms. | In @cite_6 , the regularizer introduces an @math -norm to combine the weights for the same category used by different data points and an @math -norm to combine the weights of different categories. Since @math -norm tends to achieve a sparse solution, the construction in the exclusive lasso essentially introduces a competition among different categories for the same data points. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2144567071"
],
"abstract": [
"We propose a novel group regularization which we call exclusive lasso. Unlike the group lasso regularizer that assumes covarying variables in groups, the proposed exclusive lasso regularizer models the scenario when variables in the same group compete with each other. Analysis is presented to illustrate the properties of the proposed regularizer. We present a framework of kernel based multi-task feature selection algorithm based on the proposed exclusive lasso regularizer. An efficient algorithm is derived to solve the related optimization problem. Experiments with document categorization show that our approach outperforms state-of-theart algorithms for multi-task feature selection."
]
} |
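The exclusive lasso construction described in the row above (an l1-norm combining weights within a group, squared across groups so that variables in the same group compete) can be sketched numerically. The function name, weights, and grouping below are illustrative examples, not from the cited paper:

```python
import numpy as np

# Sketch of the exclusive lasso penalty: l1 within each group, then a
# squared (l2-style) combination across groups, so variables inside a
# group compete for weight. All names and values here are our own.
def exclusive_lasso_penalty(w, groups):
    return sum(np.sum(np.abs(w[g])) ** 2 for g in groups)

w = np.array([1.0, -2.0, 0.0, 3.0])
groups = [np.array([0, 1]), np.array([2, 3])]
# group 1: (|1| + |-2|)^2 = 9, group 2: (|0| + |3|)^2 = 9
print(exclusive_lasso_penalty(w, groups))  # 18.0
```

Because the l1 part is non-smooth at zero within a group, minimising a loss plus this penalty tends to zero out some members of each group, which is the "competition" the regularizer is designed to induce.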
1411.6685 | 2952242212 | Recent experimental studies confirm the prevalence of the widely known performance anomaly problem in current Wi-Fi networks, and report on the severe network utility degradation caused by this phenomenon. Although a large body of work addressed this issue, we attribute the refusal of prior solutions to their poor implementation feasibility with off-the-shelf hardware and their imprecise modelling of the 802.11 protocol. Their applicability is further challenged today by very high throughput enhancements (802.11n ac) whereby link speeds can vary by two orders of magnitude. Unlike earlier approaches, in this paper we introduce the first rigorous analytical model of 802.11 stations' throughput and airtime in multi-rate settings, without sacrificing accuracy for tractability. We use the proportional-fair allocation criterion to formulate network utility maximisation as a convex optimisation problem for which we give a closed-form solution. We present a fully functional light-weight implementation of our scheme on commodity access points and evaluate this extensively via experiments in a real deployment, over a broad range of network conditions. Results demonstrate that our proposal achieves up to 100 utility gains, can double video streaming goodput and reduces TCP download times by 8x. | Recent experimental studies provide substantial evidence of the prevalence and severity of the rate anomaly problem in current Wi-Fi deployments (see e.g. @cite_8 @cite_30 ). The issue persists despite the large body of work conducted in this space since Heusse first analysed this behaviour @cite_2 . We attribute this to the poor implementation feasibility of prior approaches addressing proportional fairness with off-the-shelf hardware and their inaccurate modelling of the 802.11 protocol, shortcomings that we particularly tackle in this paper. Here we briefly summarise the most relevant research efforts and highlight the key advantages of our proposal. 
| {
"cite_N": [
"@cite_30",
"@cite_2",
"@cite_8"
],
"mid": [
"2120239653",
"1489058467",
"2064675408"
],
"abstract": [
"WiFi-based wireless LANs (WLANs) are widely used for Internet access. They were designed such that an Access Points (AP) serves few associated clients with symmetric uplink downlink traffic patterns. Usage of WiFi hotspots in locations such as airports and large conventions frequently experience poor performance in terms of downlink goodput and responsiveness. We study the various factors responsible for this performance degradation. We analyse and emulate a large conference network environment on our testbed with 45 nodes. We find that presence of asymmetry between the uplink downlink traffic results in backlogged packets at WiFi Access Point's (AP's) transmission queue and subsequent packet losses. This traffic asymmetry results in maximum performance loss for such an environment along with degradation due to rate diversity, fairness and TCP behaviour. We propose our solution WiFox, which (1) adaptively prioritizes AP's channel access over competing STAs avoiding traffic asymmetry (2) provides a fairness framework alleviating the problem of performance loss due to rate-diversity fairness and (3) avoids degradation due to TCP behaviour. We demonstrate that WiFox not only improves downlink goodput by 400-700 but also reduces request's average response time by 30-40 .",
"The performance of the IEEE 802.11b wireless local area networks is analyzed. We have observed that when some mobile hosts use a lower bit rate than the others, the performance of all hosts is considerably degraded. Such a situation is a common case in wireless local area networks in which a host far away from an access point is subject to important signal fading and interference. To cope with this problem, the host changes its modulation type, which degrades its bit rate to some lower value. Typically, 802.11b products degrade the bit rate from 11 Mb s to 5.5, 2, or 1 Mb s when repeated unsuccessful frame transmissions are detected. In such a case, a host transmitting for example at 1 Mb s reduces the throughput of all other hosts transmitting at 11 Mb s to a low value below 1 Mb s. The basic CSMA CA channel access method is at the root of this anomaly: it guarantees an equal long term channel access probability to all hosts. When one host captures the channel for a long time because its bit rate is low, it penalizes other hosts that use the higher rate. We analyze the anomaly theoretically by deriving simple expressions for the useful throughput, validate them by means of simulation, and compare with several performance measurements.",
"We present a measurement study of wireless experience in a diverse set of home environments by deploying an infrastructure, we call WiSe. Our infrastructure consists of OpenWrt-based Access Points (APs) that have been given away to residents for free to be installed as their primary wireless access mechanism. These APs are configured with our specialized measurement and monitoring software that communicates with our measurement controller through an open API. We have collected wireless performance traces from 30 homes for a period in excess of 6 months. To analyze the characteristics of these home wireless environments, we have also developed a simple metric that estimates the likely TCP throughput different clients can expect based on current channel and environmental conditions. With this infrastructure, we provide multiple quantitative observations, some of which are anecdotally understood in our community. For example, while a majority of links performed well most of the time, we observed cases of poor client experience about 2.1 of the total time."
]
} |
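The rate anomaly analysed by Heusse et al. and the proportional-fair alternative pursued in the row above can be illustrated with a small numeric sketch. The station count and PHY rates are hypothetical, and the two closed forms ignore 802.11 contention and framing overheads that the cited work models in full:

```python
import numpy as np

# Sketch of the 802.11 rate anomaly vs. proportional fairness.
# With DCF's equal long-term per-packet channel access and equal frame
# sizes, every station's throughput collapses to roughly 1 / sum(1/r_i).
# Under proportional fairness (maximise the sum of log-throughputs) each
# station instead receives an equal airtime share, i.e. r_i / n.
rates = np.array([54.0, 6.0, 1.0])  # hypothetical PHY rates in Mb/s
n = len(rates)

dcf = np.full(n, 1.0 / np.sum(1.0 / rates))  # every station ~0.84 Mb/s
pf = rates / n                               # [18, 2, 0.33] Mb/s

print(dcf)
print(pf)
```

Note how under DCF the single 1 Mb/s station drags all stations below 1 Mb/s, while the proportional-fair allocation protects fast stations without starving the slow one.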
1411.6685 | 2952242212 | Recent experimental studies confirm the prevalence of the widely known performance anomaly problem in current Wi-Fi networks, and report on the severe network utility degradation caused by this phenomenon. Although a large body of work addressed this issue, we attribute the refusal of prior solutions to their poor implementation feasibility with off-the-shelf hardware and their imprecise modelling of the 802.11 protocol. Their applicability is further challenged today by very high throughput enhancements (802.11n ac) whereby link speeds can vary by two orders of magnitude. Unlike earlier approaches, in this paper we introduce the first rigorous analytical model of 802.11 stations' throughput and airtime in multi-rate settings, without sacrificing accuracy for tractability. We use the proportional-fair allocation criterion to formulate network utility maximisation as a convex optimisation problem for which we give a closed-form solution. We present a fully functional light-weight implementation of our scheme on commodity access points and evaluate this extensively via experiments in a real deployment, over a broad range of network conditions. Results demonstrate that our proposal achieves up to 100 utility gains, can double video streaming goodput and reduces TCP download times by 8x. | @cite_33 undertakes an empirical study to find the contention window settings that achieve proportional fairness, but lacks analytical support and is limited to static scenarios. Heusse propose to control stations' transmission opportunities based on the observed number of idle slots, to tackle airtime fairness @cite_20 ; however, the implementation requires precise time synchronisation and is tightly coupled to specific hardware and a proprietary closed-source firmware @cite_32 . Lee implement O-DCF @cite_0 , whereby a station's packet rate is adjusted according to the MCS employed, to improve network utility. 
This approach requires introducing an additional queueing layer between application and driver. Similarly, individual queues are introduced and controlled for each destination in WiFox @cite_30 . ADWISER @cite_7 tackles rate anomaly only in the , by introducing a dedicated network entity that performs scheduling before the AP. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_7",
"@cite_32",
"@cite_0",
"@cite_20"
],
"mid": [
"2120239653",
"2012321831",
"2018652527",
"2165682314",
"2428951329",
"2104935654"
],
"abstract": [
"WiFi-based wireless LANs (WLANs) are widely used for Internet access. They were designed such that an Access Points (AP) serves few associated clients with symmetric uplink downlink traffic patterns. Usage of WiFi hotspots in locations such as airports and large conventions frequently experience poor performance in terms of downlink goodput and responsiveness. We study the various factors responsible for this performance degradation. We analyse and emulate a large conference network environment on our testbed with 45 nodes. We find that presence of asymmetry between the uplink downlink traffic results in backlogged packets at WiFi Access Point's (AP's) transmission queue and subsequent packet losses. This traffic asymmetry results in maximum performance loss for such an environment along with degradation due to rate diversity, fairness and TCP behaviour. We propose our solution WiFox, which (1) adaptively prioritizes AP's channel access over competing STAs avoiding traffic asymmetry (2) provides a fairness framework alleviating the problem of performance loss due to rate-diversity fairness and (3) avoids degradation due to TCP behaviour. We demonstrate that WiFox not only improves downlink goodput by 400-700 but also reduces request's average response time by 30-40 .",
"We investigate the optimal selection of minimum contention window values to achieve proportional fairness in a multirate IEEE 802.11e test-bed. Unlike other approaches, the proposed model accounts for the contention-based nature of 802.11's MAC layer operation and considers the case where stations can have different weights corresponding to different throughput classes. Our test-bed evaluation considers both the long-term throughput achieved by wireless stations and the short-term fairness. When all stations have the same transmission rate, optimality is achieved when a station's throughput is proportional to its weight factor, and the optimal minimum contention windows also maximize the aggregate throughput. When stations have different transmission rates, the optimal minimum contention window for high rate stations is smaller than for low rate stations. Furthermore, we compare proportional fairness with time-based fairness, which can be achieved by adjusting packet sizes so that low and high rate stations have equal successful transmission times, or by adjusting the transmission opportunity (TXOP)limit so that high rate stations transmit multiple back-to-back packets and thus occupy the channel for the same time as low rate stations that transmit a single packet. The test-bed experiments show that when stations have different transmission rates and the same weight, proportional fairness achieves higher performance than the time-based fairness approaches, in terms of both aggregate utility and throughput.",
"We present a centralized integrated approach for: 1) enhancing the performance of an IEEE 802.11 infrastructure wireless local area network (WLAN), and 2) managing the access link that connects the WLAN to the Internet. Our approach, which is implemented on a standard Linux platform, and which we call ADvanced Wi-fi Internet Service EnhanceR (ADWISER), is an extension of our previous system WLAN Manager (WM). ADWISER addresses several infrastructure WLAN performance anomalies such as mixed-rate inefficiency, unfair medium sharing between simultaneous TCP uploads and downloads, and inefficient utilization of the Internet access bandwidth when Internet transfers compete with LAN-WLAN transfers, etc. The approach is via centralized queueing and scheduling, using a novel, configurable, cascaded packet queueing and scheduling architecture, with an adaptive service rate. In this paper, we describe the design of ADWISER and report results of extensive experimentation conducted on a hybrid testbed consisting of real end-systems and an emulated WLAN on Qualnet. We also present results from a physical testbed consisting of one access point (AP) and a few end-systems.",
"An overwhelming part of research work on wireless networks validates new concepts or protocols with simulation or analytical modeling. Unlike this approach, we present our experience with implementing the Idle Sense access method on programmable off-the-shelf hardware---the Intel IPW2915 abg chipset. We also present measurements and performance comparisons of Idle Sense with respect to the Intel implementation of the 802.11 DCF (Distributed Coordination Function) standard. Implementing a modified MAC protocol on constrained devices presents several challenges: difficulty of programming without support for multiplication, division, and floating point arithmetic, absence of support for debugging and high precision measurement. To achieve our objectives, we had to overcome the limitations of the hardware platform and solve several issues. In particular, we have implemented the adaptation algorithm with approximate values of control parameters without the division operation and taken advantage of some fields in data frames to trace the execution and test the implemented access method. Finally, we have measured its performance to confirm good properties of Idle Sense: it obtains slightly better throughput, much better fairness, and significantly lower collision rate compared to the Intel implementation of the 802.11 DCF standard.",
"This paper proposes a new protocol called Optimal DCF (O-DCF). O-DCF modifies the rule of adapting CSMA parameters, such as backoff time and transmission length, based on a function of the demand--supply differential of link capacity captured by the local queue length. O-DCF is fully compatible with 802.11 hardware, so that it can be easily implemented only with a simple device driver update. O-DCF is inspired by the recent analytical studies proven to be optimal under assumptions, which often generates a big gap between theory and practice. O-DCF effectively bridges such a gap, which is implemented in off-the-shelf 802.11 chipset. Through extensive simulations and real experiments with a 16-node wireless network testbed, we evaluate the performance of O-DCF and show that it achieves near-optimality in terms of throughput and fairness and outperforms other competitive ones, such as 802.11 DCF, optimal CSMA, and DiffQ for various scenarios. Also, we consider the coexistence of O-DCF and 802.11 DCF and show that O-DCF fairly shares the medium with 802.11 via its parameter control.",
"We consider wireless LANs such as IEEE 802.11 operating in the unlicensed radio spectrum. While their nominal bit rates have increased considerably, the MAC layer remains practically unchanged despite much research effort spent on improving its performance. We observe that most proposals for tuning the access method focus on a single aspect and disregard others. Our objective is to define an access method optimized for throughput and fairness, able to dynamically adapt to physical channel conditions, to operate near optimum for a wide range of error rates, and to provide equal time shares when hosts use different bit rates.We propose a novel access method derived from 802.11 DCF [2] (Distributed Coordination Function) in which all hosts use similar values of the contention window CW to benefit from good short-term access fairness. We call our method Idle Sense, because each host observes the mean number of idle slots between transmission attempts to dynamically control its contention window. Unlike other proposals, Idle Sense enables each host to estimate its frame error rate, which can be used for switching to the right bit rate. We present simulations showing how the method leads to high throughput, low collision overhead, and low delay. The method also features fast reactivity and time-fair channel allocation."
]
} |
1411.5838 | 2951402285 | In this paper, we explore the multiple source localisation problem in the cerebral cortex using magnetoencephalography (MEG) data. We model neural currents as point-wise dipolar sources which dynamically evolve over time, then model dipole dynamics using a probabilistic state space model in which dipole locations are strictly constrained to lie within the cortex. Based on the proposed models, we develop a Bayesian particle filtering algorithm for localisation of both known and unknown numbers of dipoles. The algorithm consists of a region of interest (ROI) estimation step for initial dipole number estimation, a Gibbs multiple particle filter (GMPF) step for individual dipole state estimation, and a selection criterion step for selecting the final estimates. The estimated results from the ROI estimation are used to adaptively adjust particle filter's sample size to reduce the overall computational cost. The proposed models and the algorithm are tested in numerical experiments. Results are compared with existing particle filtering methods. The numerical results show that the proposed methods can achieve improved performance metrics in terms of dipole number estimation and dipole localisation. | There are two main types of methods: distributed source approaches, and point-wise dipole approaches @cite_17 . Distributed source methods identify the potential active brain sources that are distributed on a dense grid of fixed locations throughout the whole cerebral cortex (or the whole brain volume if under a looser constraint). Since the number of unknown sources is larger than the number of the M EEG sensors, mathematical assumptions or constraints are required for an unique solution. 
Some existing methods include the least squares minimum norm estimation (MNE) @cite_17 , dynamic statistical parametric mapping (dSPM) @cite_29 , standardized low-resolution electromagnetic tomography (sLORETA) @cite_13 , and Kalman filter related approaches @cite_10 @cite_4 . | {
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_29",
"@cite_10",
"@cite_17"
],
"mid": [
"245658",
"",
"2013150886",
"2101979660",
"2084333685"
],
"abstract": [
"Scalp electric potentials (electroencephalograms) and extracranial magnetic fields (magnetoencephalograms, are due to the primary (impressed) current density distribution that arises from neuronal postsynaptic processes. A solution to the inverse problem-the computation of images of electric neuronal activity based on extracranial measurements-would provide important information on the time-course and localization of brain function. In general, there is no unique solution to this problem. In particular, an instantaneous, distributed, discrete, linear solution capable of exact localization of point sources is of great interest, since the principles of linearity and superposition would guarantee its trustworthiness as a functional imaging method, given that brain activity occurs in the form of a finite number of distributed hot spots. Despite all previous efforts, linear solutions, at best, produced images with systematic nonzero localization errors. A solution reported here yields images of standardized current density with zero local-ization error. The purpose of this paper is to present the technical details of the method, allowing researchers to test, check, reproduce and yalidate the new method.",
"",
"Abstract Functional magnetic resonance imaging (fMRI) can provide maps of brain activation with millimeter spatial resolution but is limited in its temporal resolution to the order of seconds. Here, we describe a technique that combines structural and functional MRI with magnetoencephalography (MEG) to obtain spatiotemporal maps of human brain activity with millisecond temporal resolution. This new technique was used to obtain dynamic statistical parametric maps of cortical activity during semantic processing of visually presented words. An initial wave of activity was found to spread rapidly from occipital visual cortex to temporal, pariet al, and frontal areas within 185 ms, with a high degree of temporal overlap between different areas. Repetition effects were observed in many of the same areas following this initial wave of activation, providing evidence for the involvement of feedback mechanisms in repetition priming.",
"We present a new approach for estimating solutions of the dynamical inverse problem of EEG generation. In contrast to previous approaches, we reinterpret this problem as a filtering problem in a state space framework; for the purpose of its solution, we propose a new extension of Kalman filtering to the case of spatiotemporal dynamics. The temporal evolution of the distributed generators of the EEG can be reconstructed at each voxel of a discretisation of the gray matter of brain. By fitting linear autoregressive models with neighbourhood interactions to EEG time series, new classes of inverse solutions with improved resolution and localisation ability can be explored. For the purposes of model comparison and parameter estimation from given data, we employ a likelihood maximisation approach. Both for instantaneous and dynamical inverse solutions, we derive estimators of the time-dependent estimation error at each voxel. The performance of the algorithm is demonstrated by application to simulated and clinical EEG recordings. It is shown that by choosing appropriate dynamical models, it becomes possible to obtain inverse solutions of considerably improved quality, as compared to the usual instantaneous inverse solutions.",
"Magnetoencephalography (MEG) is a noninvasive technique for investigating neuronal activity in the living human brain. The time resolution of the method is better than 1 ms and the spatial discrimination is, under favorable circumstances, 2--3 mm for sources in the cerebral cortex. In MEG studies, the weak 10 fT--1 pT magnetic fields produced by electric currents flowing in neurons are measured with multichannel SQUID (superconducting quantum interference device) gradiometers. The sites in the cerebral cortex that are activated by a stimulus can be found from the detected magnetic-field distribution, provided that appropriate assumptions about the source render the solution of the inverse problem unique. Many interesting properties of the working human brain can be studied, including spontaneous activity and signal processing following external stimuli. For clinical purposes, determination of the locations of epileptic foci is of interest. The authors begin with a general introduction and a short discussion of the neural basis of MEG. The mathematical theory of the method is then explained in detail, followed by a thorough description of MEG instrumentation, data analysis, and practical construction of multi-SQUID devices. Finally, several MEG experiments performed in the authors' laboratory are described, covering studies of evoked responses and of spontaneousmore » activity in both healthy and diseased brains. Many MEG studies by other groups are discussed briefly as well.« less"
]
} |
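As a generic illustration of the particle-filtering machinery that the dipole-tracking methods above build on, here is a minimal bootstrap (SIR) particle filter for a 1-D linear-Gaussian state-space model. The model, noise levels, and sizes are invented for the sketch; this is not the paper's Gibbs multiple particle filter:

```python
import numpy as np

# Minimal bootstrap particle filter on a toy 1-D model:
#   x_t = a * x_{t-1} + N(0, q),   y_t = x_t + N(0, r)
# All parameters below are illustrative assumptions.
rng = np.random.default_rng(0)
T, N = 100, 2000
a, q, r = 0.95, 0.1, 0.5

# simulate a ground-truth trajectory and noisy observations
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(T)

particles = rng.standard_normal(N)  # crude prior over the initial state
est = np.zeros(T)
for t in range(T):
    particles = a * particles + np.sqrt(q) * rng.standard_normal(N)  # propagate
    w = np.exp(-0.5 * (y[t] - particles) ** 2 / r)                   # Gaussian likelihood
    w /= w.sum()
    est[t] = w @ particles                                           # posterior-mean estimate
    particles = rng.choice(particles, size=N, p=w)                   # multinomial resampling

# the filtered estimate should track the state better than raw observations
print(np.mean((est - x) ** 2) < np.mean((y - x) ** 2))
```

The dipole trackers in the cited work replace this scalar state with constrained dipole locations and moments, and add steps such as ROI-based initialisation, but the propagate/weight/resample loop is the same core.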
1411.6202 | 2293815154 | It has been widely recognized that the performance of a multi-agent system is highly affected by its organization. A large scale system may have billions of possible ways of organization, which makes it impractical to find an optimal choice of organization using exhaustive search methods. In this paper, we propose a genetic algorithm aided optimization scheme for designing hierarchical structures of multi-agent systems. We introduce a novel algorithm, called the hierarchical genetic algorithm, in which hierarchical crossover with a repair strategy and mutation of small perturbation are used. The phenotypic hierarchical structure space is translated to the genome-like array representation space, which makes the algorithm genetic-operator-literate. A case study with 10 scenarios of a hierarchical information retrieval model is provided. Our experiments have shown that competitive baseline structures which lead to the optimal organization in terms of utility can be found by the proposed algorithm during the evolutionary search. Compared with the traditional genetic operators, the newly introduced operators produced better organizations of higher utility more consistently in a variety of test cases. The proposed algorithm extends of the search processes of the state-of-the-art multi-agent organization design methodologies, and is more computationally efficient in a large search space. | The design of a multi-agent system organization has been investigated by many researchers. Early methodologies such as Gaia @cite_22 and OMNI @cite_5 aim to assist the manual design process of agent organizations. Instead of relying heavily on the expertise of human designers, it is desirable to automate the process of producing multi-agent organization designs. In this sense, a quantitative measurement of a set of metrics is needed to rapidly and precisely predict the performance of the MAS. 
With these metrics we can evaluate a number of organization instances, rank them, and select the best one without introducing heavy cost by actually implementing the organization designs. | {
"cite_N": [
"@cite_5",
"@cite_22"
],
"mid": [
"2133149803",
"2111877087"
],
"abstract": [
"Despite all the research done in the last years on the development of methodologies for designing MAS, there is no methodology suitable for the specification and design of MAS in complex domains where both the agent view and the organizational view can be modeled. Current multiagent approaches either take a centralist, static approach to organizational design or take an emergent view in which agent interactions are not pre-determined, thus making it impossible to make any predictions on the behavior of the whole systems. Most of them also lack a model of the norms in the environment that should rule the (emergent) behavior of the agent society as a whole and or the actions of individuals. In this paper, we propose a framework for modeling agent organizations, Organizational Model for Normative Institutions (OMNI), that allows the balance of global organizational requirements with the autonomy of individual agents. It specifies global goals of the system independently from those of the specific agents that populate the system. Both the norms that regulate interaction between agents, as well as the contextual meaning of those interactions are important aspects when specifying the organizational structure.",
"This article presents Gaia: a methodology for agent-oriented analysis and design. The Gaia methodology is both general, in that it is applicable to a wide range of multi-agent systems, and comprehensive, in that it deals with both the macro-level (societ al) and the micro-level (agent) aspects of systems. Gaia is founded on the view of a multi-agent system as a computational organisation consisting of various interacting roles. We illustrate Gaia through a case study (an agent-based business process management system)."
]
} |
1411.6202 | 2293815154 | It has been widely recognized that the performance of a multi-agent system is highly affected by its organization. A large scale system may have billions of possible ways of organization, which makes it impractical to find an optimal choice of organization using exhaustive search methods. In this paper, we propose a genetic algorithm aided optimization scheme for designing hierarchical structures of multi-agent systems. We introduce a novel algorithm, called the hierarchical genetic algorithm, in which hierarchical crossover with a repair strategy and mutation of small perturbation are used. The phenotypic hierarchical structure space is translated to the genome-like array representation space, which makes the algorithm genetic-operator-literate. A case study with 10 scenarios of a hierarchical information retrieval model is provided. Our experiments have shown that competitive baseline structures which lead to the optimal organization in terms of utility can be found by the proposed algorithm during the evolutionary search. Compared with the traditional genetic operators, the newly introduced operators produced better organizations of higher utility more consistently in a variety of test cases. The proposed algorithm extends of the search processes of the state-of-the-art multi-agent organization design methodologies, and is more computationally efficient in a large search space. | In @cite_17 , an organizational design modeling language (ODML) was proposed, and the utility value was defined as the quantitative measurement of the performance of a distributed sensor network and an information retrieval system. Several approaches, including the exploitation of hard constraints and equivalence classes, parallel search, and the use of abstraction, have been studied in order to reduce the complexity of searching for a valid optimal organization. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2064567069"
],
"abstract": [
"As the scale and scope of distributed and multi-agent systems grow, it becomes increasingly important to design and manage the participants' interactions. The potential for bottlenecks, intractably large sets of coordination partners, and shared bounded resources can make individual and high-level goals difficult to achieve. To address these problems, many large systems employ an additional layer of structuring, known as an organizational design, that assigns agents different roles, responsibilities and peers. These additional constraints can allow agents to operate more efficiently within the system by limiting the options they must consider. Different designs applied to the same problem will have different performance characteristics, therefore it is important to understand the behavior of competing candidate designs. In this article, we describe a new representation for capturing such designs, and in particular we show how quantitative information can form the basis of a flexible, predictive organizational model. The representation is capable of capturing a wide range of multi-agent characteristics in a single, succinct model. We demonstrate the language's capabilities and efficacy by comparing a range of metrics predicted by detailed models of a distributed sensor network and information retrieval system to empirical results. These same models also describe the space of possible organizations in those domains and several search techniques are described that can be used to explore this space, using those quantitative predictions and context-specific definitions of utility to evaluate alternatives. The results of such a search process can be used to select the organizational design most appropriate for a given situation."
]
} |
1411.6202 | 2293815154 | It has been widely recognized that the performance of a multi-agent system is highly affected by its organization. A large-scale system may have billions of possible ways of organization, which makes it impractical to find an optimal choice of organization using exhaustive search methods. In this paper, we propose a genetic algorithm aided optimization scheme for designing hierarchical structures of multi-agent systems. We introduce a novel algorithm, called the hierarchical genetic algorithm, in which hierarchical crossover with a repair strategy and mutation of small perturbation are used. The phenotypic hierarchical structure space is translated to the genome-like array representation space, which makes the algorithm genetic-operator-literate. A case study with 10 scenarios of a hierarchical information retrieval model is provided. Our experiments have shown that competitive baseline structures which lead to the optimal organization in terms of utility can be found by the proposed algorithm during the evolutionary search. Compared with the traditional genetic operators, the newly introduced operators produced better organizations of higher utility more consistently in a variety of test cases. The proposed algorithm extends the search processes of the state-of-the-art multi-agent organization design methodologies, and is more computationally efficient in a large search space. | Another organization designer, KB-ORG, which also incorporates quantitative utility as a user evaluation criterion, was proposed for multi-agent systems in @cite_18 . It uses both application-level and coordination-level organization design knowledge to explore the search space of candidate organizations selectively. This approach significantly reduces the exploration effort required to produce effective designs as compared to modeling and evaluation-based approaches that do not incorporate designer expertise. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2144804481"
],
"abstract": [
"The ability to create effective multi-agent organizations is key to the development of larger, more diverse multi-agent systems. In this article we present KB-ORG: a fully automated, knowledge-based organization designer for multi-agent systems. Organization design is the process that accepts organizational goals, environmental expectations, performance requirements, role characterizations, and agent descriptions and assigns roles to each agent. These long-term roles serve as organizational-control guidelines that are used by each agent in making moment-to-moment operational control decisions. An important aspect of KB-ORG is its efficient, knowledge-informed search process for designing multi-agent organizations. KB-ORG uses both application-level and coordination-level organization design knowledge to explore the combinatorial search space of candidate organizations selectively. KB-ORG also delays making coordination-level organizational decisions until it has explored and elaborated candidate application-level agent roles. This approach significantly reduces the exploration effort required to produce effective designs as compared to modeling and evaluation-based approaches that do not incorporate design expertise. KB-ORG designs are not restricted to a single organization form such as a hierarchy, and the organization designs described here contain both hierarchical and peer-to-peer elements. We use examples from the distributed sensor network (DSN) domain to show how KB-ORG uses situational parameters as well as application-level and coordination-level knowledge to generate organization designs. We also show that KB-ORG designs effective, yet substantially different, organizations when given different organizational requirements and environmental expectations."
]
} |
1411.6202 | 2293815154 | It has been widely recognized that the performance of a multi-agent system is highly affected by its organization. A large-scale system may have billions of possible ways of organization, which makes it impractical to find an optimal choice of organization using exhaustive search methods. In this paper, we propose a genetic algorithm aided optimization scheme for designing hierarchical structures of multi-agent systems. We introduce a novel algorithm, called the hierarchical genetic algorithm, in which hierarchical crossover with a repair strategy and mutation of small perturbation are used. The phenotypic hierarchical structure space is translated to the genome-like array representation space, which makes the algorithm genetic-operator-literate. A case study with 10 scenarios of a hierarchical information retrieval model is provided. Our experiments have shown that competitive baseline structures which lead to the optimal organization in terms of utility can be found by the proposed algorithm during the evolutionary search. Compared with the traditional genetic operators, the newly introduced operators produced better organizations of higher utility more consistently in a variety of test cases. The proposed algorithm extends the search processes of the state-of-the-art multi-agent organization design methodologies, and is more computationally efficient in a large search space. | Evolutionary search mechanisms have been used on a few occasions to aid the design of MAS organizations. For example, in @cite_25 , a GA-based algorithm is proposed for coalition structure formation which aims at achieving the goals of high performance, scalability, and a fast convergence rate simultaneously. In @cite_23 , a heuristic search method, called evolutionary organizational search (EOS), which is based on genetic programming (GP), was introduced.
A review of evolutionary methodologies, mostly involving co-evolution, for the engineering of multi-agent market mechanisms can also be found in @cite_9 . These techniques point to a promising direction for organization search in hierarchical multi-agent systems, as exhaustive methods, such as breadth-first search and depth-first search, become inefficient and impractical in a large search space. | {
"cite_N": [
"@cite_9",
"@cite_25",
"@cite_23"
],
"mid": [
"2078587062",
"2085888329",
"133077321"
],
"abstract": [
"The advent of large-scale distributed systems poses unique engineering challenges. In open systems such as the internet it is not possible to prescribe the behaviour of all of the components of the system in advance. Rather, we attempt to design infrastructure, such as network protocols, in such a way that the overall system is robust despite the fact that numerous arbitrary, non-certified, third-party components can connect to our system. Economists have long understood this issue, since it is analogous to the design of the rules governing auctions and other marketplaces, in which we attempt to achieve socially-desirable outcomes despite the impossibility of prescribing the exact behaviour of the market participants, who may attempt to subvert the market for their own personal gain. This field is known as \"mechanism design\": the science of designing rules of a game to achieve a specific outcome, even though each participant may be self-interested. Although it originated in economics, mechanism design has become an important foundation of multi-agent systems (MAS) research. In a traditional mechanism design problem, analytical methods are used to prove that agents' game-theoretically optimal strategies lead to socially desirable outcomes. In many scenarios, traditional mechanism design and auction theory yield clear-cut results; however, there are many situations in which the underlying assumptions of the theory are violated due to the messiness of the real-world. In this paper we review alternative approaches to mechanism design which treat it as an engineering problem and bring to bear engineering design principles, viz.: iterative step-wise refinement of solutions, and satisficing instead of optimization in the face of intractable complexity. We categorize these approaches under the banner of evolutionary mechanism design.",
"As an important coordination and cooperation mechanism in multi-agent systems, coalition of agents exhibits some excellent characteristics and draws researchers' attention increasingly. Cooperation formation has been a very active area of research in multi-agent systems. An efficient algorithm is needed for this topic since the numbers of the possible coalitions are exponential in the number of agents. Genetic algorithm (GA) has been widely reckoned as a useful tool for obtaining high quality and optimal solutions for a broad range of combinatorial optimization problems due to its intelligent advantages of self-organization, self-adaptation and inherent parallelism. This paper proposes a GA-based algorithm for coalition structure formation which aims at achieving goals of high performance, scalability, and fast convergence rate simultaneously. A novel 2D binary chromosome encoding approach and corresponding crossover and mutation operators are presented in this paper. Two valid parental chromosomes are certain to produce a valid offspring under the operation of the crossover operator. This improves the efficiency and shortens the running time greatly. The proposed algorithm is evaluated through a robust comparison with heuristic search algorithms. We have confirmed that our new algorithm is robust, self-adaptive and very efficient by experiments. The results of the proposed algorithm are found to be satisfactory.",
"In this paper, we proposed Evolutionary Organizational Search (EOS), an optimization method for the organizational control of multi-agent systems (MASs) based on genetic programming (GP). EOS adds to the existing armory a metaheuristic extension, which is capable of efficient search and less vulnerable to stalling at local optima than greedy methods due to its stochastic nature. EOS employs a flexible genotype which can be applied to a wide range of tree-shaped organizational forms. EOS also considers special constraints of MASs. A novel mutation operator, the redistribution operator, was proposed. Experiments optimizing an information retrieval system illustrated the adaptation of solutions generated by EOS to environmental changes."
]
} |
1411.6024 | 249046060 | In this paper, we design a new quantum key distribution protocol, allowing two limited semi-quantum or "classical" users to establish a shared secret key with the help of a fully quantum server. A semi-quantum user can only prepare and measure qubits in the computational basis and so must rely on this quantum server to produce qubits in alternative bases and also to perform alternative measurements. However, we assume that the server is untrusted and we prove the unconditional security of our protocol even in the worst case: when this quantum server is an all-powerful adversary. We also compute a lower bound of the key rate of our protocol, in the asymptotic scenario, as a function of the observed error rate in the channel, allowing us to compute the maximally tolerated error of our protocol. Our results show that a semi-quantum protocol may hold similar security to a fully quantum one. | The first multi-user QKD protocol that we are aware of was developed by @cite_23 and it involved the center @math sending Bell states, with one particle sent to each party. Encoding was done by @math and @math performing unitary operations on their respective qubits and returning the result to the center, who performed a measurement, informing @math and @math of the result. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2012145584"
],
"abstract": [
"Abstract Quantum cryptography has been shown to be an effective technology for the secure distribution of keys on point-to-point optical links. We show how the existing techniques can be extended to allow multi-user secure key distribution on optical networks. We demonstrate that using network configurations typical of those found in passive optical network architectures any of the current quantum key distribution protocols can be adapted to implement secure key distribution from any user to any other user. An important feature of these adapted protocols is that the broadcaster, or service provider on the network, does not have to be trusted by the two users who wish to establish a key."
]
} |
1411.6024 | 249046060 | In this paper, we design a new quantum key distribution protocol, allowing two limited semi-quantum or "classical" users to establish a shared secret key with the help of a fully quantum server. A semi-quantum user can only prepare and measure qubits in the computational basis and so must rely on this quantum server to produce qubits in alternative bases and also to perform alternative measurements. However, we assume that the server is untrusted and we prove the unconditional security of our protocol even in the worst case: when this quantum server is an all-powerful adversary. We also compute a lower bound of the key rate of our protocol, in the asymptotic scenario, as a function of the observed error rate in the channel, allowing us to compute the maximally tolerated error of our protocol. Our results show that a semi-quantum protocol may hold similar security to a fully quantum one. | In @cite_12 , a system was described whereby @math and @math send particles, prepared in one of the states @math , or @math , to a quantum center who must then store them in a quantum memory until such future time when a secret key is desired. The protocols of @cite_17 @cite_25 @cite_15 @cite_26 required the users to measure in both @math and @math bases, while the protocols of @cite_25 @cite_29 required @math and @math to apply various unitary operators to the qubits arriving to them from the center. | {
"cite_N": [
"@cite_26",
"@cite_29",
"@cite_15",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"1978697231",
"",
"2099006207",
"2060930219",
"2023306990",
""
],
"abstract": [
"A new multi-user quantum key distribution protocol with mutual authentication is proposed on a star network. Here, two arbitrary users are able to perform key distribution with the assistance of a semi-trusted center. Bell states are used as information carriers and transmitted in a quantum channel between the center and one user. A keyed hash function is utilized to ensure the identities of three parties. Finally, the security of this protocol with respect to various kinds of attacks is discussed.",
"",
"The security of a multi-user quantum communication network protocol using χ-type entangled states (, J. Korean Phys. Soc. 61:1–5, 2012) is analyzed. We find that, by using one χ-type state in this protocol, two participants can only share 2 bits of information, not 4 bits as the authors stated. In addition, we give a special attack strategy by which an eavesdropper can elicit half of the secret information without being detected. Finally, we improve the protocol to be secure against all the present attacks.",
"We propose a theoretical scheme for secure quantum key distribution network following the ideas in quantum dense coding. In this scheme, the server of the network provides the service for preparing and measuring the Bell states, and the users encode the states with local unitary operations. For preventing the server from eavesdropping, we design a decoy when the particle is transmitted between the users. The scheme has high capacity as one particle carries two bits of information and its efficiency for qubits approaches 100 . Moreover, it is unnecessary for the users to store the quantum states, which makes this scheme more convenient in applications than others.",
"Quantum correlations between two particles show nonclassical properties that can be used for providing secure transmission of information. We present a quantum cryptographic system in which users store particles in a transmission center, where their quantum states are preserved using quantum memories. Correlations between the particles stored by two users are created upon request by projecting their product state onto a fully entangled state. Our system allows for secure communication between any pair of users who have particles in the same center. Unlike other quantum cryptographic systems, it can work without quantum channels and it is suitable for building a quantum cryptographic network. We also present a modified system with many centers. 1996 The American Physical Society.",
""
]
} |
1411.6091 | 2952240625 | All that structure from motion algorithms "see" are sets of 2D points. We show that these impoverished views of the world can be faked for the purpose of reconstructing objects in challenging settings, such as from a single image, or from a few images far apart, by recognizing the object and getting help from a collection of images of other objects from the same class. We synthesize virtual views by computing geodesics on novel networks connecting objects with similar viewpoints, and introduce techniques to increase the specificity and robustness of factorization-based object reconstruction in this setting. We report accurate object shape reconstruction from a single image on challenging PASCAL VOC data, which suggests that the current domain of applications of rigid structure-from-motion techniques may be significantly extended. | Several recent papers have exploited class-specific knowledge to improve SfM. The goal in one line of work @cite_15 @cite_50 @cite_10 is to create denser, higher-quality reconstructions rather than to regularize SfM from few images, and it typically requires 3D training data. Closer to our work, Bao and Savarese proposed to reason jointly over object detections and point correspondences @cite_1 to better constrain SfM when there are few scene points shared by different images. Our approach differs in that it focuses on reconstructing the shape of individual objects and can reconstruct from a single image of the target object. | {
"cite_N": [
"@cite_1",
"@cite_15",
"@cite_10",
"@cite_50"
],
"mid": [
"2060772243",
"2143255850",
"",
"2119493293"
],
"abstract": [
"Conventional rigid structure from motion (SFM) addresses the problem of recovering the camera parameters (motion) and the 3D locations (structure) of scene points, given observed 2D image feature points. In this paper, we propose a new formulation called Semantic Structure From Motion (SSFM). In addition to the geometrical constraints provided by SFM, SSFM takes advantage of both semantic and geometrical properties associated with objects in the scene (Fig. 1). These properties allow us to recover not only the structure and motion but also the 3D locations, poses, and categories of objects in the scene. We cast this problem as a max-likelihood problem where geometry (cameras, points, objects) and semantic information (object classes) are simultaneously estimated. The key intuition is that, in addition to image features, the measurements of objects across views provide additional geometrical constraints that relate cameras and scene parameters. These constraints make the geometry estimation process more robust and, in turn, make object detection more accurate. Our framework has the unique ability to: i) estimate camera poses only from object detections, ii) enhance camera pose estimation, compared to feature-point-based SFM algorithms, iii) improve object detections given multiple un-calibrated images, compared to independently detecting objects in single images. Extensive quantitative results on three datasets–LiDAR cars, street-view pedestrians, and Kinect office desktop–verify our theoretical claims.",
"We present a dense reconstruction approach that overcomes the drawbacks of traditional multiview stereo by incorporating semantic information in the form of learned category-level shape priors and object detection. Given training data comprised of 3D scans and images of objects from various viewpoints, we learn a prior comprised of a mean shape and a set of weighted anchor points. The former captures the commonality of shapes across the category, while the latter encodes similarities between instances in the form of appearance and spatial consistency. We propose robust algorithms to match anchor points across instances that enable learning a mean shape for the category, even with large shape variations across instances. We model the shape of an object instance as a warped version of the category mean, along with instance-specific details. Given multiple images of an unseen instance, we collate information from 2D object detectors to align the structure from motion point cloud with the mean shape, which is subsequently warped and refined to approach the actual shape. Extensive experiments demonstrate that our model is general enough to learn semantic priors for different object categories, yet powerful enough to reconstruct individual shapes with large variations. Qualitative and quantitative evaluations show that our framework can produce more accurate reconstructions than alternative state-of-the-art multiview stereo systems.",
"",
"We propose a formulation of monocular SLAM which combines live dense reconstruction with shape priors-based 3D tracking and reconstruction. Current live dense SLAM approaches are limited to the reconstruction of visible surfaces. Moreover, most of them are based on the minimisation of a photo-consistency error, which usually makes them sensitive to specularities. In the 3D pose recovery literature, problems caused by imperfect and ambiguous image information have been dealt with by using prior shape knowledge. At the same time, the success of depth sensors has shown that combining joint image and depth information drastically increases the robustness of the classical monocular 3D tracking and 3D reconstruction approaches. In this work we link dense SLAM to 3D object pose and shape recovery. More specifically, we automatically augment our SLAM system with object specific identity, together with 6D pose and additional shape degrees of freedom for the object(s) of known class in the scene, combining image data and depth information for the pose and shape recovery. This leads to a system that allows for full scaled 3D reconstruction with the known object(s) segmented from the scene. The segmentation enhances the clarity, accuracy and completeness of the maps built by the dense SLAM system, while the dense 3D data aids the segmentation process, yielding faster and more reliable convergence than when using 2D image data alone."
]
} |
1411.5799 | 2950233531 | Factor analysis provides linear factors that describe relationships between individual variables of a data set. We extend this classical formulation into linear factors that describe relationships between groups of variables, where each group represents either a set of related variables or a data set. The model also naturally extends canonical correlation analysis to more than two sets, in a way that is more flexible than previous extensions. Our solution is formulated as variational inference of a latent variable model with structural sparsity, and it consists of two hierarchical levels: The higher level models the relationships between the groups, whereas the lower level models the observed variables given the higher level. We show that the resulting solution solves the group factor analysis problem accurately, outperforming alternative factor analysis based solutions as well as more straightforward implementations of group factor analysis. The method is demonstrated on two life science data sets, one on brain activation and the other on systems biology, illustrating its applicability to the analysis of different types of high-dimensional data sources. | For two groups, @math , the model is equivalent to Bayesian CCA and inter-battery factor analysis @cite_4 ; some factors model the correlations, whereas others describe the residual noise within either group. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2144903813"
],
"abstract": [
"Canonical correlation analysis (CCA) is a classical method for seeking correlations between two multivariate data sets. During the last ten years, it has received more and more attention in the machine learning community in the form of novel computational formulations and a plethora of applications. We review recent developments in Bayesian models and inference methods for CCA which are attractive for their potential in hierarchical extensions and for coping with the combination of large dimensionalities and small sample sizes. The existing methods have not been particularly successful in fulfilling the promise yet; we introduce a novel efficient solution that imposes group-wise sparsity to estimate the posterior of an extended model which not only extracts the statistical dependencies (correlations) between data sets but also decomposes the data into shared and data set-specific components. In statistics literature the model is known as inter-battery factor analysis (IBFA), for which we now provide a Bayesian treatment."
]
} |
1411.5799 | 2950233531 | Factor analysis provides linear factors that describe relationships between individual variables of a data set. We extend this classical formulation into linear factors that describe relationships between groups of variables, where each group represents either a set of related variables or a data set. The model also naturally extends canonical correlation analysis to more than two sets, in a way that is more flexible than previous extensions. Our solution is formulated as variational inference of a latent variable model with structural sparsity, and it consists of two hierarchical levels: The higher level models the relationships between the groups, whereas the lower level models the observed variables given the higher level. We show that the resulting solution solves the group factor analysis problem accurately, outperforming alternative factor analysis based solutions as well as more straightforward implementations of group factor analysis. The method is demonstrated on two life science data sets, one on brain activation and the other on systems biology, illustrating its applicability to the analysis of different types of high-dimensional data sources. | Most multi-set extensions of CCA, however, are not equivalent to our model. For example, @cite_12 and @cite_18 extend CCA for @math , but instead of GFA they solve the more limited problem of multiple-battery factor analysis @cite_29 @cite_2 . The MBFA models provide one set of factors that describe the relationships between groups, and then model the variation specific to each group either with a free covariance matrix or a separate set of factors for that group. Besides the multi-set extensions of CCA, the probabilistic interpretation of sparse matrix factorization @cite_11 and the JIVE model for integrated analysis of multiple data types @cite_10 @cite_20 also belong to the family of MBFA models.
These models differ in their priors, parameterization and inference, but are all conceptually equivalent. | {
"cite_N": [
"@cite_18",
"@cite_29",
"@cite_2",
"@cite_20",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"1984946761",
"",
"2052637985",
"",
"2065760681",
"2126497681",
"1968244248"
],
"abstract": [
"Abstract In this paper we describe a method for functional connectivity analysis of fMRI data between given brain regions-of-interest (ROIs). The method relies on nonnegativity constrained- and spatially regularized multiset canonical correlation analysis (CCA), and assigns weights to the fMRI signals of the ROIs so that their representative signals become simultaneously maximally correlated. The different pairwise correlations between the representative signals of the ROIs are combined using the maxvar approach for multiset CCA, which has been shown to be equivalent to the generalized eigenvector formulation of CCA. The eigenvector in the maxvar approach gives an indication of the relative importance of each ROI in obtaining a maximal overall correlation, and hence, can be interpreted as a functional connectivity pattern of the ROIs. The successive canonical correlations define subsequent functional connectivity patterns, in decreasing order of importance. We apply our method on synthetic data and real fMRI data and show its advantages compared to unconstrained CCA and to PCA. Furthermore, since the representative signals for the ROIs are optimized for maximal correlation they are also ideally suited for further effective connectivity analyses, to assess the information flows between the ROIs in the brain.",
"",
"McDonald's (1970) generalization of Tucker's (1958) inter-battery factor analysis model to multiple batteries is considered. The identification of parameters is examined and an iterative algorithm for obtaining maximum-likelihood estimates is provided. Consideration of the relationship between inter-battery factor analysis and canonical correlation analysis in the case of two batteries suggests a generalization of canonical coefficients to the situation where there are several batteries. Examples of the application of the procedure are given.",
"",
"Research in several fields now requires the analysis of datasets in which multiple high-dimensional types of data are available for a common set of objects. In particular, The Cancer Genome Atlas (TCGA) includes data from several diverse genomic technologies on the same cancerous tumor samples. In this paper we introduce Joint and Individual Variation Explained (JIVE), a general decomposition of variation for the integrated analysis of such datasets. The decomposition consists of three terms: a low-rank approximation capturing joint variation across data types, low-rank approximations for structured variation individual to each data type, and residual noise. JIVE quantifies the amount of joint variation between data types, reduces the dimensionality of the data, and provides new directions for the visual exploration of joint and individual structure. The proposed method represents an extension of Principal Component Analysis and has clear advantages over popular two-block methods such as Canonical Correlation Analysis and Partial Least Squares. A JIVE analysis of gene expression and miRNA data on Glioblastoma Multiforme tumor samples reveals gene-miRNA associations and provides better characterization of tumor types.",
"We present a generative model for performing sparse probabilistic projections, which includes sparse principal component analysis and sparse canonical correlation analysis as special cases. Sparsity is enforced by means of automatic relevance determination or by imposing appropriate prior distributions, such as generalised hyperbolic distributions. We derive a variational Expectation-Maximisation algorithm for the estimation of the hyperparameters and show that our novel probabilistic approach compares favourably to existing techniques. We illustrate how the proposed method can be applied in the context of cryptoanalysis as a preprocessing tool for the construction of template attacks.",
"Building a common representation for several related data sets is an important problem in multi-view learning. CCA and its extensions have shown that they are effective in finding the shared variation among all data sets. However, these models generally fail to exploit the common structure of the data when the views are with private information. Recently, methods explicitly modeling the information into shared part and private parts have been proposed, but they presume to know the prior knowledge about the latent space, which is usually impossible to obtain. In this paper, we propose a probabilistic model, which could simultaneously learn the structure of the latent space whilst factorize the information correctly, therefore the prior knowledge of the latent space is unnecessary. Furthermore, as a probabilistic model, our method is able to deal with missing data problem in a natural way. We show that our approach attains the performance of state-of-art methods on the task of human pose estimation when the motion capture view is completely missing, and significantly improves the inference accuracy with only a few observed data."
]
} |
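The JIVE decomposition summarized in the first abstract above splits each data block into joint, individual, and residual terms. A one-pass sketch with truncated SVDs is below (the toy data and ranks are hypothetical; the published JIVE method alternates these two steps until convergence rather than running them once):

```python
import numpy as np

def low_rank(M, r):
    """Best rank-r approximation of M via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r]

def jive_one_pass(blocks, joint_rank, indiv_ranks):
    """One-pass sketch of a JIVE-style split X_i = J_i + A_i + E_i:
    joint variation from a truncated SVD of the vertically stacked blocks
    (which share samples as columns), individual variation from truncated
    SVDs of the per-block residuals. The published algorithm iterates
    these steps to convergence; this single pass is an illustration."""
    stacked_joint = low_rank(np.vstack(blocks), joint_rank)
    joints, indivs, row = [], [], 0
    for X, r_i in zip(blocks, indiv_ranks):
        J = stacked_joint[row:row + X.shape[0]]   # this block's joint part
        joints.append(J)
        indivs.append(low_rank(X - J, r_i))       # structure left after joint
        row += X.shape[0]
    return joints, indivs

# toy data: two blocks over 40 shared samples, driven by one joint factor,
# with a small individual factor in the second block
rng = np.random.default_rng(0)
v = rng.standard_normal((1, 40))
X1 = rng.standard_normal((5, 1)) @ v
X2 = (rng.standard_normal((6, 1)) @ v
      + 0.1 * rng.standard_normal((6, 1)) @ rng.standard_normal((1, 40)))
joints, indivs = jive_one_pass([X1, X2], joint_rank=1, indiv_ranks=[1, 1])
```

With the joint factor dominant, the joint-plus-individual reconstruction of each block recovers most of its variation even in this single pass.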
1411.5799 | 2950233531 | Factor analysis provides linear factors that describe relationships between individual variables of a data set. We extend this classical formulation into linear factors that describe relationships between groups of variables, where each group represents either a set of related variables or a data set. The model also naturally extends canonical correlation analysis to more than two sets, in a way that is more flexible than previous extensions. Our solution is formulated as variational inference of a latent variable model with structural sparsity, and it consists of two hierarchical levels: The higher level models the relationships between the groups, whereas the lower models the observed variables given the higher level. We show that the resulting solution solves the group factor analysis problem accurately, outperforming alternative factor analysis based solutions as well as more straightforward implementations of group factor analysis. The method is demonstrated on two life science data sets, one on brain activation and the other on systems biology, illustrating its applicability to the analysis of different types of high-dimensional data sources. | Other models using group-wise sparsity for regression have also been presented, most notably group lasso @cite_37 @cite_31 that uses a group norm for regularizing linear regression. Compared to GFA, lasso lacks the advantages of factor regression; for multiple output cases it predicts each variable independently, instead of learning a latent representation that captures the relationships between the inputs and outputs. GFA has the further advantage that it learns the predictive models for not only all variables but in fact for all groups at the same time. Given a GFA solution one can make predictions for arbitrary subsets of the groups given another subset, instead of needing to specify in advance the split into explanatory and dependent variables. | {
"cite_N": [
"@cite_37",
"@cite_31"
],
"mid": [
"2138019504",
"2082562176"
],
"abstract": [
"Summary. We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor analysis-of-variance problem as the most important and well-known example. Instead of selecting factors by stepwise backward elimination, we focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection. The lasso, the LARS algorithm and the non-negative garrotte are recently proposed regression methods that can be used to select individual variables. We study and propose efficient algorithms for the extensions of these methods for factor selection and show that these extensions give superior performance to the traditional stepwise backward elimination method in factor selection problems. We study the similarities and the differences between these methods. Simulations and real examples are used to illustrate the methods.",
"Penalized regression is an attractive framework for variable selection problems. Often, variables possess a grouping structure, and the relevant selection problem is that of selecting groups, not individual variables. The group lasso has been proposed as a way of extending the ideas of the lasso to the problem of group selection. Nonconvex penalties such as SCAD and MCP have been proposed and shown to have several advantages over the lasso; these penalties may also be extended to the group selection problem, giving rise to group SCAD and group MCP methods. Here, we describe algorithms for fitting these models stably and efficiently. In addition, we present simulation results and real data examples comparing and contrasting the statistical properties of these methods."
]
} |
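The group norm behind group lasso can be illustrated through its proximal operator, which shrinks or zeroes out whole groups of coefficients at once; this is a minimal sketch with hypothetical toy values, not the fitting algorithms of the cited papers:

```python
import numpy as np

def group_lasso_penalty(w, groups, lam):
    """Group norm: lam * sum over groups g of the Euclidean norm ||w_g||."""
    return lam * sum(np.linalg.norm(w[idx]) for idx in groups)

def prox_group_lasso(w, groups, lam):
    """Proximal step for the group norm: shrinks each group's norm by lam
    and zeroes out whole groups whose norm is at most lam."""
    out = w.copy()
    for idx in groups:
        norm = np.linalg.norm(w[idx])
        out[idx] = 0.0 if norm <= lam else (1.0 - lam / norm) * w[idx]
    return out

w = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
shrunk = prox_group_lasso(w, groups, lam=1.0)
# the strong group [3, 4] survives (shrunk to [2.4, 3.2]);
# the weak group [0.1, -0.1] is zeroed out entirely
```

Zeroing entire groups rather than individual coefficients is exactly the group-selection behavior the text contrasts with the ordinary lasso.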
1411.5795 | 2949571767 | Participatory sensing has emerged recently as a promising approach to large-scale data collection. However, without incentives for users to regularly contribute good quality data, this method is unlikely to be viable in the long run. In this paper, we link incentive to users' demand for consuming compelling services, as an approach complementary to conventional credit or reputation based approaches. With this demand-based principle, we design two incentive schemes, Incentive with Demand Fairness (IDF) and Iterative Tank Filling (ITF), for maximizing fairness and social welfare, respectively. Our study shows that the IDF scheme is max-min fair and can score close to 1 on the Jain's fairness index, while the ITF scheme maximizes social welfare and achieves a unique Nash equilibrium which is also Pareto and globally optimal. We adopted a game theoretic approach to derive the optimal service demands. Furthermore, to address practical considerations, we use a stochastic programming technique to handle uncertainty that is often encountered in real life situations. | In the context of wireless ad hoc networks, incentive was studied as a means to stimulate each node to forward packets for other nodes, under the assumption that nodes are self-interested and try to conserve their own energy and transmission bandwidth. The approaches can be broadly classified into credit-based and reputation-based categories. For instance, Buttyán and Hubaux @cite_13 proposed a credit-based mechanism using a virtual currency called nuglet: forwarding one packet for others will earn one nuglet while sending one's own packet will consume one nuglet. Marbach @cite_11 , on the other hand, added flexibility by allowing each node to freely decide on a forwarding price as well as sending rate in an adaptive manner.
In reputation-based systems such as @cite_15 , each node's behavior is observed and evaluated by its neighbors and will further induce rewards or punishments based on the evaluation. | {
"cite_N": [
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"2099376754",
"2135782021",
"2156046876"
],
"abstract": [
"In an ad-hoc network, intermediate nodes on a communication path are expected to forward packets of other nodes so that the mobile nodes can communicate beyond their wireless transmission range. However, because wireless mobile nodes are usually constrained by limited power and computation resources, a selfish node may be unwilling to spend its resources in forwarding packets which are not of its direct interest, even though it expects other nodes to forward its packets to the destination. It has been shown that the presence of such selfish nodes degrades the overall performance of a non-cooperative ad hoc network. To address this problem, we propose a secure and objective reputation-based incentive (SORI) scheme to encourage packet forwarding and discipline selfish behavior. Different from the existing schemes, under our approach, the reputation of a node is quantified by objective measures, and the propagation of reputation is efficiently secured by a one-way-hash-chain-based authentication scheme. Armed with the reputation-based mechanism, we design a punishment scheme to penalize selfish nodes. The experimental results show that the proposed scheme can successfully identify selfish nodes and punish them accordingly.",
"In military and rescue applications of mobile ad hoc networks, all the nodes belong to the same authority; therefore, they are motivated to cooperate in order to support the basic functions of the network. In this paper, we consider the case when each node is its own authority and tries to maximize the benefits it gets from the network. More precisely, we assume that the nodes are not willing to forward packets for the benefit of other nodes. This problem may arise in civilian applications of mobile ad hoc networks. In order to stimulate the nodes for packet forwarding, we propose a simple mechanism based on a counter in each node. We study the behavior of the proposed mechanism analytically and by means of simulations, and detail the way in which it could be protected against misuse.",
"We consider a market-based approach to stimulate cooperation in ad hoc networks where nodes charge a price for relaying data packets. Assuming that nodes set prices to maximize their own net benefit, we characterize the equilibria of the resulting market. In addition, we propose an iterative algorithm for the nodes to adapt their price and rate allocation, and study its convergence behavior. We use a numerical case study to illustrate our results."
]
} |
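The nuglet counter described in this row (forwarding earns one nuglet, sending one's own packet spends one) can be sketched as a toy class; the behavior shown is an illustration of the counter rule only, not the cited protocol:

```python
class NugletNode:
    """Toy model of the nuglet counter: forwarding a packet for another
    node earns one nuglet, sending an own packet spends one."""

    def __init__(self, nuglets=0):
        self.nuglets = nuglets

    def forward_for_other(self):
        self.nuglets += 1

    def try_send_own(self):
        """An own packet can only be sent if the node holds a nuglet."""
        if self.nuglets < 1:
            return False
        self.nuglets -= 1
        return True

node = NugletNode()
assert not node.try_send_own()   # a node with no nuglets cannot send
node.forward_for_other()
assert node.try_send_own()       # one forwarded packet pays for one own packet
```

The counter couples a node's own traffic to the service it provides, which is the cooperation incentive the text describes.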
1411.5795 | 2949571767 | Participatory sensing has emerged recently as a promising approach to large-scale data collection. However, without incentives for users to regularly contribute good quality data, this method is unlikely to be viable in the long run. In this paper, we link incentive to users' demand for consuming compelling services, as an approach complementary to conventional credit or reputation based approaches. With this demand-based principle, we design two incentive schemes, Incentive with Demand Fairness (IDF) and Iterative Tank Filling (ITF), for maximizing fairness and social welfare, respectively. Our study shows that the IDF scheme is max-min fair and can score close to 1 on the Jain's fairness index, while the ITF scheme maximizes social welfare and achieves a unique Nash equilibrium which is also Pareto and globally optimal. We adopted a game theoretic approach to derive the optimal service demands. Furthermore, to address practical considerations, we use a stochastic programming technique to handle uncertainty that is often encountered in real life situations. | Recently, Park and van der Schaar @cite_4 introduced an intervention device that can take a variety of actions to influence users to cooperate and avoid inefficiency , under the assumption that the device can monitor a random access network, such as a CSMA network, perfectly. In our work, we focus on a different context, participatory sensing, and will show that our designed scheme achieves Pareto efficiency. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2152350936"
],
"abstract": [
"Overcoming the inefficiency of non-cooperative out-comes poses an important challenge for network managers in achieving efficient utilization of network resources. This paper studies a class of incentive schemes based on intervention, which are aimed to drive self-interested users towards a system objective. A manager can implement an intervention scheme by introducing in the network an intervention device that is able to monitor the actions of users and to take an action that influences the network usage of users. We consider the case of perfect monitoring, where the intervention device can immediately observe the actions of users without errors. We also assume that there exist actions of the intervention device that are most and least preferred by all users and the intervention device, regardless of the actions of users. We derive analytical results about the outcomes achievable with intervention and optimal intervention rules, and illustrate the results with an example based on random access networks."
]
} |
1411.6061 | 2101905790 | The structure of real-world social networks in large part determines the evolution of social phenomena, including opinion formation, diffusion of information and influence, and the spread of disease. Globally, network structure is characterized by features such as degree distribution, degree assortativity, and clustering coefficient. However, information about global structure is usually not available to each vertex. Instead, each vertex's knowledge is generally limited to the locally observable portion of the network consisting of the subgraph over its immediate neighbors. Such subgraphs, known as ego networks, have properties that can differ substantially from those of the global network. In this paper, we study the structural properties of ego networks and show how they relate to the global properties of networks from which they are derived. Through empirical comparisons and mathematical derivations, we show that structural features, similar to static attributes, suffer from paradoxes. We quantify the differences between global information about network structure and local estimates. This knowledge allows us to better identify and correct the biases arising from incomplete local information. | With traditional independent data, the global statistics of a population remain unbiased estimates for subsets. In networks, however, the complex dependencies can skew localized statistics, leading to inhomogeneity at different scales and positions. Numerous efforts have been made to develop generative models which can reproduce realistic structure with simple local algorithms @cite_8 @cite_21 @cite_26 @cite_5 . Unfortunately, structural features are so intertwined that preserving one often biases another. The same difficulty is also observed in graph sampling, where different sampling techniques can lead to different biases @cite_6 @cite_10 . | {
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_21",
"@cite_6",
"@cite_5",
"@cite_10"
],
"mid": [
"2016755074",
"2008620264",
"2033193852",
"2146008005",
"2000042664",
"1519020025"
],
"abstract": [
"We present a generator of random networks where both the degree-dependent clustering coefficient and the degree distribution are tunable. Following the same philosophy as in the configuration model, the degree distribution and the clustering coefficient for each class of nodes of degree @math are fixed ad hoc and a priori. The algorithm generates corresponding topologies by applying first a closure of triangles and second the classical closure of remaining free stubs. The procedure unveils an universal relation among clustering and degree-degree correlations for all networks, where the level of assortativity establishes an upper limit to the level of clustering. Maximum assortativity ensures no restriction on the decay of the clustering coefficient whereas disassortativity sets a stronger constraint on its behavior. Correlation measures in real networks are seen to observe this structural bound.",
"Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.",
"We study assortative mixing in networks, the tendency for vertices in networks to be connected to other vertices that are like (or unlike) them in some way. We consider mixing according to discrete characteristics such as language or race in social networks and scalar characteristics such as age. As a special example of the latter we consider mixing according to vertex degree, i.e., according to the number of connections vertices have to other vertices: do gregarious people tend to associate with other gregarious people? We propose a number of measures of assortative mixing appropriate to the various mixing types, and apply them to a variety of real-world networks, showing that assortative mixing is a pervasive phenomenon found in many networks. We also propose several models of assortatively mixed networks, both analytic ones based on generating function methods, and numerical ones based on Monte Carlo graph generation techniques. We use these models to probe the properties of networks as their level of assortativity is varied. In the particular case of mixing by degree, we find strong variation with assortativity in the connectivity of the network and in the resilience of the network to the removal of vertices.",
"Given a huge real graph, how can we derive a representative sample? There are many known algorithms to compute interesting measures (shortest paths, centrality, betweenness, etc.), but several of them become impractical for large graphs. Thus graph sampling is essential. The natural questions to ask are (a) which sampling method to use, (b) how small can the sample size be, and (c) how to scale up the measurements of the sample (e.g., the diameter), to get estimates for the large graph. The deeper, underlying question is subtle: how do we measure success? We answer the above questions, and test our answers by thorough experiments on several, diverse datasets, spanning thousands of nodes and edges. We consider several sampling methods, propose novel methods to check the goodness of sampling, and develop a set of scaling laws that describe relations between the properties of the original and the sample. In addition to the theoretical contributions, the practical conclusions from our work are: Sampling strategies based on edge selection do not perform well; simple uniform random node selection performs surprisingly well. Overall, best performing methods are the ones based on random-walks and \"forest fire\"; they match very accurately both static as well as evolutionary graph patterns, with sample sizes down to about 15% of the original graph.",
"Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution—the part of the distribution representing large but rare events—and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov (KS) statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data, while in others the power law is ruled out.",
"Department of Physics and Astronomy, University of New Mexico, Albuquerque NM 87131 (aaron,moore)@cs.unm.edu (Dated: February 2, 2008) A great deal of effort has been spent measuring topological features of the Internet. However, it was recently argued that sampling based on taking paths or traceroutes through the network from a small number of sources introduces a fundamental bias in the observed degree distribution. We examine this bias analytically and experimentally. For Erdős-Rényi random graphs with mean degree c, we show analytically that traceroute sampling gives an observed degree distribution P(k) ∼ k"
]
} |
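A classic example of the simple local generative algorithms mentioned in this row is preferential attachment; the sketch below uses toy parameters, and the deduplication-by-set step is a simplification of the original model:

```python
import random

def preferential_attachment(n, m, seed=0):
    """Growing-network sketch: each new vertex attaches up to m edges,
    choosing targets with probability proportional to current degree
    (implemented by sampling uniformly from a list in which each vertex
    appears once per unit of degree)."""
    rng = random.Random(seed)
    targets = list(range(m))   # the first new vertex links to the m seed vertices
    endpoints = []             # vertex v appears here deg(v) times
    edges = []
    for v in range(m, n):
        for t in set(targets):           # drop repeated picks of the same target
            edges.append((v, t))
            endpoints += [v, t]
        # the next vertex's targets, biased toward high-degree vertices
        targets = [rng.choice(endpoints) for _ in range(m)]
    return edges

edges = preferential_attachment(200, 2)
```

Because high-degree vertices occupy more slots in `endpoints`, early vertices accumulate connections, producing the heavy-tailed degree distributions the row refers to.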
1411.6061 | 2101905790 | The structure of real-world social networks in large part determines the evolution of social phenomena, including opinion formation, diffusion of information and influence, and the spread of disease. Globally, network structure is characterized by features such as degree distribution, degree assortativity, and clustering coefficient. However, information about global structure is usually not available to each vertex. Instead, each vertex's knowledge is generally limited to the locally observable portion of the network consisting of the subgraph over its immediate neighbors. Such subgraphs, known as ego networks, have properties that can differ substantially from those of the global network. In this paper, we study the structural properties of ego networks and show how they relate to the global properties of networks from which they are derived. Through empirical comparisons and mathematical derivations, we show that structural features, similar to static attributes, suffer from paradoxes. We quantify the differences between global information about network structure and local estimates. This knowledge allows us to better identify and correct the biases arising from incomplete local information. | Degree distribution is one of the best studied aspects of networks. Many real world networks display "scale-free" @cite_8 @cite_5 , or power law degree distributions: where @math , @math and the range of the distribution is @math . The exponent @math is usually in the range @math . | {
"cite_N": [
"@cite_5",
"@cite_8"
],
"mid": [
"2000042664",
"2008620264"
],
"abstract": [
"Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution—the part of the distribution representing large but rare events—and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov (KS) statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data, while in others the power law is ruled out.",
"Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems."
]
} |
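The power-law degree distribution P(k) ∝ k^(−γ) discussed in this row can be illustrated numerically with inverse-transform sampling and the continuous maximum-likelihood exponent estimate; this is a toy sketch with an assumed exponent, not the full fitting framework of the cited paper:

```python
import math
import random

def sample_power_law(n, gamma, k_min=1.0, seed=0):
    """Inverse-transform samples from a continuous power law
    p(k) proportional to k^(-gamma) for k >= k_min, using
    k = k_min * (1 - u)^(-1 / (gamma - 1)) with u uniform on [0, 1)."""
    rng = random.Random(seed)
    return [k_min * (1.0 - rng.random()) ** (-1.0 / (gamma - 1.0))
            for _ in range(n)]

def mle_exponent(samples, k_min=1.0):
    """Continuous maximum-likelihood estimate of the exponent:
    gamma_hat = 1 + n / sum(ln(k_i / k_min))."""
    return 1.0 + len(samples) / sum(math.log(k / k_min) for k in samples)

ks = sample_power_law(100_000, gamma=2.5)
gamma_hat = mle_exponent(ks)   # recovers a value close to the true 2.5
```

The estimator's standard error scales as (γ − 1)/√n, so with 10^5 samples the recovered exponent sits well within the typical empirical range the row mentions.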
1411.6061 | 2101905790 | The structure of real-world social networks in large part determines the evolution of social phenomena, including opinion formation, diffusion of information and influence, and the spread of disease. Globally, network structure is characterized by features such as degree distribution, degree assortativity, and clustering coefficient. However, information about global structure is usually not available to each vertex. Instead, each vertex's knowledge is generally limited to the locally observable portion of the network consisting of the subgraph over its immediate neighbors. Such subgraphs, known as ego networks, have properties that can differ substantially from those of the global network. In this paper, we study the structural properties of ego networks and show how they relate to the global properties of networks from which they are derived. Through empirical comparisons and mathematical derivations, we show that structural features, similar to static attributes, suffer from paradoxes. We quantify the differences between global information about network structure and local estimates. This knowledge allows us to better identify and correct the biases arising from incomplete local information. | The complexity of real world networks does not stop at pair-wise correlations. Clustering coefficient goes one step further, capturing correlations among triplets of vertices @cite_14 . The local version is defined as the probability that a third edge between two neighbors of the same vertex @math would complete a triangle, with @math , where @math is the number of triangles containing the vertex @math . We can aggregate @math over the set of vertices of a given degree @math , and get the degree-dependent clustering coefficient @cite_25 , where @math is the number of vertices of degree k.
In real world networks, it has been observed that @math is also a power law function of degree, @math where @math typically ranges from @math , with networks having strong hierarchical structures corresponding to @math @cite_1 . @math is a constant depending on global clustering coefficient @math . Given the degree distribution @math , we can recover @math where we only consider vertices with @math . | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_25"
],
"mid": [
"2112090702",
"2018049970",
"2151200985"
],
"abstract": [
"Networks of coupled dynamical systems have been used to model biological oscillators1,2,3,4, Josephson junction arrays5,6, excitable media7, neural networks8,9,10, spatial games11, genetic control networks12 and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon13,14 (popularly known as six degrees of separation15). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.",
"Abstract Many real networks in nature and society share two generic properties: they are scale-free and they display a high degree of clustering. We show that these two features are the consequence of a hierarchical organization, implying that small groups of nodes organize in a hierarchical manner into increasingly large groups, while maintaining a scale-free topology. In hierarchical networks, the degree of clustering characterizing the different groups follows a strict scaling law, which can be used to identify the presence of a hierarchical organization in real networks. We find that several real networks, such as the Worldwideweb, actor network, the Internet at the domain level, and the semantic web obey this scaling law, indicating that hierarchy is a fundamental characteristic of many complex systems.",
"We study the large-scale topological and dynamical properties of real Internet maps at the autonomous system level, collected in a 3-yr time interval. We find that the connectivity structure of the Internet presents statistical distributions settled in a well-defined stationary state. The large-scale properties are characterized by a scale-free topology consistent with previous observations. Correlation functions and clustering coefficients exhibit a remarkable structure due to the underlying hierarchical organization of the Internet. The study of the Internet time evolution shows a growth dynamics with aging features typical of recently proposed growing network models. We compare the properties of growing network models with the present real Internet data analysis."
]
} |
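The local clustering coefficient defined in this row, c_i = 2T_i / (k_i(k_i − 1)) with T_i the number of triangles containing vertex i, can be computed directly from an adjacency list; this is a minimal sketch on a hypothetical toy graph:

```python
from itertools import combinations

def local_clustering(adj):
    """Local clustering coefficient c_i = 2*T_i / (k_i*(k_i - 1)),
    where T_i is the number of triangles containing vertex i,
    for an undirected graph given as {vertex: set of neighbours}."""
    c = {}
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            c[v] = 0.0   # fewer than two neighbours: no possible triangle
            continue
        # each edge between two neighbours of v closes one triangle
        triangles = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        c[v] = 2.0 * triangles / (k * (k - 1))
    return c

# toy graph: triangle 0-1-2 plus a pendant vertex 3 attached to 0
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
c = local_clustering(adj)
# c[1] == c[2] == 1.0; c[0] == 1/3 (one closed pair out of three); c[3] == 0.0
```

Averaging `c[v]` over vertices of the same degree gives the degree-dependent clustering coefficient the row describes.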
1411.6061 | 2101905790 | The structure of real-world social networks in large part determines the evolution of social phenomena, including opinion formation, diffusion of information and influence, and the spread of disease. Globally, network structure is characterized by features such as degree distribution, degree assortativity, and clustering coefficient. However, information about global structure is usually not available to each vertex. Instead, each vertex's knowledge is generally limited to the locally observable portion of the network consisting of the subgraph over its immediate neighbors. Such subgraphs, known as ego networks, have properties that can differ substantially from those of the global network. In this paper, we study the structural properties of ego networks and show how they relate to the global properties of networks from which they are derived. Through empirical comparisons and mathematical derivations, we show that structural features, similar to static attributes, suffer from paradoxes. We quantify the differences between global information about network structure and local estimates. This knowledge allows us to better identify and correct the biases arising from incomplete local information. | Being a third order correlation measure, clustering coefficient displays dependencies on both degree distribution and degree correlations or assortativity @cite_19 . The interplay between degree correlations and clustering is further complicated by the fact that each edge can form multiple triangles. It has been shown that negative degree correlations can limit the maximum value of @math , as triangles are less likely to appear with disassortative connections @cite_26 . | {
"cite_N": [
"@cite_19",
"@cite_26"
],
"mid": [
"1995052256",
"2016755074"
],
"abstract": [
"We study a class of models of correlated random networks in which vertices are characterized by hidden variables controlling the establishment of edges between pairs of vertices. We find analytical expressions for the main topological properties of these models as a function of the distribution of hidden variables and the probability of connecting vertices. The expressions obtained are checked by means of numerical simulations in a particular example. The general model is extended to describe a practical algorithm to generate random networks with an a priori specified correlation structure. We also present an extension of the class, to map nonequilibrium growing networks to networks with hidden variables that represent the time at which each vertex was introduced in the system.",
"We present a generator of random networks where both the degree-dependent clustering coefficient and the degree distribution are tunable. Following the same philosophy as in the configuration model, the degree distribution and the clustering coefficient for each class of nodes of degree @math are fixed ad hoc and a priori. The algorithm generates corresponding topologies by applying first a closure of triangles and second the classical closure of remaining free stubs. The procedure unveils an universal relation among clustering and degree-degree correlations for all networks, where the level of assortativity establishes an upper limit to the level of clustering. Maximum assortativity ensures no restriction on the decay of the clustering coefficient whereas disassortativity sets a stronger constraint on its behavior. Correlation measures in real networks are seen to observe this structural bound."
]
} |
1411.6061 | 2101905790 | The structure of real-world social networks in large part determines the evolution of social phenomena, including opinion formation, diffusion of information and influence, and the spread of disease. Globally, network structure is characterized by features such as degree distribution, degree assortativity, and clustering coefficient. However, information about global structure is usually not available to each vertex. Instead, each vertex's knowledge is generally limited to the locally observable portion of the network consisting of the subgraph over its immediate neighbors. Such subgraphs, known as ego networks, have properties that can differ substantially from those of the global network. In this paper, we study the structural properties of ego networks and show how they relate to the global properties of networks from which they are derived. Through empirical comparisons and mathematical derivations, we show that structural features, similar to static attributes, suffer from paradoxes. We quantify the differences between global information about network structure and local estimates. This knowledge allows us to better identify and correct the biases arising from incomplete local information. | By summing over the other index @math , we recover the average edge multiplicity of the network @math @cite_26 : The above equation and the concept of edge multiplicity will play an important role in our analysis of ego network structures. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2016755074"
],
"abstract": [
"We present a generator of random networks where both the degree-dependent clustering coefficient and the degree distribution are tunable. Following the same philosophy as in the configuration model, the degree distribution and the clustering coefficient for each class of nodes of degree @math are fixed ad hoc and a priori. The algorithm generates corresponding topologies by applying first a closure of triangles and second the classical closure of remaining free stubs. The procedure unveils an universal relation among clustering and degree-degree correlations for all networks, where the level of assortativity establishes an upper limit to the level of clustering. Maximum assortativity ensures no restriction on the decay of the clustering coefficient whereas disassortativity sets a stronger constraint on its behavior. Correlation measures in real networks are seen to observe this structural bound."
]
} |
1411.6228 | 2952218918 | We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches. | Labeling data for the segmentation task is difficult compared to labeling data for classification. For this reason, several weakly supervised object segmentation systems have been proposed in the past few years. For instance, Vezhnevets and Buhmann @cite_27 proposed an approach based on Semantic Texton Forest, derived in the context of MIL. However, the model fails to model relationships between superpixels. To model these relationships, @cite_5 introduced a graphical model -- named Multi-Image Model (MIM) -- to connect superpixels from all training images, based on their appearance similarity.
The unary potentials of the MIM are initialized with the output of @cite_27 . | {
"cite_N": [
"@cite_5",
"@cite_27"
],
"mid": [
"2158427031",
"2029731618"
],
"abstract": [
"We propose a novel method for weakly supervised semantic segmentation. Training images are labeled only by the classes they contain, not by their location in the image. On test images instead, the method predicts a class label for every pixel. Our main innovation is a multi-image model (MIM) - a graphical model for recovering the pixel labels of the training images. The model connects superpixels from all training images in a data-driven fashion, based on their appearance similarity. For generalizing to new test images we integrate them into MIM using a learned multiple kernel metric, instead of learning conventional classifiers on the recovered pixel labels. We also introduce an “objectness” potential, that helps separating objects (e.g. car, dog, human) from background classes (e.g. grass, sky, road). In experiments on the MSRC 21 dataset and the LabelMe subset of [18], our technique outperforms previous weakly supervised methods and achieves accuracy comparable with fully supervised methods.",
"We address the task of learning a semantic segmentation from weakly supervised data. Our aim is to devise a system that predicts an object label for each pixel by making use of only image level labels during training – the information whether a certain object is present or not in the image. Such coarse tagging of images is faster and easier to obtain as opposed to the tedious task of pixelwise labeling required in state of the art systems. We cast this task naturally as a multiple instance learning (MIL) problem. We use Semantic Texton Forest (STF) as the basic framework and extend it for the MIL setting. We make use of multitask learning (MTL) to regularize our solution. Here, an external task of geometric context estimation is used to improve on the task of semantic segmentation. We report experimental results on the MSRC21 and the very challenging VOC2007 datasets. On MSRC21 dataset we are able, by using 276 weakly labeled images, to achieve the performance of a supervised STF trained on pixelwise labeled training set of 56 images, which is a significant reduction in supervision needed."
]
} |
1411.6228 | 2952218918 | We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches. | In @cite_23 , the authors define a parametric family of structured models, where each model weights visual cues in a different way. A maximum expected agreement model selection principle evaluates the quality of a model from the family. An algorithm based on Gaussian processes is proposed to efficiently search for the best model for different visual cues. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2026581312"
],
"abstract": [
"We address the problem of weakly supervised semantic segmentation. The training images are labeled only by the classes they contain, not by their location in the image. On test images instead, the method must predict a class label for every pixel. Our goal is to enable segmentation algorithms to use multiple visual cues in this weakly supervised setting, analogous to what is achieved by fully supervised methods. However, it is difficult to assess the relative usefulness of different visual cues from weakly supervised training data. We define a parametric family of structured models, were each model weights visual cues in a different way. We propose a Maximum Expected Agreement model selection principle that evaluates the quality of a model from the family without looking at superpixel labels. Searching for the best model is a hard optimization problem, which has no analytic gradient and multiple local optima. We cast it as a Bayesian optimization problem and propose an algorithm based on Gaussian processes to efficiently solve it. Our second contribution is an Extremely Randomized Hashing Forest that represents diverse superpixel features as a sparse binary vector. It enables using appearance models of visual classes that are fast at training and testing and yet accurate. Experiments on the SIFT-flow dataset show a significant improvement over previous weakly supervised methods and even over some fully supervised methods."
]
} |
1411.6228 | 2952218918 | We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches. | More recently, @cite_22 proposed an algorithm that learns the distribution of spatially structural superpixel sets from image-level labels. This is achieved by first extracting graphlets (small graphs consisting of superpixels and encapsulating their spatial structure) from a given image. Labels from the training images are transferred to graphlets through a proposed manifold embedding algorithm. A Gaussian mixture model is then used to learn the distribution of the post-embedding graphlets, i.e., the vectors output from the graphlet embedding.
The inference is done by leveraging the learned GMM prior to measure the structure homogeneity of a test image. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2066606526"
],
"abstract": [
"Weakly-supervised image segmentation is a challenging problem with multidisciplinary applications in multimedia content analysis and beyond. It aims to segment an image by leveraging its image-level semantics (i.e., tags). This paper presents a weakly-supervised image segmentation algorithm that learns the distribution of spatially structural superpixel sets from image-level labels. More specifically, we first extract graphlets from a given image, which are small-sized graphs consisting of superpixels and encapsulating their spatial structure. Then, an efficient manifold embedding algorithm is proposed to transfer labels from training images into graphlets. It is further observed that there are numerous redundant graphlets that are not discriminative to semantic categories, which are abandoned by a graphlet selection scheme as they make no contribution to the subsequent segmentation. Thereafter, we use a Gaussian mixture model (GMM) to learn the distribution of the selected post-embedding graphlets (i.e., vectors output from the graphlet embedding). Finally, we propose an image segmentation algorithm, termed representative graphlet cut, which leverages the learned GMM prior to measure the structure homogeneity of a test image. Experimental results show that the proposed approach outperforms state-of-the-art weakly-supervised image segmentation methods, on five popular segmentation data sets. Besides, our approach performs competitively to the fully-supervised segmentation models."
]
} |
1411.6228 | 2952218918 | We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches. | In the last few years, convolutional networks have been widely used in the context of object recognition. A notable system is the one from Krizhevsky et al. @cite_8 , which performs very well on Imagenet. In @cite_1 , the authors built upon Krizhevsky's approach and showed that a model trained for classification on the Imagenet dataset can be used for classification on a different dataset (namely Pascal VOC) by taking into account the bounding box information. In a recent yet unpublished work @cite_19 , the authors adapt an Imagenet-trained CNN to the Pascal VOC classification task.
The network is fine-tuned on Pascal VOC, by modifying the cost function to include a final max-pooling layer. Similar to our aggregation layer, the max-pooling outputs a single image-level score for each of the classes. In contrast, (1) we do not limit ourselves to the Pascal VOC classification problem, but tackle the more challenging problem of segmentation, and (2) our model is not fine-tuned on Pascal VOC. | {
"cite_N": [
"@cite_19",
"@cite_1",
"@cite_8"
],
"mid": [
"1828658979",
"2161381512",
"2163605009"
],
"abstract": [
"Successful visual object recognition methods typically rely on training datasets containing lots of richly annotated images. Annotating object bounding boxes is both expensive and subjective. We describe a weakly supervised convolutional neural network (CNN) for object recognition that does not rely on detailed object annotation and yet returns 86.3 mAP on the Pascal VOC classification task, outperforming previous fully-supervised systems by a sizable margin. Despite the lack of bounding box supervision, the network produces maps that clearly localize the objects in cluttered scenes. We also show that adding fully supervised object examples to our weakly supervised setup does not increase the classification performance.",
"Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The suc- cess of CNNs is attributed to their ability to learn rich mid- level image representations as opposed to hand-designed low-level features used in other image classification meth- ods. Learning CNNs, however, amounts to estimating mil- lions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be effi- ciently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred rep- resentation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry."
]
} |
1411.6228 | 2952218918 | We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches. | In the same spirit, Girshick et al. @cite_3 showed that a model trained for classification on Imagenet can be adapted for object detection on Pascal VOC. The authors proposed to combine bottom-up techniques for generating detection region candidates with CNNs, achieving state-of-the-art performance in object detection. Based upon this work, @cite_21 derived a model that detects all instances of a category in an image and, for each instance, marks the pixels that belong to it.
Their model, entitled SDS (Simultaneous Detection and Segmentation), uses category-specific, top-down figure-ground predictions to refine bottom-up detection candidates. | {
"cite_N": [
"@cite_21",
"@cite_3"
],
"mid": [
"2950612966",
"2102605133"
],
"abstract": [
"We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top- down figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16 relative) over our baselines on SDS, a 5 point boost (10 relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn."
]
} |
1411.6228 | 2952218918 | We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches. | As for these existing state-of-the-art approaches, our system leverages features learned over the Imagenet classification dataset. However, our approach differs from theirs in some important aspects. Compared to @cite_3 @cite_1 , we consider the more challenging problem of object segmentation and do not use any information other than the image-level annotation. The authors of @cite_19 consider a weakly supervised scenario, but only deal with the classification problem. Compared to @cite_21 , we consider only the image-level annotation to infer the pixel-level one.
In that respect, we do not use any segmentation information (our model is not refined over the segmentation data either), nor bounding box annotation during the training period. One could argue that a classification dataset like Imagenet has somewhat already properly cropped the objects. While this might be true for certain objects, it is not the case for many images, and in any case the "bounding box" remains quite loose. | {
"cite_N": [
"@cite_19",
"@cite_21",
"@cite_1",
"@cite_3"
],
"mid": [
"1828658979",
"2950612966",
"2161381512",
"2102605133"
],
"abstract": [
"Successful visual object recognition methods typically rely on training datasets containing lots of richly annotated images. Annotating object bounding boxes is both expensive and subjective. We describe a weakly supervised convolutional neural network (CNN) for object recognition that does not rely on detailed object annotation and yet returns 86.3 mAP on the Pascal VOC classification task, outperforming previous fully-supervised systems by a sizable margin. Despite the lack of bounding box supervision, the network produces maps that clearly localize the objects in cluttered scenes. We also show that adding fully supervised object examples to our weakly supervised setup does not increase the classification performance.",
"We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top- down figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16 relative) over our baselines on SDS, a 5 point boost (10 relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work.",
"Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The suc- cess of CNNs is attributed to their ability to learn rich mid- level image representations as opposed to hand-designed low-level features used in other image classification meth- ods. Learning CNNs, however, amounts to estimating mil- lions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be effi- ciently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred rep- resentation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn."
]
} |
1411.5547 | 2059981773 | The explosive growth of content-on-the-move, such as video streaming to mobile devices, has propelled research on multimedia broadcast and multicast schemes. Multirate transmission strategies have been proposed as a means of delivering layered services to users experiencing different downlink channel conditions. In this paper, we consider point-to-multipoint layered service delivery across a generic cellular system and improve it by applying different random linear network coding approaches. We derive packet error probability expressions and use them as performance metrics in the formulation of resource-allocation frameworks. The aim of these frameworks is both the optimization of the transmission scheme and the minimization of the number of broadcast packets on each downlink channel, while offering service guarantees to a predetermined fraction of users. As a case of study, our proposed frameworks are then adapted to the LTE-A standard and the eMBMS technology. We focus on the delivery of a video service based on the H.264 SVC standard and demonstrate the advantages of layered network coding over multirate transmission. Furthermore, we establish that the choice of both the network coding technique and the resource-allocation method play a critical role on the network footprint, as well as the quality of each received video layer. | Since each layer of a service has a different importance level, Unequal Error Protection (UEP) can be used to link the level of importance of a service layer to the required level of protection. The UEP concept has been frequently applied to FEC schemes, see for example Reed-Solomon or low-density parity-check codes @cite_32 @cite_21 , but was later adapted to RLNC @cite_14 . This paper deals with two different UEP RLNC schemes @cite_14 : the Non-Overlapping Window RLNC (NOW-RLNC) and the Expanding Window RLNC (EW-RLNC).
Coded packets associated with a service layer @math are generated from the source packets of that layer only in the case of NOW-RLNC, or from the source packets of all layers up to and including layer @math in the case of EW-RLNC. | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_32"
],
"mid": [
"2062063921",
"2118078737",
"2014057681"
],
"abstract": [
"In this paper, we provide the performance analysis of unequal error protection (UEP) random linear coding (RLC) strategies designed for transmission of source messages containing packets of different importance over lossy packet erasure links. By introducing the probabilistic encoding framework, we first derive the general performance limits for the packet-level UEP coding strategies that encode the packets of each importance class of the source message independently (non-overlapping windowing strategy) or jointly (expanding windowing strategy). Then, we demonstrate that the general performance limits of both strategies are achievable by the probabilistic encoding over non-overlapping and expanding windows based on RLC and the Gaussian Elimination (GE) decoding. Throughout the paper, we present a number of examples that investigate the performance and optimization of code design parameters of the expanding window RLC strategy and compare it with the non-overlapping RLC strategy selected as a reference.",
"The scalable video coding extension of H.264 AVC is a current standardization project. This paper deals with unequal error protection (UEP) scheme for scalable video bitstream over packet-lossy networks using forward error correction (FEC). The proposed UEP scheme is developed by exploiting jointly the unequal importance existing both in temporal layers and quality layers of hierarchial scalable video bitstream. For efficient assignment of FEC codes, the proposed UEP scheme uses a simple and efficient performance metric, namely layer-weighted expected zone of error propagation (LW-EZEP). The LW-EZEP is adopted for quantifying the error propagation effect on video quality degradation from packet loss in temporal layers and in quality layers. Compared to other UEP schemes, the proposed UEP scheme demonstrates strong robustness and adaptation for variable channel status.",
"Layered video coding is capable of progressively refining the reconstructed video quality with the aid of multiple layers of unequal importance. When the base layer (BL) is corrupted or lost due to channel impairments, the enhancement layers (ELs) must be discarded by the video decoder, regardless whether they are perfectly decoded or not, which implies that the transmission power assigned to the ELs is wasted. To circumvent this problem, we proposed a bit-level inter-layer forward error correction (IL-FEC) scheme for layered video transmission in our previous work, which implanted the systematic bits of the BL into the systematic bits of the ELs using exclusive-OR operations (XOR). This allowed the receiver to exploit the implanted bits of the ELs for assisting the BL's decoding and hence improved the overall system performance of our IL-FEC aided layered video scheme. In this treatise, we find the specific FEC coding rates in a real-time on-line fashion for the sake optimizing the overall system performance. The proposed procedure is widely applicable to diverse wireless transceivers and FEC codecs. Our simulation results show that the proposed optimized IL-FEC system outperforms the traditional optimal UEP by about 1.9 dB of Eb N0 at a peak signal-to-noise ratio (PSNR) of 38 dB. Viewing the improvements in terms of the video quality, 3.3 dB of PSNR improvement is attained at an Eb N0 of 10 dB, when employing a recursive systematic convolutional (RSC) code."
]
} |
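The windowing distinction between the two UEP RLNC schemes in the row above can be sketched in a few lines. This is an illustrative toy over GF(2) (plain XOR), not the encoding used in the cited work; practical RLNC typically operates over larger fields such as GF(2^8), and all function names here are hypothetical.

```python
import random

def encode_packet(window, rng):
    """Random linear combination over GF(2) (XOR) of the packets in `window`.

    GF(2) keeps this sketch dependency-free; real deployments usually use
    larger fields such as GF(2^8) to reduce linearly dependent packets.
    """
    size = len(window[0])
    coded = bytearray(size)
    coeffs = [rng.randint(0, 1) for _ in window]
    if not any(coeffs):                       # avoid the useless all-zero combination
        coeffs[rng.randrange(len(coeffs))] = 1
    for c, pkt in zip(coeffs, window):
        if c:
            for i in range(size):
                coded[i] ^= pkt[i]
    return coeffs, bytes(coded)

def now_rlnc_window(layers, ell):
    # Non-overlapping window: only the source packets of layer `ell`.
    return layers[ell]

def ew_rlnc_window(layers, ell):
    # Expanding window: layer `ell` plus all more important layers (0..ell).
    return [p for layer in layers[: ell + 1] for p in layer]
```

A coded packet for the enhancement layer thus protects the base layer too under EW-RLNC, which is exactly the source of its unequal error protection.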
1411.5547 | 2059981773 | The explosive growth of content-on-the-move, such as video streaming to mobile devices, has propelled research on multimedia broadcast and multicast schemes. Multirate transmission strategies have been proposed as a means of delivering layered services to users experiencing different downlink channel conditions. In this paper, we consider point-to-multipoint layered service delivery across a generic cellular system and improve it by applying different random linear network coding approaches. We derive packet error probability expressions and use them as performance metrics in the formulation of resource-allocation frameworks. The aim of these frameworks is both the optimization of the transmission scheme and the minimization of the number of broadcast packets on each downlink channel, while offering service guarantees to a predetermined fraction of users. As a case study, our proposed frameworks are then adapted to the LTE-A standard and the eMBMS technology. We focus on the delivery of a video service based on the H.264 SVC standard and demonstrate the advantages of layered network coding over multirate transmission. Furthermore, we establish that the choice of both the network coding technique and the resource-allocation method plays a critical role in the network footprint, as well as the quality of each received video layer. | In contrast to @cite_1 @cite_18 @cite_20 , our work refers to a typical cellular network topology, where the network coding operations are performed by the source node. Furthermore, this paper aims to jointly optimize the network coding process and the transmission parameters. In this way, we can view the RLNC implementation as a component which is fully integrated into the link adaptation framework of our communication system. Our proposal differs from @cite_4 both in terms of the considered RLNC strategies and the nature of the delivered data streams.
More specifically, @cite_4 does not consider layered video services and, hence, does not investigate UEP RLNC strategies. Furthermore, the fact that the proposed scheme in @cite_4 has not been integrated into a more generic link adaptation framework hinders its extensibility to the case of PtM services. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_1",
"@cite_20"
],
"mid": [
"2167023551",
"2143834638",
"2150167333",
"2002887759"
],
"abstract": [
"We address the problem of prioritized video streaming over lossy overlay networks. We propose to exploit network path diversity via a novel randomized network coding (RNC) approach that provides unequal error protection (UEP) to the packets conveying the video content. We design a distributed receiver-driven streaming solution, where a client requests packets from the different priority classes from its neighbors in the overlay. Based on the received requests, a node in turn forwards combinations of the selected packets to the requesting peers. Choosing a network coding strategy at every node can be cast as an optimization problem that determines the rate allocation between the different packet classes such that the average distortion at the requesting peer is minimized. As the optimization problem has log-concavity properties, it can be solved with low complexity by an iterative algorithm. Our simulation results demonstrate that the proposed scheme respects the relative priorities of the different packet classes and achieves a graceful quality adaptation to network resource constraints. Therefore, our scheme substantially outperforms reference schemes such as baseline network coding techniques as well as solutions that employ rateless codes with built-in UEP properties. The performance evaluation provides additional evidence of the substantial robustness of the proposed scheme in a variety of transmission scenarios.",
"We formulate the problem of network-coding (NC)-based scheduling for media transmission to multiple users over a wireless-local-area-network-like or WiMAX-like network as a Markov decision process (MDP). NC is used to minimize the packet losses that resulted from unreliable wireless channel conditions, whereas the MDP is employed to find the optimal policy for transmissions of unequally important media packets. Based on this, a dynamic programming technique is used to give an optimal transmission policy. However, this dynamic programming technique quickly leads to computational intractability, even for scenarios with a moderate number of receivers. To address this problem, we further propose a simulation-based dynamic programming algorithm that has a much lower run time yet empirically converges quickly to the optimal solution.",
"In this paper, we study video streaming over wireless networks with network coding capabilities. We build upon recent work, which demonstrated that network coding can increase throughput over a broadcast medium, by mixing packets from different flows into a single packet, thus increasing the information content per transmission. Our key insight is that, when the transmitted flows are video streams, network codes should be selected so as to maximize not only the network throughput but also the video quality. We propose video-aware opportunistic network coding schemes that take into account both the decodability of network codes by several receivers and the importance and deadlines of video packets. Simulation results show that our schemes significantly improve both video quality and throughput. This work is a first step towards content-aware network coding.",
"Recent years have witnessed an explosive growth in multimedia streaming applications over the Internet. Notably, Content Delivery Networks (CDN) and Peer-to-Peer (P2P) networks have emerged as two effective paradigms for delivering multimedia contents over the Internet. One salient feature shared between these two networks is the inherent support for path diversity streaming where a receiver receives multiple streams simultaneously on different network paths as a result of having multiple senders. In this paper, we propose a network coding framework for efficient video streaming in CDNs and P2P networks in which, multiple servers peers are employed to simultaneously stream a video to a single receiver. We show that network coding techniques can (a) eliminate the need for tight synchronization between the senders, (b) be integrated easily with TCP, and (c) reduce server's storage in CDN settings. Importantly, we propose the Hierarchical Network Coding (HNC) technique to be used with scalable video bit stream to combat bandwidth fluctuation on the Internet. Simulations demonstrate that under certain scenarios, our proposed network coding techniques can result in bandwidth saving up to 60 over the traditional schemes."
]
} |
1411.5547 | 2059981773 | The explosive growth of content-on-the-move, such as video streaming to mobile devices, has propelled research on multimedia broadcast and multicast schemes. Multirate transmission strategies have been proposed as a means of delivering layered services to users experiencing different downlink channel conditions. In this paper, we consider point-to-multipoint layered service delivery across a generic cellular system and improve it by applying different random linear network coding approaches. We derive packet error probability expressions and use them as performance metrics in the formulation of resource-allocation frameworks. The aim of these frameworks is both the optimization of the transmission scheme and the minimization of the number of broadcast packets on each downlink channel, while offering service guarantees to a predetermined fraction of users. As a case study, our proposed frameworks are then adapted to the LTE-A standard and the eMBMS technology. We focus on the delivery of a video service based on the H.264 SVC standard and demonstrate the advantages of layered network coding over multirate transmission. Furthermore, we establish that the choice of both the network coding technique and the resource-allocation method plays a critical role in the network footprint, as well as the quality of each received video layer. | With regard to the coding schemes considered, unlike @cite_19 and @cite_29 , this work focuses on NOW- and EW-RLNC schemes suitable for layered service transmissions. In addition, the authors of @cite_19 @cite_29 did not optimize the bit length of the source packets used to represent the transmitted layered service; the source packet bit length is given a priori. This paper proposes a model for optimizing the source packet bit length to fit the transmission constraints of the communication standard in use.
Since the bit length of source packets is constrained to be smaller than or equal to a maximum target value, the number of source packets representing a layered service can be upper-bounded. Hence, this work can represent the same layered service with a smaller number of source packets than what is proposed in @cite_29 . We remark that the number of source packets has a significant impact on the computational complexity of the RLNC decoding phase @cite_25 . | {
"cite_N": [
"@cite_19",
"@cite_29",
"@cite_25"
],
"mid": [
"1970353287",
"2134323462",
"636730446"
],
"abstract": [
"Recent trends witness the shift of the 3GPP Long Term Evolution - Advanced (LTE-A) radio access network (RAN) architecture from a traditional macro-cellular layout towards smaller base stations moving closer to end users. The evolved LTE-A RAN offers rich environment for multi-point and multi-hop cooperation and coordination resulting in increased capacity and more predictable channel conditions between heterogeneous base stations and end users. While these opportunities are recently well investigated at the physical layer through various cooperative multi-point (CoMP) schemes, upper layer protocols preserve the design proposed for macro-cellular single-hop data delivery. In this paper, we address this issue by proposing and investigating in detail a cooperative RAN-wide MAC layer protocol based on random network coding (RNC) that is designed specifically for reliable and flexible data delivery over the evolved LTE-A RAN. The proposed RNC-based MAC protocol (MAC-RNC) is evaluated and compared with the existing HARQ-based (MAC-HARQ) protocol in various LTE-A RAN layouts using a customized packet-based link-level simulator based on Finite-State Markov Chain (FSMC) channel models. Our results show that the MAC-RNC protocol introduces simplicity and flexibility required for future LTE-A RANs, while preserving or improving the performance of the MAC-HARQ protocol in traditional single-point single-hop macro-cellular scenarios.",
"Video service delivery over 3GPP Long Term Evolution-Advanced (LTE-A) networks is gaining momentum with the adoption of the evolved Multimedia Broadcast Multicast Service (eMBMS). In this paper, we address the challenge of optimizing the radio resource allocation process so that heterogeneous groups of users, according to their propagation conditions, can receive layered video streams at predefined and progressively decreasing service levels matched to respective user groups. A key aspect of the proposed system model is that video streams are delivered as eMBMS flows that utilize the random linear network coding (NC) principle. Furthermore, the transmission rate and NC scheme of each eMBMS flow are jointly optimized. The simulation results show that the proposed strategy can exploit user heterogeneity to optimize the allocated radio resources while achieving desired service levels for different user groups.",
"Network coding is a field of information and coding theory and is a method of attaining maximum information flow in a network. This book is an ideal introduction for the communications and network engineer, working in research and development, who needs an intuitive introduction to network coding and to the increased performance and reliability it offers in many applications. This book is an ideal introduction for the research and development communications and network engineer who needs an intuitive introduction to the theory and wishes to understand the increased performance and reliability it offers over a number of applications. This title provides a clear and intuitive introduction to network coding, avoiding difficult mathematics, which does not require a background in information theory. It lays emphasis on how network coding techniques can be implemented, using a wide range of applications in communications and network engineering. It provides a detailed coverage on content distribution networks, peer-to-peer networks, overlay networks, streaming and multimedia applications, storage networks, network security and military networks, reliable communication, wireless networks, delay-tolerant and disruption-tolerant networks, cellular and ad hoc networks (including LTE and WiMAX), and connections with data compression and compressed sensing. It is edited and contributed by the world's leading experts."
]
} |
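The chain of effects described in the row above — packet bit length caps the source packet count, which in turn drives RLNC decoding cost — can be made concrete with a back-of-the-envelope sketch. Function names and the cubic Gaussian-elimination cost model are illustrative assumptions; constant factors and field-size effects are ignored.

```python
import math

def num_source_packets(service_bits, packet_bits):
    # Number of source packets needed when each source packet carries at
    # most `packet_bits` bits of the layered service.
    return math.ceil(service_bits / packet_bits)

def ge_decoding_cost(k):
    # Rough Gaussian-elimination decoding cost for k source packets:
    # O(k^3) field operations, constants ignored.
    return k ** 3
```

For a 1 Mbit service, doubling the packet size from 1024 to 2048 bits roughly halves the packet count and cuts the nominal decoding cost by about 8x, which is why bounding the packet count matters for receiver complexity.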
1411.5731 | 1875160599 | Images have become one of the most popular types of media through which users convey their emotions within online social networks. Although a vast amount of research is devoted to sentiment analysis of textual data, there has been very limited work that focuses on analyzing the sentiment of image data. In this work, we propose a novel visual sentiment prediction framework that performs image understanding with Deep Convolutional Neural Networks (CNN). Specifically, the proposed sentiment prediction framework performs transfer learning from a CNN with millions of parameters, which is pre-trained on large-scale data for object recognition. Experiments conducted on two real-world datasets from Twitter and Tumblr demonstrate the effectiveness of the proposed visual sentiment analysis framework. | While research on sentiment prediction from visual content lags far behind, extensive research has been conducted on opinion mining and sentiment analysis of text; a comprehensive survey can be found in @cite_8 . Previous work on visual sentiment analysis has mostly been conducted to develop mid-level attributes for selecting features from low-level image features. @cite_3 generated mid-level attributes from scene and facial expression datasets to describe visual phenomena from a scene perspective, and incorporated facial emotion detectors when faces are present in the image. @cite_0 built a large-scale Visual Sentiment Ontology based on psychological theories and web mining, and trained detectors of selected visual concepts for sentiment analysis. @cite_19 evaluated the performance of different low-level descriptors and mid-level attributes as visual features for sentiment classification and showed that semantic-level clues are effective for predicting emotions.
The major drawback of those approaches is that the training process requires substantial domain knowledge of psychology or linguistics to define the mid-level attributes, as well as human intervention to fine-tune the sentiment prediction results. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_3",
"@cite_8"
],
"mid": [
"2075456404",
"2188687388",
"2046682605",
""
],
"abstract": [
"We address the challenge of sentiment analysis from visual content. In contrast to existing methods which infer sentiment or emotion directly from visual low-level features, we propose a novel approach based on understanding of the visual concepts that are strongly related to sentiments. Our key contribution is two-fold: first, we present a method built upon psychological theories and web mining to automatically construct a large-scale Visual Sentiment Ontology (VSO) consisting of more than 3,000 Adjective Noun Pairs (ANP). Second, we propose SentiBank, a novel visual concept detector library that can be used to detect the presence of 1,200 ANPs in an image. The VSO and SentiBank are distinct from existing work and will open a gate towards various applications enabled by automatic sentiment analysis. Experiments on detecting sentiment of image tweets demonstrate significant improvement in detection accuracy when comparing the proposed SentiBank based predictors with the text-based approaches. The effort also leads to a large publicly available resource consisting of a visual sentiment ontology, a large detector library, and the training testing benchmark for visual sentiment analysis.",
"User-generated video collections are expanding rapidly in recent years, and systems for automatic analysis of these collections are in high demands. While extensive research efforts have been devoted to recognizing semantics like \"birthday party\" and \"skiing\", little attempts have been made to understand the emotions carried by the videos, e.g., \"joy\" and \"sadness\". In this paper, we propose a comprehensive computational framework for predicting emotions in user-generated videos. We first introduce a rigorously designed dataset collected from popular video-sharing websites with manual annotations, which can serve as a valuable benchmark for future research. A large set of features are extracted from this dataset, ranging from popular low-level visual descriptors, audio features, to high-level semantic attributes. Results of a comprehensive set of experiments indicate that combining multiple types of features--such as the joint use of the audio and visual clues--is important, and attribute features such as those containing sentiment-level semantics are very effective.",
"Visual content analysis has always been important yet challenging. Thanks to the popularity of social networks, images become an convenient carrier for information diffusion among online users. To understand the diffusion patterns and different aspects of the social images, we need to interpret the images first. Similar to textual content, images also carry different levels of sentiment to their viewers. However, different from text, where sentiment analysis can use easily accessible semantic and context information, how to extract and interpret the sentiment of an image remains quite challenging. In this paper, we propose an image sentiment prediction framework, which leverages the mid-level attributes of an image to predict its sentiment. This makes the sentiment classification results more interpretable than directly using the low-level features of an image. To obtain a better performance on images containing faces, we introduce eigenface-based facial expression detection as an additional mid-level attributes. An empirical study of the proposed framework shows improved performance in terms of prediction accuracy. More importantly, by inspecting the prediction results, we are able to discover interesting relationships between mid-level attribute and image sentiment.",
""
]
} |
1411.5731 | 1875160599 | Images have become one of the most popular types of media through which users convey their emotions within online social networks. Although a vast amount of research is devoted to sentiment analysis of textual data, there has been very limited work that focuses on analyzing the sentiment of image data. In this work, we propose a novel visual sentiment prediction framework that performs image understanding with Deep Convolutional Neural Networks (CNN). Specifically, the proposed sentiment prediction framework performs transfer learning from a CNN with millions of parameters, which is pre-trained on large-scale data for object recognition. Experiments conducted on two real-world datasets from Twitter and Tumblr demonstrate the effectiveness of the proposed visual sentiment analysis framework. | The estimation of CNN parameters requires a very large amount of annotated data. There has been extensive work on transfer learning across different domains. @cite_2 reported success in transferring deep representations to small datasets such as CIFAR and MNIST. Recent studies @cite_6 @cite_7 show that the parameters of a CNN trained on a large-scale dataset such as ILSVRC can be transferred to object recognition and scene classification tasks when data is limited, resulting in better performance than traditional hand-engineered representations. Our work is strongly motivated by @cite_7 ; we apply the concept of transferring a deep CNN trained for large-scale image classification to the problem of sentiment prediction. | {
"cite_N": [
"@cite_7",
"@cite_6",
"@cite_2"
],
"mid": [
"2161381512",
"2953360861",
"2950789693"
],
"abstract": [
"Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The suc- cess of CNNs is attributed to their ability to learn rich mid- level image representations as opposed to hand-designed low-level features used in other image classification meth- ods. Learning CNNs, however, amounts to estimating mil- lions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be effi- ciently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred rep- resentation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.",
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"We consider the problem of building high- level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 bil- lion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a clus- ter with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental re- sults reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bod- ies. Starting with these learned features, we trained our network to obtain 15.8 accu- racy in recognizing 20,000 object categories from ImageNet, a leap of 70 relative im- provement over the previous state-of-the-art."
]
} |
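The transfer-learning recipe discussed in this row — keep the pre-trained layers frozen, train only a new head on the small target dataset — can be illustrated with a dependency-free toy. Here a fixed random projection with ReLU stands in for the frozen convolutional layers (in the @cite_7-style pipelines these would be actual ILSVRC-trained weights), and only a logistic-regression head is trained. The feature extractor, dimensions, and training loop are illustrative assumptions, not the cited papers' implementation.

```python
import math
import random

rng = random.Random(0)

# Frozen feature extractor standing in for the pre-trained convolutional
# layers: a fixed random projection followed by ReLU. These weights are
# never updated, mirroring the "transfer" part of the recipe.
W_FIXED = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(16)]

def features(x):
    return [max(sum(w * xi for w, xi in zip(row, x)), 0.0) for row in W_FIXED]

def train_head(data, lr=0.05, epochs=300):
    """Train only a new logistic-regression head on the frozen features."""
    w, b = [0.0] * len(W_FIXED), 0.0
    for _ in range(epochs):
        for x, y in data:
            f = features(x)
            p = 1.0 / (1.0 + math.exp(-(sum(wi * fi for wi, fi in zip(w, f)) + b)))
            g = p - y                         # gradient of the logistic loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(x, w, b):
    f = features(x)
    return sum(wi * fi for wi, fi in zip(w, f)) + b > 0.0
```

The point of the design is that only the head's 17 parameters are estimated from the small target dataset, while the representation comes "for free" from the (here simulated) large source task.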
1411.5654 | 2122180654 | In this paper we explore the bi-directional mapping between images and their sentence-based descriptions. We propose learning this mapping using a recurrent neural network. Unlike previous approaches that map both sentences and images to a common embedding, we enable the generation of novel sentences given an image. Using the same model, we can also reconstruct the visual features associated with an image given its visual description. We use a novel recurrent visual memory that automatically learns to remember long-term visual concepts to aid in both sentence generation and visual feature reconstruction. We evaluate our approach on several tasks. These include sentence generation, sentence retrieval and image retrieval. State-of-the-art results are shown for the task of generating novel image descriptions. When compared to human generated captions, our automatically generated captions are preferred by humans over @math of the time. Results are better than or comparable to state-of-the-art results on the image and sentence retrieval tasks for methods using similar visual features. | The task of building a visual memory lies at the heart of two long-standing AI-hard problems: grounding natural language symbols in the physical world and semantically understanding the content of an image. Whereas learning the mapping between image patches and single text labels remains a popular topic in computer vision @cite_1 @cite_5 @cite_34 , there is a growing interest in using entire sentence descriptions together with pixels to learn joint embeddings @cite_32 @cite_24 @cite_18 @cite_13 . Viewing corresponding text and images as correlated, KCCA @cite_32 is a natural option for discovering the shared feature space. However, given the highly non-linear mapping between the two, finding a generic distance metric based on shallow representations can be extremely difficult.
Recent papers seek better objective functions that directly optimize the ranking @cite_32 , directly adopt pre-trained representations @cite_24 to simplify the learning, or use a combination of the two @cite_18 @cite_13 . | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_32",
"@cite_24",
"@cite_5",
"@cite_34",
"@cite_13"
],
"mid": [
"2953276893",
"",
"68733909",
"2149557440",
"2123024445",
"2102605133",
"92662927"
],
"abstract": [
"We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit.",
"",
"The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.",
"Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.",
"Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18% across thousands of novel labels never seen by the visual model.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"This paper studies the problem of associating images with descriptive sentences by embedding them in a common latent space. We are interested in learning such embeddings from hundreds of thousands or millions of examples. Unfortunately, it is prohibitively expensive to fully annotate this many training images with ground-truth sentences. Instead, we ask whether we can learn better image-sentence embeddings by augmenting small fully annotated training sets with millions of images that have weak and noisy annotations (titles, tags, or descriptions). After investigating several state-of-the-art scalable embedding methods, we introduce a new algorithm called Stacked Auxiliary Embedding that can successfully transfer knowledge from millions of weakly annotated images to improve the accuracy of retrieval-based image description."
]
} |
1411.5654 | 2122180654 | In this paper we explore the bi-directional mapping between images and their sentence-based descriptions. We propose learning this mapping using a recurrent neural network. Unlike previous approaches that map both sentences and images to a common embedding, we enable the generation of novel sentences given an image. Using the same model, we can also reconstruct the visual features associated with an image given its visual description. We use a novel recurrent visual memory that automatically learns to remember long-term visual concepts to aid in both sentence generation and visual feature reconstruction. We evaluate our approach on several tasks. These include sentence generation, sentence retrieval and image retrieval. State-of-the-art results are shown for the task of generating novel image descriptions. When compared to human generated captions, our automatically generated captions are preferred by humans over @math of the time. Results are better than or comparable to state-of-the-art results on the image and sentence retrieval tasks for methods using similar visual features. | With a good distance metric, it is possible to perform tasks like bi-directional image-sentence retrieval. However, in many scenarios it is also desired to generate novel image descriptions and to hallucinate a scene given a sentence description. Numerous papers have explored the area of generating novel image descriptions @cite_23 @cite_31 @cite_20 @cite_2 @cite_10 @cite_37 @cite_30 @cite_26 . These papers use various approaches to generate text, such as using pre-trained object detectors with template-based sentence generation @cite_31 @cite_23 @cite_20 . Retrieved sentences may be combined to form novel descriptions @cite_30 . Recently, purely statistical models have been used to generate sentences based on sampling @cite_26 or recurrent neural networks @cite_17 . While @cite_17 also uses a RNN, their model is significantly different from our model. 
Specifically, their RNN does not attempt to reconstruct the visual features, and is more similar to the contextual RNN of @cite_7 . For synthesizing images from sentences, the recent paper by Zitnick et al. @cite_35 uses abstract clip art images to learn the visual interpretation of sentences. Relation tuples are extracted from the sentences and a conditional random field is used to model the visual scene. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_37",
"@cite_26",
"@cite_7",
"@cite_23",
"@cite_2",
"@cite_31",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"2149172860",
"1996418862",
"956551720",
"2171361956",
"1999965501",
"1897761818",
"1987835821",
"1858383477",
"8316075",
"2066134726",
"2159243025"
],
"abstract": [
"We present a holistic data-driven approach to image description generation, exploiting the vast amount of (noisy) parallel image data and associated natural language descriptions available on the web. More specifically, given a query image, we retrieve existing human-composed phrases used to describe visually similar images, then selectively combine those phrases to generate a novel description for the query image. We cast the generation process as constraint optimization problems, collectively incorporating multiple interconnected aspects of language composition for content planning, surface realization and discourse structure. Evaluation by human annotators indicates that our final system generates more semantically correct and linguistically appealing descriptions than two nontrivial baselines.",
"Relating visual information to its linguistic semantic meaning remains an open and challenging area of research. The semantic meaning of images depends on the presence of objects, their attributes and their relations to other objects. But precisely characterizing this dependence requires extracting complex visual information from an image, which is in general a difficult and yet unsolved problem. In this paper, we propose studying semantic information in abstract images created from collections of clip art. Abstract images provide several advantages. They allow for the direct study of how to infer high-level semantic information, since they remove the reliance on noisy low-level object, attribute and relation detectors, or the tedious hand-labeling of images. Importantly, abstract images also allow the ability to generate sets of semantically similar scenes. Finding analogous sets of semantically similar real images would be nearly impossible. We create 1,002 sets of 10 semantically similar abstract scenes with corresponding written descriptions. We thoroughly analyze this dataset to discover semantically important features, the relations of words to visual features and methods for measuring semantic similarity.",
"In this paper, we address the problem of automatically generating human-like descriptions for unseen images, given a collection of images and their corresponding human-generated descriptions. Previous attempts for this task mostly rely on visual clues and corpus statistics, but do not take much advantage of the semantic information inherent in the available image descriptions. Here, we present a generic method which benefits from all these three sources (i.e., visual clues, corpus statistics and available descriptions) simultaneously, and is capable of constructing novel descriptions. Our approach works on syntactically and linguistically motivated phrases extracted from the human descriptions. Experimental evaluations demonstrate that our formulation mostly generates lucid and semantically correct descriptions, and significantly outperforms the previous methods on automatic evaluation metrics. One of the significant advantages of our approach is that we can generate multiple interesting descriptions for an image. Unlike any previous work, we also test the applicability of our method on a large dataset containing complex images with rich descriptions.",
"We introduce two multimodal neural language models: models of natural language that can be conditioned on other modalities. An image-text multimodal neural language model can be used to retrieve images given complex sentence queries, retrieve phrase descriptions given image queries, as well as generate text conditioned on images. We show that in the case of image-text modelling we can jointly learn word representations and image features by training our models together with a convolutional network. Unlike many of the existing methods, our approach can generate sentence descriptions for images without the use of templates, structured prediction, and or syntactic trees. While we focus on image-text modelling, our algorithms can be easily applied to other modalities such as audio.",
"Recurrent neural network language models (RNNLMs) have recently demonstrated state-of-the-art performance across a variety of tasks. In this paper, we improve their performance by providing a contextual real-valued input vector in association with each word. This vector is used to convey contextual information about the sentence being modeled. By performing Latent Dirichlet Allocation using a block of preceding text, we achieve a topic-conditioned RNNLM. This approach has the key advantage of avoiding the data fragmentation associated with building multiple topic models on different data subsets. We report perplexity results on the Penn Treebank data, where we achieve a new state-of-the-art. We further apply the model to the Wall Street Journal speech recognition task, where we observe improvements in word-error-rate.",
"Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned using data. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche.",
"In this paper, we present an image parsing to text description (I2T) framework that generates text descriptions of image and video content based on image understanding. The proposed I2T framework follows three steps: 1) input images (or video frames) are decomposed into their constituent visual patterns by an image parsing engine, in a spirit similar to parsing sentences in natural language; 2) the image parsing results are converted into semantic representation in the form of Web ontology language (OWL), which enables seamless integration with general knowledge bases; and 3) a text generation engine converts the results from previous steps into semantically meaningful, human readable, and query-able text reports. The centerpiece of the I2T framework is an and-or graph (AoG) visual knowledge representation, which provides a graphical representation serving as prior knowledge for representing diverse visual patterns and provides top-down hypotheses during the image parsing. The AoG embodies vocabularies of visual elements including primitives, parts, objects, scenes as well as a stochastic image grammar that specifies syntactic relations (i.e., compositional) and semantic relations (e.g., categorical, spatial, temporal, and functional) between these visual elements. Therefore, the AoG is a unified model of both categorical and symbolic representations of visual knowledge. The proposed I2T framework has two objectives. First, we use a semiautomatic method to parse images from the Internet in order to build an AoG for visual knowledge representation. Our goal is to make the parsing process more and more automatic using the learned AoG model. Second, we use automatic methods to parse image/video in specific domains and generate text reports that are useful for real-world applications. 
In the case studies at the end of this paper, we demonstrate two automatic I2T systems: a maritime and urban scene video surveillance system and a real-time automatic driving scene understanding system.",
"We propose a sentence generation strategy that describes images by predicting the most likely nouns, verbs, scenes and prepositions that make up the core sentence structure. The input are initial noisy estimates of the objects and scenes detected in the image using state of the art trained detectors. As predicting actions from still images directly is unreliable, we use a language model trained from the English Gigaword corpus to obtain their estimates; together with probabilities of co-located nouns, scenes and prepositions. We use these estimates as parameters on a HMM that models the sentence generation process, with hidden nodes as sentence components and image detections as the emissions. Experimental results show that our strategy of combining vision and language produces readable and descriptive sentences compared to naive strategies that use vision alone.",
"This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date.",
"We posit that visually descriptive language offers computer vision researchers both information about the world, and information about how people describe the world. The potential benefit from this source is made more significant due to the enormous amount of language data easily available today. We present a system to automatically generate natural language descriptions from images that exploits both statistics gleaned from parsing large quantities of text data and recognition algorithms from computer vision. The system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.",
"In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12, Flickr 8K, and Flickr 30K). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval."
]
} |
1411.5654 | 2122180654 | In this paper we explore the bi-directional mapping between images and their sentence-based descriptions. We propose learning this mapping using a recurrent neural network. Unlike previous approaches that map both sentences and images to a common embedding, we enable the generation of novel sentences given an image. Using the same model, we can also reconstruct the visual features associated with an image given its visual description. We use a novel recurrent visual memory that automatically learns to remember long-term visual concepts to aid in both sentence generation and visual feature reconstruction. We evaluate our approach on several tasks. These include sentence generation, sentence retrieval and image retrieval. State-of-the-art results are shown for the task of generating novel image descriptions. When compared to human generated captions, our automatically generated captions are preferred by humans over @math of the time. Results are better than or comparable to state-of-the-art results on the image and sentence retrieval tasks for methods using similar visual features. | There are numerous papers using recurrent neural networks for language modeling @cite_22 @cite_8 @cite_7 @cite_26 . We build most directly on top of @cite_22 @cite_8 @cite_7 that use RNNs to learn word context. Several models use other sources of contextual information to help inform the language model @cite_7 @cite_26 . Despite their success, RNNs still have difficulty capturing long-range relationships in sequential modeling @cite_28 . One solution is Long Short-Term Memory (LSTM) networks @cite_12 @cite_4 @cite_26 , which use "gates" to control gradient back-propagation explicitly and allow for the learning of long-term interactions.
However, the main focus of this paper is to show that the hidden layers learned by "translating" between multiple modalities can already discover rich structures in the data and learn long distance relations in an automatic, data-driven manner. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_12"
],
"mid": [
"2171361956",
"196214544",
"100623710",
"1999965501",
"179875071",
"",
""
],
"abstract": [
"We introduce two multimodal neural language models: models of natural language that can be conditioned on other modalities. An image-text multimodal neural language model can be used to retrieve images given complex sentence queries, retrieve phrase descriptions given image queries, as well as generate text conditioned on images. We show that in the case of image-text modelling we can jointly learn word representations and image features by training our models together with a convolutional network. Unlike many of the existing methods, our approach can generate sentence descriptions for images without the use of templates, structured prediction, and or syntactic trees. While we focus on image-text modelling, our algorithms can be easily applied to other modalities such as audio.",
"Recurrent Neural Networks (RNNs) are very powerful sequence models that do not enjoy widespread use because it is extremely difficult to train them properly. Fortunately, recent advances in Hessian-free optimization have been able to overcome the difficulties associated with training RNNs, making it possible to apply them successfully to challenging sequence problems. In this paper we demonstrate the power of RNNs trained with the new Hessian-Free optimizer (HF) by applying them to character-level language modeling tasks. The standard RNN architecture, while effective, is not ideally suited for such tasks, so we introduce a new RNN variant that uses multiplicative (or \"gated\") connections which allow the current input character to determine the transition matrix from one hidden state vector to the next. After training the multiplicative RNN with the HF optimizer for five days on 8 high-end Graphics Processing Units, we were able to surpass the performance of the best previous single method for character-level language modeling – a hierarchical non-parametric sequence model. To our knowledge this represents the largest recurrent neural network application to date.",
"A central goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on several methods to speed-up both training and probability computation, as well as comparative experiments to evaluate the improvements brought by these techniques. We finally describe the incorporation of this new language model into a state-of-the-art speech recognizer of conversational speech.",
"Recurrent neural network language models (RNNLMs) have recently demonstrated state-of-the-art performance across a variety of tasks. In this paper, we improve their performance by providing a contextual real-valued input vector in association with each word. This vector is used to convey contextual information about the sentence being modeled. By performing Latent Dirichlet Allocation using a block of preceding text, we achieve a topic-conditioned RNNLM. This approach has the key advantage of avoiding the data fragmentation associated with building multiple topic models on different data subsets. We report perplexity results on the Penn Treebank data, where we achieve a new state-of-the-art. We further apply the model to the Wall Street Journal speech recognition task, where we observe improvements in word-error-rate.",
"A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition",
"",
""
]
} |
1411.5556 | 1864645954 | The paper concerns the general linear one-dimensional second-order hyperbolic equation ∂_t² u − a²(x,t) ∂_x² u + a_1(x,t) ∂_t u + a_2(x,t) ∂_x u + a_3(x,t) u = f(x,t), x ∈ (0,1), with periodic conditions in time and Robin boundary conditions in space. Under a non-resonance condition (formulated in terms of the coefficients a, a_1, and a_2) ruling out the small divisors effect, we prove the Fredholm alternative. Moreover, we show that the solutions have higher regularity if the data have higher regularity and if additional non-resonance conditions are fulfilled. Finally, we state a result about smooth dependence on the data, where perturbations of the coefficient a lead to the known loss of smoothness while perturbations of the coefficients a_1, a_2, and a_3 do not. | In @cite_13 we applied our results from @cite_1 to prove a Hopf bifurcation theorem for semilinear hyperbolic systems. | {
"cite_N": [
"@cite_1",
"@cite_13"
],
"mid": [
"2315091680",
"2023763411"
],
"abstract": [
"This paper concerns linear first-order hyperbolic systems in one space dimension of the type @math with periodicity conditions in time and reflection boundary conditions in space. We state a non-resonance condition (depending on the coefficients @math and @math and the boundary reflection coefficients), which implies Fredholm solvability of the problem in the space of continuous functions. Further, we state one more non-resonance condition (depending also on @math ), which implies @math -solution regularity. Moreover, we give examples showing that both non-resonance conditions cannot be dropped, in general. Those conditions are robust under small perturbations of the problem data. Our results work for many non-strictly hyperbolic systems, but they are new even in the case of strict hyperbolicity.",
"We consider boundary value problems for semilinear hyperbolic systems of the type ∂_t u_j + a_j(x, λ) ∂_x u_j + b_j(x, λ, u) = 0, x ∈ (0, 1), j = 1, …, n, with smooth coefficient functions a_j and b_j such that b_j(x, λ, 0) = 0 for all x ∈ [0, 1], λ ∈ ℝ, and j = 1, …, n. We state conditions for Hopf bifurcation, i.e., for existence, local uniqueness (up to phase shifts), smoothness and smooth dependence on λ of time-periodic solutions bifurcating from the zero stationary solution. Furthermore, we derive a formula which determines the bifurcation direction. The proof is done by means of a Liapunov–Schmidt reduction procedure. For this purpose, Fredholm properties of the linearized system and implicit function theorem techniques are used."
]
} |
1411.5657 | 2133131808 | We analyze the detection and classification of singularities of functions @math , where @math and @math . It will be shown how the set @math can be extracted by a continuous shearlet transform associated with compactly supported shearlets. Furthermore, if @math is a @math dimensional piecewise smooth manifold with @math or @math , we will classify smooth and non-smooth components of @math . This improves previous results given for shearlet systems with a certain band-limited generator, since the estimates we derive are uniform. Moreover, we will show that our bounds are optimal. Along the way, we also obtain novel results on the characterization of wavefront sets in @math dimensions by compactly supported shearlets. Finally, geometric properties of @math such as curvature are described in terms of the continuous shearlet transform of @math . | Historically first, it has been shown in @cite_18 for a special shearlet system (and later extended in @cite_2 ), that shearlets are able to detect the wavefront set of a distribution in 2D. In terms of images, this implies that the shearlet transform can distinguish between points corresponding to smooth or discontinuous parts of the image in a sense that also incorporates the direction of the discontinuity. This allows to deal with edges in a geometrically more meaningful way. A particularly beautiful application can be found in @cite_17 , in which such results are utilized to separate crossing singularities with different orientations. | {
"cite_N": [
"@cite_18",
"@cite_17",
"@cite_2"
],
"mid": [
"1990805796",
"1970224632",
"2025423598"
],
"abstract": [
"It is known that the Continuous Wavelet Transform of a distribution f decays rapidly near the points where f is smooth, while it decays slowly near the irregular points. This property allows the identification of the singular support of f. However, the Continuous Wavelet Transform is unable to describe the geometry of the set of singularities of f and, in particular, identify the wavefront set of a distribution. In this paper, we employ the same framework of affine systems which is at the core of the construction of the wavelet transform to introduce the Continuous Shearlet Transform. This is defined by SH_ψ f(a, s, t) = ⟨f, ψ_{ast}⟩, where the analyzing elements ψ_{ast} are dilated and translated copies of a single generating function ψ. The dilation matrices form a two-parameter matrix group consisting of products of parabolic scaling and shear matrices. We show that the elements ψ_{ast} form a system of smooth functions at continuous scales a > 0, locations t ∈ ℝ², and oriented along lines of slope s ∈ ℝ in the frequency domain. We then prove that the Continuous Shearlet Transform does exactly resolve the wavefront set of a distribution f.",
"In many x-ray images, we find crossing structures, for example the rib bones of a thorax image. Hence, many edges in such images intersect with other edges. We propose a new algorithm, which simultaneously detects and separates such crossing edges. We prove the functionality of the algorithm under the geometric assumption that the edges possess different orientations at the points of intersection. To this end, we model edges as singularities of distributions with a submanifold structure and define the edge orientation to be the orientation of the corresponding singularity in the sense of microlocal analysis.",
"In recent years directional multiscale transformations like the curvelet- or shearlet transformation have gained considerable attention. The reason for this is that these transforms are—unlike more traditional transforms like wavelets—able to efficiently handle data with features along edges. The main result in Kutyniok and Labate (Trans. Am. Math. Soc. 361:2719–2754, 2009) confirming this property for shearlets is due to Kutyniok and Labate where it is shown that for very special functions ψ with frequency support in a compact conical wedge the decay rate of the shearlet coefficients of a tempered distribution f with respect to the shearlet ψ can resolve the wavefront set of f. We demonstrate that the same result can be verified under much weaker assumptions on ψ, namely to possess sufficiently many anisotropic vanishing moments. We also show how to build frames for L²(ℝ²) from any such function. To prove our statements we develop a new approach based on an adaption of the Radon transform to the shearlet structure."
]
} |
1411.5657 | 2133131808 | We analyze the detection and classification of singularities of functions @math , where @math and @math . It will be shown how the set @math can be extracted by a continuous shearlet transform associated with compactly supported shearlets. Furthermore, if @math is a @math dimensional piecewise smooth manifold with @math or @math , we will classify smooth and non-smooth components of @math . This improves previous results given for shearlet systems with a certain band-limited generator, since the estimates we derive are uniform. Moreover, we will show that our bounds are optimal. Along the way, we also obtain novel results on the characterization of wavefront sets in @math dimensions by compactly supported shearlets. Finally, geometric properties of @math such as curvature are described in terms of the continuous shearlet transform of @math . | Parallel to these results, a whole series of publications were devoted to the classification of different types of singularities, see @cite_10 , @cite_19 , and @cite_9 . In these works, a characteristic function of bounded domains @math with piecewise smooth boundary @math is used as an image model. The boundary @math then models a singularity of the image @math . It is shown that we can infer @math , the orientation of the singularity, as well as points in which @math is not given as a smooth curve, from the continuous shearlet transform. In particular, the decay of the shearlet transform with generator @math of the image @math at the position @math with orientation @math and for decreasing scale @math is given by @math for @math for some function @math . Different functions @math describe different types of singularities, allowing certain classification results. | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_10"
],
"mid": [
"",
"1971796978",
"2027712695"
],
"abstract": [
"",
"One of the most striking features of the Continuous Shearlet Transform is its ability to precisely characterize the set of singularities of multivariable functions through its decay at fine scales. In dimension n=2, it was previously shown that the continuous shearlet transform provides a precise geometrical characterization for the boundary curves of very general planar regions, and this property sets the groundwork for several successful image processing applications. The generalization of this result to dimension n=3 is highly nontrivial, and so far it was known only for the special case of 3D bounded regions where the boundary set is a smooth 2-dimensional manifold with everywhere positive Gaussian curvature. In this paper, we extend this result to the general case of 3D bounded regions with piecewise-smooth boundaries, and show that also in this general situation the continuous shearlet transform precisely characterizes the geometry of the boundary set.",
"This paper shows that the continuous shearlet transform, a novel directional multiscale transform recently introduced by the authors and their collaborators, provides a precise geometrical characterization for the boundary curves of very general planar regions. This study is motivated by imaging applications, where such boundary curves represent edges of images. The shearlet approach is able to characterize both locations and orientations of the edge points, including corner points and junctions, where the edge curves exhibit abrupt changes in tangent or curvature. Our results encompass and greatly extend previous results based on the shearlet and curvelet transforms which were limited to very special cases such as polygons and smooth boundary curves with nonvanishing curvature."
]
} |
1411.5657 | 2133131808 | We analyze the detection and classification of singularities of functions @math , where @math and @math . It will be shown how the set @math can be extracted by a continuous shearlet transform associated with compactly supported shearlets. Furthermore, if @math is a @math dimensional piecewise smooth manifold with @math or @math , we will classify smooth and non-smooth components of @math . This improves previous results given for shearlet systems with a certain band-limited generator, since the estimates we derive are uniform. Moreover, we will show that our bounds are optimal. Along the way, we also obtain novel results on the characterization of wavefront sets in @math dimensions by compactly supported shearlets. Finally, geometric properties of @math such as curvature are described in terms of the continuous shearlet transform of @math . | In the three publications @cite_10 @cite_19 @cite_9 , which analyze this situation, a certain band-limited shearlet generator is used to obtain different orders of decay of the shearlet transform for different types of singularities in 2D and 3D. However, singularities are a very local concept in the spatial domain. | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_10"
],
"mid": [
"",
"1971796978",
"2027712695"
],
"abstract": [
"",
"One of the most striking features of the Continuous Shearlet Transform is its ability to precisely characterize the set of singularities of multivariable functions through its decay at fine scales. In dimension n=2, it was previously shown that the continuous shearlet transform provides a precise geometrical characterization for the boundary curves of very general planar regions, and this property sets the groundwork for several successful image processing applications. The generalization of this result to dimension n=3 is highly nontrivial, and so far it was known only for the special case of 3D bounded regions where the boundary set is a smooth 2-dimensional manifold with everywhere positive Gaussian curvature. In this paper, we extend this result to the general case of 3D bounded regions with piecewise-smooth boundaries, and show that also in this general situation the continuous shearlet transform precisely characterizes the geometry of the boundary set.",
"This paper shows that the continuous shearlet transform, a novel directional multiscale transform recently introduced by the authors and their collaborators, provides a precise geometrical characterization for the boundary curves of very general planar regions. This study is motivated by imaging applications, where such boundary curves represent edges of images. The shearlet approach is able to characterize both locations and orientations of the edge points, including corner points and junctions, where the edge curves exhibit abrupt changes in tangent or curvature. Our results encompass and greatly extend previous results based on the shearlet and curvelet transforms which were limited to very special cases such as polygons and smooth boundary curves with nonvanishing curvature."
]
} |
1411.5657 | 2133131808 | We analyze the detection and classification of singularities of functions @math , where @math and @math . It will be shown how the set @math can be extracted by a continuous shearlet transform associated with compactly supported shearlets. Furthermore, if @math is a @math dimensional piecewise smooth manifold with @math or @math , we will classify smooth and non-smooth components of @math . This improves previous results given for shearlet systems with a certain band-limited generator, since the estimates we derive are uniform. Moreover, we will show that our bounds are optimal. Along the way, we also obtain novel results on the characterization of wavefront sets in @math dimensions by compactly supported shearlets. Finally, geometric properties of @math such as curvature are described in terms of the continuous shearlet transform of @math . | Hence it is intuitively evident, that the shearlet elements based on which the shearlet transform is defined should also be highly localized in spatial domain, in order to capture such a local phenomenon. The shearlet elements in @cite_10 @cite_19 @cite_9 are very well localized in spatial domain, but due to their band-limitedness they are globally supported. | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_10"
],
"mid": [
"",
"1971796978",
"2027712695"
],
"abstract": [
"",
"One of the most striking features of the Continuous Shearlet Transform is its ability to precisely characterize the set of singularities of multivariable functions through its decay at fine scales. In dimension n=2, it was previously shown that the continuous shearlet transform provides a precise geometrical characterization for the boundary curves of very general planar regions, and this property sets the groundwork for several successful image processing applications. The generalization of this result to dimension n=3 is highly nontrivial, and so far it was known only for the special case of 3D bounded regions where the boundary set is a smooth 2-dimensional manifold with everywhere positive Gaussian curvature. In this paper, we extend this result to the general case of 3D bounded regions with piecewise-smooth boundaries, and show that also in this general situation the continuous shearlet transform precisely characterizes the geometry of the boundary set.",
"This paper shows that the continuous shearlet transform, a novel directional multiscale transform recently introduced by the authors and their collaborators, provides a precise geometrical characterization for the boundary curves of very general planar regions. This study is motivated by imaging applications, where such boundary curves represent edges of images. The shearlet approach is able to characterize both locations and orientations of the edge points, including corner points and junctions, where the edge curves exhibit abrupt changes in tangent or curvature. Our results encompass and greatly extend previous results based on the shearlet and curvelet transforms which were limited to very special cases such as polygons and smooth boundary curves with nonvanishing curvature."
]
} |
1411.5657 | 2133131808 | We analyze the detection and classification of singularities of functions @math , where @math and @math . It will be shown how the set @math can be extracted by a continuous shearlet transform associated with compactly supported shearlets. Furthermore, if @math is a @math dimensional piecewise smooth manifold with @math or @math , we will classify smooth and non-smooth components of @math . This improves previous results given for shearlet systems with a certain band-limited generator, since the estimates we derive are uniform. Moreover, we will show that our bounds are optimal. Along the way, we also obtain novel results on the characterization of wavefront sets in @math dimensions by compactly supported shearlets. Finally, geometric properties of @math such as curvature are described in terms of the continuous shearlet transform of @math . | In @cite_6 , compactly supported shearlets have been introduced, which in fact provide an even more localized system in the spatial domain. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2072476274"
],
"abstract": [
"Shearlet tight frames have been extensively studied in recent years due to their optimal approximation properties of cartoon-like images and their unified treatment of the continuum and digital settings. However, these studies only concerned shearlet tight frames generated by a band-limited shearlet, whereas for practical purposes compact support in spatial domain is crucial."
]
} |
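For reference (this is background, not part of any row's data), the continuous shearlet transform that the shearlet-related rows above repeatedly invoke is commonly defined in the following standard form; the @math placeholders in the related-work fields stand for expressions of this kind:

```latex
% Continuous shearlet transform on L^2(R^2), standard parabolic-scaling form
\[
\mathcal{SH}_\psi f(a,s,t) \;=\; \langle f,\, \psi_{a,s,t}\rangle,
\qquad
\psi_{a,s,t}(x) \;=\; a^{-3/4}\,\psi\!\left(A_a^{-1}S_s^{-1}(x-t)\right),
\]
\[
A_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix},
\qquad
S_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix},
\qquad a>0,\; s\in\mathbb{R},\; t\in\mathbb{R}^2 .
\]
```

The decay of \(|\mathcal{SH}_\psi f(a,s,t)|\) as the scale \(a \to 0\) is what characterizes the location \(t\), orientation \(s\), and type of a singularity in the results summarized above.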
1411.4738 | 2951157285 | The cross-media retrieval problem has received much attention in recent years due to the rapid increasing of multimedia data on the Internet. A new approach to the problem has been raised which intends to match features of different modalities directly. In this research, there are two critical issues: how to get rid of the heterogeneity between different modalities and how to match the cross-modal features of different dimensions. Recently metric learning methods show a good capability in learning a distance metric to explore the relationship between data points. However, the traditional metric learning algorithms only focus on single-modal features, which suffer difficulties in addressing the cross-modal features of different dimensions. In this paper, we propose a cross-modal similarity learning algorithm for the cross-modal feature matching. The proposed method takes a bilinear formulation, and with the nuclear-norm penalization, it achieves low-rank representation. Accordingly, the accelerated proximal gradient algorithm is successfully imported to find the optimal solution with a fast convergence rate O(1 t^2). Experiments on three well known image-text cross-media retrieval databases show that the proposed method achieves the best performance compared to the state-of-the-art algorithms. | Among these cross-modal matching algorithms, the CCA algorithm is the most widely used method in the multimedia field @cite_37 @cite_20 @cite_7 @cite_8 . The target of CCA is to learn a latent space by maximizing the correlating relationships between two modality features. Thus the different modal features can be projected to the latent space for similarity computation. The algorithm is also used as the correlation matching (CM) method by @cite_7 . Beyond CCA, the PLS algorithm is another classical method for cross-modal data @cite_28 @cite_25 @cite_9 . 
Its core idea is very similar to that of CCA: extracting latent vectors with maximal correlation. | {
"cite_N": [
"@cite_37",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_25",
"@cite_20"
],
"mid": [
"2100235303",
"2106277773",
"2134033199",
"2137225583",
"2071207147",
"2030899956",
"2163740729"
],
"abstract": [
"We present a general method using kernel canonical correlation analysis to learn a semantic representation to web images and their associated text. The semantic space provides a common representation and enables a comparison between the text and images. In the experiments, we look at two approaches of retrieving images based on only their content from a text query. We compare orthogonalization approaches against a standard cross-representation retrieval technique known as the generalized vector space model.",
"The problem of joint modeling the text and image components of multimedia documents is studied. The text component is represented as a sample from a hidden topic model, learned with latent Dirichlet allocation, and images are represented as bags of visual (SIFT) features. Two hypotheses are investigated: that 1) there is a benefit to explicitly modeling correlations between the two components, and 2) this modeling is more effective in feature spaces with higher levels of abstraction. Correlations between the two components are learned with canonical correlation analysis. Abstraction is achieved by representing text and images at a more general, semantic level. The two hypotheses are studied in the context of the task of cross-modal document retrieval. This includes retrieving the text that most closely matches a query image, or retrieving the images that most closely match a query text. It is shown that accounting for cross-modal correlations and semantic abstraction both improve retrieval accuracy. The cross-modal model is also shown to outperform state-of-the-art image retrieval systems on a unimodal retrieval task.",
"In previous works of face recognition, similarity between faces is measured by comparing corresponding face regions. That is to say, matching eyes with eyes and mouths with mouths etc‥ In this paper, we propose that face can be also recognized by matching non-corresponding facial regions. In another word face can be recognized by matching eyes with mouths, for example. Specifically, the problem we study in this paper can be formulated as how to measure the possibility whether two non-corresponding face regions belong to the same face. We propose that the possibility can be measured via canonical correlation analysis. Experimental results show that it is feasible to recognize face via non-corresponding region matching. The proposed method provides an alternative and more flexible way to recognize faces.",
"Partial Least Squares (PLS) is a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises of regression and classification tasks as well as dimension reduction techniques and modeling tools. The underlying assumption of all PLS methods is that the observed data is generated by a system or process which is driven by a small number of latent (not directly observed or measured) variables. Projections of the observed data to its latent structure by means of PLS was developed by Herman Wold and coworkers [48,49,52].",
"This paper presents a general multi-view feature extraction approach that we call Generalized Multiview Analysis or GMA. GMA has all the desirable properties required for cross-view classification and retrieval: it is supervised, it allows generalization to unseen classes, it is multi-view and kernelizable, it affords an efficient eigenvalue based solution and is applicable to any domain. GMA exploits the fact that most popular supervised and unsupervised feature extraction techniques are the solution of a special form of a quadratic constrained quadratic program (QCQP), which can be solved efficiently as a generalized eigenvalue problem. GMA solves a joint, relaxed QCQP over different feature spaces to obtain a single (non)linear subspace. Intuitively, GMA is a supervised extension of Canonical Correlational Analysis (CCA), which is useful for cross-view classification and retrieval. The proposed approach is general and has the potential to replace CCA whenever classification or retrieval is the purpose and label information is available. We outperform previous approaches for textimage retrieval on Pascal and Wiki text-image data. We report state-of-the-art results for pose and lighting invariant face recognition on the MultiPIE face dataset, significantly outperforming other approaches.",
"This paper presents a novel way to perform multi-modal face recognition. We use Partial Least Squares (PLS) to linearly map images in different modalities to a common linear subspace in which they are highly correlated. PLS has been previously used effectively for feature selection in face recognition. We show both theoretically and experimentally that PLS can be used effectively across modalities. We also formulate a generic intermediate subspace comparison framework for multi-modal recognition. Surprisingly, we achieve high performance using only pixel intensities as features. We experimentally demonstrate the highest published recognition rates on the pose variations in the PIE data set, and also show that PLS can be used to compare sketches to photos, and to compare images taken at different resolutions.",
"We introduce a method for image retrieval that leverages the implicit information about object importance conveyed by the list of keyword tags a person supplies for an image. We propose an unsupervised learning procedure based on Kernel Canonical Correlation Analysis that discovers the relationship between how humans tag images (e.g., the order in which words are mentioned) and the relative importance of objects and their layout in the scene. Using this discovered connection, we show how to boost accuracy for novel queries, such that the search results may more closely match the user’s mental image of the scene being sought. We evaluate our approach on two datasets, and show clear improvements over both an approach relying on image features alone, as well as a baseline that uses words and image features, but ignores the implied importance cues."
]
} |
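As a concrete illustration of the CCA latent-space idea recurring in the related-work fields above, here is a minimal NumPy sketch of classical two-view CCA (a generic textbook formulation, not the implementation of any cited paper; the function name and ridge regularizer `reg` are our own choices):

```python
import numpy as np

def cca(X, Y, reg=1e-6):
    """First pair of canonical directions wx, wy maximizing corr(X @ wx, Y @ wy).

    Solves the generalized eigenproblem
        Cxx^{-1} Cxy Cyy^{-1} Cyx wx = rho^2 wx,
    with a small ridge term for numerical stability.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Cxx^{-1} Cxy Cyy^{-1} Cyx, whose top eigenvalue is the squared correlation
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    vals, vecs = np.linalg.eig(M)
    top = np.argsort(-vals.real)[0]
    wx = vecs[:, top].real
    wy = np.linalg.solve(Cyy, Cxy.T @ wx)  # matching direction for the second view
    wy /= np.linalg.norm(wy)
    return wx, wy, float(np.sqrt(max(vals[top].real, 0.0)))
```

Projecting both modalities onto `wx` and `wy` gives the shared latent space in which cross-modal similarity is computed.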
1411.4738 | 2951157285 | The cross-media retrieval problem has received much attention in recent years due to the rapid increasing of multimedia data on the Internet. A new approach to the problem has been raised which intends to match features of different modalities directly. In this research, there are two critical issues: how to get rid of the heterogeneity between different modalities and how to match the cross-modal features of different dimensions. Recently metric learning methods show a good capability in learning a distance metric to explore the relationship between data points. However, the traditional metric learning algorithms only focus on single-modal features, which suffer difficulties in addressing the cross-modal features of different dimensions. In this paper, we propose a cross-modal similarity learning algorithm for the cross-modal feature matching. The proposed method takes a bilinear formulation, and with the nuclear-norm penalization, it achieves low-rank representation. Accordingly, the accelerated proximal gradient algorithm is successfully imported to find the optimal solution with a fast convergence rate O(1 t^2). Experiments on three well known image-text cross-media retrieval databases show that the proposed method achieves the best performance compared to the state-of-the-art algorithms. | In @cite_7 where the cross-modal IR was suggested, proposed a supervised algorithm for the image-text cross-modal retrieval problem, namely the SCM algorithm. The SCM is one of the most famous and the current state-of-the-art algorithm. To reduce the semantic gap between images and documents, the sematic level matching is developed based on the learned maximal correlation latent space by CCA. Thus, the algorithm can be separated into two steps. The correlational matching between different modalities by CCA is done in the first step. Then, based on it a semantic space is learned in the second step. 
As indicated in @cite_7 @cite_26 , class information is important for reducing the semantic gap. Thus, in order to exploit the class labels, a GMA algorithm was proposed to learn a discriminative latent space for cross-modal data, formulated as an eigenvalue problem. The algorithm shows strong performance on pose- and lighting-invariant face recognition and on cross-modal retrieval. | {
"cite_N": [
"@cite_26",
"@cite_7"
],
"mid": [
"2138118304",
"2106277773"
],
"abstract": [
"The problem of cross-modal retrieval from multimedia repositories is considered. This problem addresses the design of retrieval systems that support queries across content modalities, for example, using an image to search for texts. A mathematical formulation is proposed, equating the design of cross-modal retrieval systems to that of isomorphic feature spaces for different content modalities. Two hypotheses are then investigated regarding the fundamental attributes of these spaces. The first is that low-level cross-modal correlations should be accounted for. The second is that the space should enable semantic abstraction. Three new solutions to the cross-modal retrieval problem are then derived from these hypotheses: correlation matching (CM), an unsupervised method which models cross-modal correlations, semantic matching (SM), a supervised technique that relies on semantic representation, and semantic correlation matching (SCM), which combines both. An extensive evaluation of retrieval performance is conducted to test the validity of the hypotheses. All approaches are shown successful for text retrieval in response to image queries and vice versa. It is concluded that both hypotheses hold, in a complementary form, although evidence in favor of the abstraction hypothesis is stronger than that for correlation.",
"The problem of joint modeling the text and image components of multimedia documents is studied. The text component is represented as a sample from a hidden topic model, learned with latent Dirichlet allocation, and images are represented as bags of visual (SIFT) features. Two hypotheses are investigated: that 1) there is a benefit to explicitly modeling correlations between the two components, and 2) this modeling is more effective in feature spaces with higher levels of abstraction. Correlations between the two components are learned with canonical correlation analysis. Abstraction is achieved by representing text and images at a more general, semantic level. The two hypotheses are studied in the context of the task of cross-modal document retrieval. This includes retrieving the text that most closely matches a query image, or retrieving the images that most closely match a query text. It is shown that accounting for cross-modal correlations and semantic abstraction both improve retrieval accuracy. The cross-modal model is also shown to outperform state-of-the-art image retrieval systems on a unimodal retrieval task."
]
} |
1411.4738 | 2951157285 | The cross-media retrieval problem has received much attention in recent years due to the rapid increasing of multimedia data on the Internet. A new approach to the problem has been raised which intends to match features of different modalities directly. In this research, there are two critical issues: how to get rid of the heterogeneity between different modalities and how to match the cross-modal features of different dimensions. Recently metric learning methods show a good capability in learning a distance metric to explore the relationship between data points. However, the traditional metric learning algorithms only focus on single-modal features, which suffer difficulties in addressing the cross-modal features of different dimensions. In this paper, we propose a cross-modal similarity learning algorithm for the cross-modal feature matching. The proposed method takes a bilinear formulation, and with the nuclear-norm penalization, it achieves low-rank representation. Accordingly, the accelerated proximal gradient algorithm is successfully imported to find the optimal solution with a fast convergence rate O(1 t^2). Experiments on three well known image-text cross-media retrieval databases show that the proposed method achieves the best performance compared to the state-of-the-art algorithms. | In fact, some methods targeting to the heterogenous face recognition problem are available to deal with the cross-modality IR problem, such as the Multiview Discriminant Analysis (MvDA) method @cite_6 . The MvDA algorithm aims at learning a common space where the between-class variations from both inter-view and intra-view are maximized, and the within-class variations from both inter-view and intra-view are minimized. In the transfer learning field, some algorithms are also related @cite_38 @cite_42 @cite_11 @cite_29 . 
For example, in @cite_11 , Lampert and Krömer proposed a weakly-paired maximum covariance analysis method to deal with training data that are not fully paired (i.e., not paired one-by-one). Besides, @cite_21 proposed an iterative sparsity-based algorithm to learn coupled feature spaces for the different modalities. The work in @cite_12 also proposed a greedy dictionary construction approach that selects dictionary atoms to build a modality-adaptive dictionary pair. In the deep learning field, the works @cite_34 and @cite_5 both used the restricted Boltzmann machine for cross-modal feature learning. | {
"cite_N": [
"@cite_38",
"@cite_29",
"@cite_42",
"@cite_21",
"@cite_6",
"@cite_5",
"@cite_34",
"@cite_12",
"@cite_11"
],
"mid": [
"2953265577",
"2950276680",
"2090923791",
"",
"",
"154472438",
"2184188583",
"",
"1501486674"
],
"abstract": [
"We propose a new learning method for heterogeneous domain adaptation (HDA), in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. Using two different projection matrices, we first transform the data from two domains into a common subspace in order to measure the similarity between the data from two domains. We then propose two new feature mapping functions to augment the transformed data with their original features and zeros. The existing learning methods (e.g., SVM and SVR) can be readily incorporated with our newly proposed augmented feature representations to effectively utilize the data from both domains for HDA. Using the hinge loss function in SVM as an example, we introduce the detailed objective function in our method called Heterogeneous Feature Augmentation (HFA) for a linear case and also describe its kernelization in order to efficiently cope with the data with very high dimensions. Moreover, we also develop an alternating optimization algorithm to effectively solve the nontrivial optimization problem in our HFA method. Comprehensive experiments on two benchmark datasets clearly demonstrate that HFA outperforms the existing HDA methods.",
"This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.",
"In real-world applications, “what you saw” during training is often not “what you get” during deployment: the distribution and even the type and dimensionality of features can change from one dataset to the next. In this paper, we address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce ARC-t, a flexible model for supervised learning of non-linear transformations between domains. Our method is based on a novel theoretical result demonstrating that such transformations can be learned in kernel space. Unlike existing work, our model is not restricted to symmetric transformations, nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the method can be applied to categories that were not available during training. We demonstrate the ability of our method to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types and codebooks.",
"",
"",
"Data often consists of multiple diverse modalities. For example, images are tagged with textual information and videos are accompanied by audio. Each modality is characterized by having distinct statistical properties. We propose a Deep Boltzmann Machine for learning a generative model of such multimodal data. We show that the model can be used to create fused representations by combining features across modalities. These learned representations are useful for classification and information retrieval. By sampling from the conditional distributions over each data modality, it is possible to create these representations even when some data modalities are missing. We conduct experiments on bimodal image-text and audio-video data. The fused representation achieves good classification results on the MIR-Flickr data set matching or outperforming other deep models as well as SVM based models that use Multiple Kernel Learning. We further demonstrate that this multimodal model helps classification and retrieval even when only unimodal data is available at test time.",
"Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.",
"",
"We study the problem of multimodal dimensionality reduction assuming that data samples can be missing at training time, and not all data modalities may be present at application time. Maximum covariance analysis, as a generalization of PCA, has many desirable properties, but its application to practical problems is limited by its need for perfectly paired data. We overcome this limitation by a latent variable approach that allows working with weakly paired data and is still able to efficiently process large datasets using standard numerical routines. The resulting weakly paired maximum covariance analysis often finds better representations than alternative methods, as we show in two exemplary tasks: texture discrimination and transfer learning."
]
} |
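The nuclear-norm penalization with APG optimization described in this paper's abstract hinges on one proximal step: soft-thresholding of singular values. A minimal sketch of that step (generic singular-value thresholding, not the exact solver of the cited paper):

```python
import numpy as np

def svt(W, tau):
    """Proximal operator of tau * ||.||_* (nuclear norm).

    Shrinks every singular value of W by tau and clips at zero,
    which is the step an APG solver applies after each gradient update
    to drive the similarity matrix toward low rank.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

An APG iteration then alternates a gradient step on the smooth loss with this prox, roughly `W = svt(Z - eta * grad(Z), eta * lam)`, where `Z` carries Nesterov momentum; that momentum is what yields the O(1/t^2) convergence rate quoted in the abstract (`eta`, `lam`, and `grad` here are schematic names, not the paper's notation).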
1411.4738 | 2951157285 | The cross-media retrieval problem has received much attention in recent years due to the rapid increasing of multimedia data on the Internet. A new approach to the problem has been raised which intends to match features of different modalities directly. In this research, there are two critical issues: how to get rid of the heterogeneity between different modalities and how to match the cross-modal features of different dimensions. Recently metric learning methods show a good capability in learning a distance metric to explore the relationship between data points. However, the traditional metric learning algorithms only focus on single-modal features, which suffer difficulties in addressing the cross-modal features of different dimensions. In this paper, we propose a cross-modal similarity learning algorithm for the cross-modal feature matching. The proposed method takes a bilinear formulation, and with the nuclear-norm penalization, it achieves low-rank representation. Accordingly, the accelerated proximal gradient algorithm is successfully imported to find the optimal solution with a fast convergence rate O(1 t^2). Experiments on three well known image-text cross-media retrieval databases show that the proposed method achieves the best performance compared to the state-of-the-art algorithms. | In the literature of the metric learning field, @cite_15 imported an online similarity function learning for large-scale images. But the algorithm is designed for single-modality in a triplet ranking formulation. also proposed to learn an asymmetric transformation matrix for domain adaption @cite_42 . Besides, a metric learning algorithm for different modalities was also realized by @cite_24 . The objective function in their work learns the projections for the different modalities, respectively, to best separate the similar points set and the dissimilar points set. However, the pair-wise information is ignored in the algorithm. 
In @cite_41 and @cite_10 , the authors also proposed to learn two projections for each modality to minimize the distances of the two modalities in the target feature space. also used the semantic information in the second step based on a unified k-NN graph. However, neither of the algorithms is convex, so the optimal solution is not guaranteed. In contrast, the proposed formulation in this paper is a strictly convex problem, and the optimal solution is achieved by the accelerated proximal gradient (APG) algorithm @cite_27 @cite_22 @cite_16 . | {
"cite_N": [
"@cite_22",
"@cite_41",
"@cite_42",
"@cite_24",
"@cite_27",
"@cite_15",
"@cite_16",
"@cite_10"
],
"mid": [
"",
"2295088417",
"2090923791",
"2186056216",
"2124541940",
"2131627887",
"2100556411",
"2612133093"
],
"abstract": [
"",
"As the major component of big data, unstructured heterogeneous multimedia content such as text, image, audio, video and 3D is increasing rapidly on the Internet. Users demand a new type of cross-media retrieval where users can search results across various media by submitting a query of any media. Since the query and the retrieved results can be of different media, how to learn a heterogeneous metric is the key challenge. Most existing metric learning algorithms only focus on a single media where all of the media objects share the same data representation. In this paper, we propose a joint graph regularized heterogeneous metric learning (JGRHML) algorithm, which integrates the structure of different media into a joint graph regularization. In JGRHML, different media are complementary to each other and optimizing them simultaneously can make the solution smoother for both media and further improve the accuracy of the final metric. Based on the heterogeneous metric, we further learn a high-level semantic metric through label propagation. JGRHML is effective to explore the semantic relationship hidden across different modalities. The experimental results on two datasets with up to five media types show the effectiveness of our proposed approach.",
"In real-world applications, “what you saw” during training is often not “what you get” during deployment: the distribution and even the type and dimensionality of features can change from one dataset to the next. In this paper, we address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce ARC-t, a flexible model for supervised learning of non-linear transformations between domains. Our method is based on a novel theoretical result demonstrating that such transformations can be learned in kernel space. Unlike existing work, our model is not restricted to symmetric transformations, nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the method can be applied to categories that were not available during training. We demonstrate the ability of our method to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types and codebooks.",
"Many application problems such as data visualization, document retrieval, image annotation, collaborative filtering, and machine translation can be formalized as a task that utilizes a similarity function between objects in two heterogeneous spaces. In this paper, we address the problem of automatically learning such a similarity function using labeled training data. Conventional metric learning can be viewed as learning of similarity function over one single space, while the ‘metric learning’ problem in this paper can be regarded as learning of similarity function over two different spaces. We assume that the objects in the two original spaces are linearly mapped into a new space and dot product in the new space is defined as the similarity function. The metric learning problem then becomes that of learning the two linear mapping functions from training data. We then give a general and theoretically sound solution to the learning problem. Specifically, we prove that although the learning problem is non-convex, the global optimal solution exists and one can find the optimal solution using Singular Value Decomposition (SVD). We also show that the solution is ‘generalizable’ to unobserved data and it is possible to kernelize the method. We conducted two experiments; one experiment shows that keywords and images can be visualized in the same space based on the similarity function learned with our method, and the other experiment shows that the accuracy of document retrieval can be improved with the similarity function (relevance function) learned with our method.",
"It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization. The importance of this paper, containing a new polynomial-time algorithm for linear optimization problems, was not only in its complexity bound. At that time, the most surprising feature of this algorithm was that the theoretical prediction of its high efficiency was supported by excellent computational results. This unusual fact dramatically changed the style and directions of the research in nonlinear optimization. Thereafter it became more and more common that the new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments. In a new rapidly developing field, which got the name \"polynomial-time interior-point methods\", such a justification was obligatory. After almost fifteen years of intensive research, the main results of this development started to appear in monographs [12, 14, 16, 17, 18, 19]. Approximately at that time the author was asked to prepare a new course on nonlinear optimization for graduate students. The idea was to create a course which would reflect the new developments in the field. Actually, this was a major challenge. At the time only the theory of interior-point methods for linear optimization was polished enough to be explained to students. The general theory of self-concordant functions had appeared in print only once in the form of research monograph [12].",
"Learning a measure of similarity between pairs of objects is a fundamental problem in machine learning. It stands in the core of classification methods like kernel machines, and is particularly useful for applications like searching for images that are similar to a given image or finding videos that are relevant to a given video. In these tasks, users look for objects that are not only visually similar but also semantically related to a given object. Unfortunately, current approaches for learning similarity do not scale to large datasets, especially when imposing metric constraints on the learned similarity. We describe OASIS, a method for learning pairwise similarity that is fast and scales linearly with the number of objects and the number of non-zero features. Scalability is achieved through online learning of a bilinear model over sparse representations using a large margin criterion and an efficient hinge loss cost. OASIS is accurate at a wide range of scales: on a standard benchmark with thousands of images, it is more precise than state-of-the-art methods, and faster by orders of magnitude. On 2.7 million images collected from the web, OASIS can be trained within 3 days on a single CPU. The non-metric similarities learned by OASIS can be transformed into metric similarities, achieving higher precisions than similarities that are learned as metrics in the first place. This suggests an approach for learning a metric from data that is larger by orders of magnitude than was handled before.",
"We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.",
"This paper proposes a new approach for Cross Modal Matching, i.e. the matching of patterns represented in different modalities, when pairs of same/different data are available for training (e.g. faces of same/different persons). In this situation, standard approaches such as Partial Least Squares (PLS) or Canonical Correlation Analysis (CCA), map the data into a common latent space that maximizes the covariance, using the information brought by positive pairs only. Our contribution is a new metric learning algorithm, which alleviates this limitation by considering both positive and negative constraints and using them efficiently to learn a discriminative latent space. The contribution is validated on several datasets for which the proposed approach consistently outperforms PLS/CCA as well as more recent discriminative approaches."
]
} |
1411.4942 | 2952674441 | Counting the frequency of small subgraphs is a fundamental technique in network analysis across various domains, most notably in bioinformatics and social networks. The special case of triangle counting has received much attention. Getting results for 4-vertex patterns is highly challenging, and there are few practical results known that can scale to massive sizes. Indeed, even a highly tuned enumeration code takes more than a day on a graph with millions of edges. Most previous work that runs for truly massive graphs employs clusters and massive parallelization. We provide a sampling algorithm that provably and accurately approximates the frequencies of all 4-vertex pattern subgraphs. Our algorithm is based on a novel technique of 3-path sampling and a special pruning scheme to decrease the variance in estimates. We provide theoretical proofs for the accuracy of our algorithm, and give formal bounds for the error and confidence of our estimates. We perform a detailed empirical study and show that our algorithm provides estimates within 1% relative error for all subpatterns (over a large class of test graphs), while being orders of magnitude faster than enumeration and other sampling based algorithms. Our algorithm takes less than a minute (on a single commodity machine) to process an Orkut social network with 300 million edges. | Motif counting for bioinformatics was arguably initiated by a seminal paper of Milo @cite_5 . This technique has been used for graph modeling @cite_9 @cite_35 , graph comparisons @cite_9 @cite_13 , and even decomposing a network @cite_22 . Refer to @cite_27 @cite_7 for more details. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_7",
"@cite_9",
"@cite_27",
"@cite_5",
"@cite_13"
],
"mid": [
"158117962",
"1994948074",
"1976722121",
"2148762636",
"",
"2153624566",
"2163965598"
],
"abstract": [
"The (asymptotic) degree distributions of the best known \"scale free\" network models are all similar and are independent of the seed graph used. Hence it has been tempting to assume that networks generated by these models are similar in general. In this paper we observe that several key topological features of such networks depend heavily on the specific model and the seed graph used. Furthermore, we show that starting with the \"right\" seed graph, the duplication model captures many topological features of publicly available PPI networks very well.",
"Can complex engineered and biological networks be coarse-grained into smaller and more understandable versions in which each node represents an entire pattern in the original network? To address this, we define coarse-graining units as connectivity patterns which can serve as the nodes of a coarse-grained network and present algorithms to detect them. We use this approach to systematically reverse-engineer electronic circuits, forming understandable high-level maps from incomprehensible transistor wiring: first, a coarse-grained version in which each node is a gate made of several transistors is established. Then the coarse-grained network is itself coarse-grained, resulting in a high-level blueprint in which each node is a circuit module made of many gates. We apply our approach also to a mammalian protein signal-transduction network, to find a simplified coarse-grained network with three main signaling channels that resemble multi-layered perceptrons made of cross-interacting MAP-kinase cascades. We find that both biological and electronic networks are \"self-dissimilar,\" with different network motifs at each level. The present approach may be used to simplify a variety of directed and nondirected, natural and designed networks.",
"Network motifs are statistically overrepresented sub-structures (sub-graphs) in a network, and have been recognized as ‘the simple building blocks of complex networks’. Study of biological network motifs may reveal answers to many important biological questions. The main difficulty in detecting larger network motifs in biological networks lies in the facts that the number of possible sub-graphs increases exponentially with the network or motif size (node counts, in general), and that no known polynomial-time algorithm exists in deciding if two graphs are topologically equivalent. This article discusses the biological significance of network motifs, the motivation behind solving the motif-finding problem, and strategies to solve the various aspects of this problem. A simple classification scheme is designed to analyze the strengths and weaknesses of several existing algorithms. Experimental results derived from a few comparative studies in the literature are discussed, with conclusions that lead to future research directions.",
"Motivation: Networks have been used to model many real-world phenomena to better understand the phenomena and to guide experiments in order to predict their behavior. Since incorrect models lead to incorrect predictions, it is vital to have as accurate a model as possible. As a result, new techniques and models for analyzing and modeling real-world networks have recently been introduced. Results: One example of large and complex networks involves protein--protein interaction (PPI) networks. We analyze PPI networks of yeast Saccharomyces cerevisiae and fruitfly Drosophila melanogaster using a newly introduced measure of local network structure as well as the standardly used measures of global network structure. We examine the fit of four different network models, including Erdos-Renyi, scale-free and geometric random network models, to these PPI networks with respect to the measures of local and global network structure. We demonstrate that the currently accepted scale-free model of PPI networks fails to fit the data in several respects and show that a random geometric model provides a much more accurate model of the PPI data. We hypothesize that only the noise in these networks is scale-free. Conclusions: We systematically evaluate how well-different network models fit the PPI networks. We show that the structure of PPI networks is better modeled by a geometric random graph than by a scale-free model. Supplementary information: Supplementary information is available at http: www.cs.utoronto.ca juris data data ppiGRG04",
"",
"Complex networks are studied across many fields of science. To uncover their structural design principles, we defined “network motifs,” patterns of interconnections occurring in complex networks at numbers that are significantly higher than those in randomized networks. We found such motifs in networks from biochemistry, neurobiology, ecology, and engineering. The motifs shared by ecological food webs were distinct from the motifs shared by the genetic networks of Escherichia coli and Saccharomyces cerevisiae or from those found in the World Wide Web. Similar motifs were found in networks that perform information processing, even though they describe elements as different as biomolecules within a cell and synaptic connections between neurons in Caenorhabditis elegans. Motifs may thus define universal classes of networks. This",
"The structure of networks can be characterized by the frequency of different subnetwork patterns found within them. Where these frequencies deviate from what would be expected in random networks they are termed “motifs” of the network. Interestingly it is often found that networks performing similar functions evidence similar motif frequencies. We present results from a motif analysis of networks produced by peer-to-peer protocols that support cooperation between evolving nodes. We were surprised to find that their motif profiles match closely protein structure networks. It is currently an open issue as to precisely why this is."
]
} |
1411.4942 | 2952674441 | Counting the frequency of small subgraphs is a fundamental technique in network analysis across various domains, most notably in bioinformatics and social networks. The special case of triangle counting has received much attention. Getting results for 4-vertex patterns is highly challenging, and there are few practical results known that can scale to massive sizes. Indeed, even a highly tuned enumeration code takes more than a day on a graph with millions of edges. Most previous work that runs for truly massive graphs employs clusters and massive parallelization. We provide a sampling algorithm that provably and accurately approximates the frequencies of all 4-vertex pattern subgraphs. Our algorithm is based on a novel technique of 3-path sampling and a special pruning scheme to decrease the variance in estimates. We provide theoretical proofs for the accuracy of our algorithm, and give formal bounds for the error and confidence of our estimates. We perform a detailed empirical study and show that our algorithm provides estimates within 1% relative error for all subpatterns (over a large class of test graphs), while being orders of magnitude faster than enumeration and other sampling based algorithms. Our algorithm takes less than a minute (on a single commodity machine) to process an Orkut social network with 300 million edges. | Triangle counting has a rich history in social sciences and related analyses; we simply refer the reader to the related work sections of @cite_10 @cite_24 . The significance of 4-vertex patterns was studied in recent work of @cite_28 , who propose a "coordinate system" for graphs based on the motifs distribution. This is used for improved network classification, and the input graphs were comparatively small (thousands of vertices). | {
"cite_N": [
"@cite_24",
"@cite_28",
"@cite_10"
],
"mid": [
"2291824638",
"2951881330",
""
],
"abstract": [
"Graphs and networks are used to model interactions in a variety of contexts, and there is a growing need to be able to quickly assess the qualities of a graph in order to understand its underlying structure. Some of the most useful metrics are triangle based and give a measure of the connectedness of “friends of friends.” Counting the number of triangles in a graph has, therefore, received considerable attention in recent years. We propose new sampling-based methods for counting the number of triangles or the number of triangles with vertices of specified degree in an undirected graph and for counting the number of each type of directed triangle in a directed graph. The number of samples depends only on the desired relative accuracy and not on the size of the graph. We present extensive numerical results showing that our methods are often much better than the error bounds would suggest. In the undirected case, our method is generally superior to other approximation approaches; in the directed case, ours is the first approximation method proposed.",
"A growing set of on-line applications are generating data that can be viewed as very large collections of small, dense social graphs -- these range from sets of social groups, events, or collaboration projects to the vast collection of graph neighborhoods in large social networks. A natural question is how to usefully define a domain-independent coordinate system for such a collection of graphs, so that the set of possible structures can be compactly represented and understood within a common space. In this work, we draw on the theory of graph homomorphisms to formulate and analyze such a representation, based on computing the frequencies of small induced subgraphs within each graph. We find that the space of subgraph frequencies is governed both by its combinatorial properties, based on extremal results that constrain all graphs, as well as by its empirical properties, manifested in the way that real social graphs appear to lie near a simple one-dimensional curve through this space. We develop flexible frameworks for studying each of these aspects. For capturing empirical properties, we characterize a simple stochastic generative model, a single-parameter extension of Erdos-Renyi random graphs, whose stationary distribution over subgraphs closely tracks the concentration of the real social graph families. For the extremal properties, we develop a tractable linear program for bounding the feasible space of subgraph frequencies by harnessing a toolkit of known extremal graph theory. Together, these two complementary frameworks shed light on a fundamental question pertaining to social graphs: what properties of social graphs are 'social' properties and what properties are 'graph' properties? We conclude with a brief demonstration of how the coordinate system we examine can also be used to perform classification tasks, distinguishing between social graphs of different origins.",
""
]
} |
1411.4942 | 2952674441 | Counting the frequency of small subgraphs is a fundamental technique in network analysis across various domains, most notably in bioinformatics and social networks. The special case of triangle counting has received much attention. Getting results for 4-vertex patterns is highly challenging, and there are few practical results known that can scale to massive sizes. Indeed, even a highly tuned enumeration code takes more than a day on a graph with millions of edges. Most previous work that runs for truly massive graphs employs clusters and massive parallelization. We provide a sampling algorithm that provably and accurately approximates the frequencies of all 4-vertex pattern subgraphs. Our algorithm is based on a novel technique of 3-path sampling and a special pruning scheme to decrease the variance in estimates. We provide theoretical proofs for the accuracy of our algorithm, and give formal bounds for the error and confidence of our estimates. We perform a detailed empirical study and show that our algorithm provides estimates within 1% relative error for all subpatterns (over a large class of test graphs), while being orders of magnitude faster than enumeration and other sampling based algorithms. Our algorithm takes less than a minute (on a single commodity machine) to process an Orkut social network with 300 million edges. | Most relevant to this work are previous studies on wedge sampling @cite_3 @cite_24 @cite_39 . This method samples paths of length 2 to estimate various triangle statistics. Our method of 3-path sampling can be seen as building on wedge sampling. We employ new path pruning techniques to improve the algorithm's efficiency. These pruning techniques are inspired by degeneracy ordering algorithms for triangle counting @cite_15 @cite_11 . We can actually provide mathematical error bars for real runs and instances (as opposed to just a theoretical proof of convergence of estimate). | {
"cite_N": [
"@cite_3",
"@cite_39",
"@cite_24",
"@cite_15",
"@cite_11"
],
"mid": [
"2116759966",
"1968414620",
"2291824638",
"2055245094",
""
],
"abstract": [
"Since its introduction in the year 1998 by Watts and Strogatz, the clustering coefficient has become a frequently used tool for analyzing graphs. In 2002 the transitivity was proposed by Newman, Watts and Strogatz as an alternative to the clustering coefficient. As many networks considered in complex systems are huge, the efficient computation of such network parameters is crucial. Several algorithms with polynomial running time can be derived from results known in graph theory. The main contribution of this work is a new fast approximation algorithm for the weighted clustering coefficient which also gives very efficient approximation algorithms for the clustering coefficient and the transitivity. We namely present an algorithm with running time in O(1) for the clustering coefficient, respectively with running time in O(n) for the transitivity. By an experimental study we demonstrate the performance of the proposed algorithms on real-world data as well as on generated graphs. Moreover we give a simple graph generator algorithm that works according to the preferential attachment rule but also generates graphs with adjustable clustering coefficient.",
"Graphs and networks are used to model interactions in a variety of contexts. There is a growing need to quickly assess the characteristics of a graph in order to understand its underlying structure. Some of the most useful metrics are triangle-based and give a measure of the connectedness of mutual friends. This is often summarized in terms of clustering coefficients, which measure the likelihood that two neighbors of a node are themselves connected. Computing these measures exactly for large-scale networks is prohibitively expensive in both memory and time. However, a recent wedge-sampling algorithm has proved successful in efficiently and accurately estimating clustering coefficients. In this paper, we describe how to implement this approach in MapReduce to deal with massive graphs. We show results on publicly available networks, the largest of which is 132M nodes and 4.7B edges, as well as artificially generated networks (using the Graph500 benchmark), the largest of which has 240M nodes and 8.5B edges...",
"Graphs and networks are used to model interactions in a variety of contexts, and there is a growing need to be able to quickly assess the qualities of a graph in order to understand its underlying structure. Some of the most useful metrics are triangle based and give a measure of the connectedness of “friends of friends.” Counting the number of triangles in a graph has, therefore, received considerable attention in recent years. We propose new sampling-based methods for counting the number of triangles or the number of triangles with vertices of specified degree in an undirected graph and for counting the number of each type of directed triangle in a directed graph. The number of samples depends only on the desired relative accuracy and not on the size of the graph. We present extensive numerical results showing that our methods are often much better than the error bounds would suggest. In the undirected case, our method is generally superior to other approximation approaches; in the directed case, ours is the first approximation method proposed...",
"In this paper we introduce a new simple strategy into edge-searching of a graph, which is useful to the various subgraph listing problems. Applying the strategy, we obtain the following four algorithms. The first one lists all the triangles in a graph G in @math time, where m is the number of edges of G and @math the arboricity of G. The second finds all the quadrangles in @math time. Since @math is at most three for a planar graph G, both run in linear time for a planar graph. The third lists all the complete subgraphs @math of order l in @math time. The fourth lists all the cliques in @math time per clique. All the algorithms require linear space. We also establish an upper bound on @math for a graph @math , where n is the number of vertices in G.",
""
]
} |