| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1407.1808 | 2950612966 | We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top-down figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16% relative) over our baselines on SDS, a 5 point boost (10% relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work. | There has also been work on localizing detections better using segmentation. Parkhi et al. use color models from predefined rectangles on cat and dog faces to do GrabCut and improve the predicted bounding box @cite_21 . Dai and Hoiem generalize this to all categories and use instance and category appearance models to improve detection @cite_4 . These approaches do well when the objects are coherent in color or texture, which is not true of many categories such as people, where each object can be made of multiple regions of different appearance. An alternative is to use segmentation to generate object proposals which are then classified. The proposals may be used as just bounding boxes @cite_14 or as region proposals @cite_30 @cite_7 . These proposals incorporate both the consistency of appearance within an object and the possibility of having multiple disparate regions for each object. State-of-the-art detection systems @cite_23 and segmentation systems @cite_15 are now based on these methods. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_21",
"@cite_23",
"@cite_15"
],
"mid": [
"2017691720",
"1577168949",
"2056933870",
"1991367009",
"2039507552",
"2102605133",
"78159342"
],
"abstract": [
"We present a novel framework for generating and ranking plausible objects hypotheses in an image using bottom-up processes and mid-level cues. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge about properties of individual object classes, by solving a sequence of constrained parametric min-cut problems (CPMC) on a regular image grid. We then learn to rank the object hypotheses by training a continuous model to predict how plausible the segments are, given their mid-level region properties. We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC09 segmentation dataset. It achieves the same average best segmentation covering as the best performing technique to date [2], 0.61 when using just the top 7 ranked segments, instead of the full hierarchy in [2]. Our method achieves 0.78 average best covering using 154 segments. In a companion paper [18], we also show that the algorithm achieves state-of-the art results when used in a segmentation-based recognition pipeline.",
"",
"In this paper, we propose an approach to accurately localize detected objects. The goal is to predict which features pertain to the object and define the object extent with segmentation or bounding box. Our initial detector is a slight modification of the DPM detector by , which often reduces confusion with background and other objects but does not cover the full object. We then describe and evaluate several color models and edge cues for local predictions, and we propose two approaches for localization: learned graph cut segmentation and structural bounding box prediction. Our experiments on the PASCAL VOC 2010 dataset show that our approach leads to accurate pixel assignment and large improvement in bounding box overlap, sometimes leading to large overall improvement in detection accuracy.",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.",
"Template-based object detectors such as the deformable parts model of [11] achieve state-of-the-art performance for a variety of object categories, but are still outperformed by simpler bag-of-words models for highly flexible objects such as cats and dogs. In these cases we propose to use the template-based model to detect a distinctive part for the class, followed by detecting the rest of the object via segmentation on image specific information learnt from that part. This approach is motivated by two observations: (i) many object classes contain distinctive parts that can be detected very reliably by template-based detectors, whilst the entire object cannot; (ii) many classes (e.g. animals) have fairly homogeneous coloring and texture that can be used to segment the object once a sample is provided in an image. We show quantitatively that our method substantially outperforms whole-body template-based detectors for these highly deformable object categories, and indeed achieves accuracy comparable to the state-of-the-art on the PASCAL VOC competition, which includes other models such as bag-of-words.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"Feature extraction, coding and pooling, are important components on many contemporary object recognition paradigms. In this paper we explore novel pooling techniques that encode the second-order statistics of local descriptors inside a region. To achieve this effect, we introduce multiplicative second-order analogues of average and max-pooling that together with appropriate non-linearities lead to state-of-the-art performance on free-form region recognition, without any type of feature coding. Instead of coding, we found that enriching local descriptors with additional image information leads to large performance gains, especially in conjunction with the proposed pooling methodology. We show that second-order pooling over free-form regions produces results superior to those of the winning systems in the Pascal VOC 2011 semantic segmentation challenge, with models that are 20,000 times faster."
]
} |
1407.1808 | 2950612966 | We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top-down figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16% relative) over our baselines on SDS, a 5 point boost (10% relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work. | In many of these approaches, segmentation is used only to localize the detections better. Other authors have explored using segmentation as a stronger cue. Fidler et al. @cite_2 use the output of a state-of-the-art semantic segmentation approach @cite_15 to score detections better. Mottaghi @cite_19 uses detectors based on non-rectangular patches to both detect and segment objects. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_2"
],
"mid": [
"2112357820",
"78159342",
""
],
"abstract": [
"The performance of part-based object detectors generally degrades for highly flexible objects. The limited topological structure of models and pre-specified part shapes are two main factors preventing these detectors from fully capturing large deformations. To better capture the deformations, we propose a novel approach to integrate the detections from a family of part-based detectors with patches of objects that have irregular shape. This integration is formulated as MAP inference in a Conditional Random Field (CRF). The energy function defined over the CRF takes into account the information provided by an object patch classifier and the object detector, and the goal is to augment the partial detections with missing patches, and also to refine the detections that include background clutter. The proposed method is evaluated on the object detection task of PASCAL VOC. Our experimental results show significant improvement over a base part-based detector (which is among the current state-of-the-art methods) especially for the deformable object classes.",
"Feature extraction, coding and pooling, are important components on many contemporary object recognition paradigms. In this paper we explore novel pooling techniques that encode the second-order statistics of local descriptors inside a region. To achieve this effect, we introduce multiplicative second-order analogues of average and max-pooling that together with appropriate non-linearities lead to state-of-the-art performance on free-form region recognition, without any type of feature coding. Instead of coding, we found that enriching local descriptors with additional image information leads to large performance gains, especially in conjunction with the proposed pooling methodology. We show that second-order pooling over free-form regions produces results superior to those of the winning systems in the Pascal VOC 2011 semantic segmentation challenge, with models that are 20,000 times faster.",
""
]
} |
1407.1808 | 2950612966 | We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top-down figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16% relative) over our baselines on SDS, a 5 point boost (10% relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work. | The approaches above were typically built on features such as SIFT @cite_24 or HOG @cite_25 . Recently the computer vision community has shifted towards using convolutional neural networks (CNNs). CNNs have their roots in the Neocognitron proposed by Fukushima @cite_12 . Trained with the back-propagation algorithm, LeCun @cite_29 showed that they could be used for handwritten zip code recognition. They have since been used in a variety of tasks, including detection @cite_18 @cite_11 and semantic segmentation @cite_31 . Krizhevsky et al. @cite_28 showed a large increase in performance by using CNNs for classification in the ILSVRC challenge @cite_0 . Donahue et al. @cite_27 showed that Krizhevsky's architecture could be used as a generic feature extractor that did well across a wide variety of tasks. Girshick et al. @cite_23 build on this and fine-tune Krizhevsky's architecture for detection to nearly double the state-of-the-art performance. They use a simple pipeline, using CNNs to classify bounding box proposals from @cite_14 .
Our algorithm builds on this system, and on high quality region proposals from @cite_7 . | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_28",
"@cite_29",
"@cite_24",
"@cite_0",
"@cite_27",
"@cite_23",
"@cite_31",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"2949966521",
"1577168949",
"1991367009",
"",
"2147800946",
"2151103935",
"",
"2953360861",
"2102605133",
"2022508996",
"2161969291",
"2101926813",
"1487583988"
],
"abstract": [
"Pedestrian detection is a problem of considerable practical interest. Adding to the list of successful applications of deep learning methods to vision, we report state-of-the-art and competitive results on all major pedestrian datasets with a convolutional network model. The model uses a few new twists, such as multi-stage features, connections that skip layers to integrate global shape information with local distinctive motif information, and an unsupervised method based on convolutional sparse coding to pre-train the filters at each stage.",
"",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.",
"",
"The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"",
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by “learning without a teacher”, and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes without being affected by their positions. This network is given a nickname “neocognitron”. After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel. The network consists of an input layer (photoreceptor array) followed by a cascade connection of a number of modular structures, each of which is composed of two layers of cells connected in a cascade. The first layer of each module consists of “S-cells”, which show characteristics similar to simple cells or lower order hypercomplex cells, and the second layer consists of “C-cells” similar to complex cells or higher order hypercomplex cells. The afferent synapses to each S-cell have plasticity and are modifiable. The network has an ability of unsupervised learning: We do not need any “teacher” during the process of self-organization, and it is only needed to present a set of stimulus patterns repeatedly to the input layer of the network. The network has been simulated on a digital computer. After repetitive presentation of a set of stimulus patterns, each stimulus pattern has come to elicit an output only from one of the C-cells of the last layer, and conversely, this C-cell has become selectively responsive only to that stimulus pattern. That is, none of the C-cells of the last layer responds to more than one stimulus pattern. The response of the C-cells of the last layer is not affected by the pattern's position at all. Neither is it affected by a small change in shape nor in size of the stimulus pattern.",
"We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat."
]
} |
1407.1120 | 2949238035 | Representing images and videos with Symmetric Positive Definite (SPD) matrices and considering the Riemannian geometry of the resulting space has proven beneficial for many recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices --especially of high-dimensional ones-- comes at a high cost that limits the applicability of existing techniques. In this paper we introduce an approach that lets us handle high-dimensional SPD matrices by constructing a lower-dimensional, more discriminative SPD manifold. To this end, we model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. In particular, we search for a projection that yields a low-dimensional manifold with maximum discriminative power encoded via an affinity-weighted similarity measure based on metrics on the manifold. Learning can then be expressed as an optimization problem on a Grassmann manifold. Our evaluation on several classification tasks shows that our approach leads to a significant accuracy gain over state-of-the-art methods. | Principal Geodesic Analysis (PGA) was introduced in @cite_1 as a generalization of Principal Component Analysis (PCA) to Riemannian manifolds. PGA identifies the tangent space whose corresponding subspace maximizes the variability of the data on the manifold. PGA, however, is equivalent to flattening the Riemannian manifold by taking its tangent space at the Karcher, or Fréchet, mean of the data. As such, it does not fully exploit the structure of the manifold. Furthermore, PGA, like PCA, cannot exploit the availability of class labels, and may therefore be sub-optimal for classification. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2125949583"
],
"abstract": [
"A primary goal of statistical shape analysis is to describe the variability of a population of geometric objects. A standard technique for computing such descriptions is principal component analysis. However, principal component analysis is limited in that it only works for data lying in a Euclidean vector space. While this is certainly sufficient for geometric models that are parameterized by a set of landmarks or a dense collection of boundary points, it does not handle more complex representations of shape. We have been developing representations of geometry based on the medial axis description or m-rep. While the medial representation provides a rich language for variability in terms of bending, twisting, and widening, the medial parameters are not elements of a Euclidean vector space. They are in fact elements of a nonlinear Riemannian symmetric space. In this paper, we develop the method of principal geodesic analysis, a generalization of principal component analysis to the manifold setting. We demonstrate its use in describing the variability of medially-defined anatomical objects. Results of applying this framework on a population of hippocampi in a schizophrenia study are presented."
]
} |
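The log-map-then-PCA idea behind PGA can be sketched in a few lines. This is a simplified, log-Euclidean stand-in, not the exact algorithm of the cited paper: true PGA takes the tangent space at the Karcher mean of the data, whereas this sketch uses the tangent space at the identity, and the function name is our own.

```python
import numpy as np
from scipy.linalg import logm


def log_euclidean_pga(spd_mats, n_components=2):
    """Flatten SPD matrices via the matrix log, then run ordinary PCA.

    A simplified stand-in for PGA: each SPD matrix is mapped to the
    tangent space at the identity (true PGA would use the Karcher mean).
    """
    # Matrix log of an SPD matrix is symmetric; flatten it to a vector.
    logs = np.array([logm(S).real.ravel() for S in spd_mats])
    mean = logs.mean(axis=0)
    centered = logs - mean
    # PCA via SVD of the centered tangent vectors.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    return centered @ components.T, components, mean
```

As the related-work text notes, everything after the `logm` call happens in a single flat tangent space, which is exactly why PGA can distort data that is spread out on the manifold.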
1407.1120 | 2949238035 | Representing images and videos with Symmetric Positive Definite (SPD) matrices and considering the Riemannian geometry of the resulting space has proven beneficial for many recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices --especially of high-dimensional ones-- comes at a high cost that limits the applicability of existing techniques. In this paper we introduce an approach that lets us handle high-dimensional SPD matrices by constructing a lower-dimensional, more discriminative SPD manifold. To this end, we model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. In particular, we search for a projection that yields a low-dimensional manifold with maximum discriminative power encoded via an affinity-weighted similarity measure based on metrics on the manifold. Learning can then be expressed as an optimization problem on a Grassmann manifold. Our evaluation on several classification tasks shows that our approach leads to a significant accuracy gain over state-of-the-art methods. | In @cite_14 , the Covariance Discriminative Learning (CDL) algorithm was proposed to embed the SPD manifold into a Euclidean space. In contrast to PGA, CDL utilizes class labels to learn a discriminative subspace using Partial Least Squares (PLS) or Linear Discriminant Analysis (LDA). However, CDL relies on mapping the SPD manifold to the space of symmetric matrices via the principal matrix logarithm. While this embedding has some nice properties (e.g., being a diffeomorphism), it can also be thought of as embedding the SPD manifold into its tangent space at the identity matrix. Therefore, although supervised, CDL, like PGA, exploits data potentially distorted by the use of a single tangent space. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2144093206"
],
"abstract": [
"We propose a novel discriminative learning approach to image set classification by modeling the image set with its natural second-order statistic, i.e. covariance matrix. Since nonsingular covariance matrices, a.k.a. symmetric positive definite (SPD) matrices, lie on a Riemannian manifold, classical learning algorithms cannot be directly utilized to classify points on the manifold. By exploring an efficient metric for the SPD matrices, i.e., Log-Euclidean Distance (LED), we derive a kernel function that explicitly maps the covariance matrix from the Riemannian manifold to a Euclidean space. With this explicit mapping, any learning method devoted to vector space can be exploited in either its linear or kernel formulation. Linear Discriminant Analysis (LDA) and Partial Least Squares (PLS) are considered in this paper for their feasibility for our specific problem. We further investigate the conventional linear subspace based set modeling technique and cast it in a unified framework with our covariance matrix based modeling. The proposed method is evaluated on two tasks: face recognition and object categorization. Extensive experimental results show not only the superiority of our method over state-of-the-art ones in both accuracy and efficiency, but also its stability to two real challenges: noisy set data and varying set size."
]
} |
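The explicit log-Euclidean embedding that CDL builds on can be illustrated with a short sketch (the helper name is our own). Scaling the off-diagonal entries by sqrt(2) makes Euclidean distances between the resulting vectors equal the Log-Euclidean distance between the original matrices:

```python
import numpy as np
from scipy.linalg import logm


def cdl_embed(spd_mats):
    """Map SPD matrices to Euclidean vectors via the principal matrix log."""
    vecs = []
    for S in spd_mats:
        L = logm(S).real                 # symmetric matrix in the tangent space at I
        iu = np.triu_indices_from(L)     # keep only the upper triangle
        v = L[iu].copy()
        off = iu[0] != iu[1]
        v[off] *= np.sqrt(2.0)           # so ||v1 - v2|| == ||logm(S1) - logm(S2)||_F
        vecs.append(v)
    return np.array(vecs)
```

Any vector-space learner (LDA, PLS, a linear SVM) can then be applied to the rows of the returned array, which is the essence of the CDL pipeline described above.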
1407.1120 | 2949238035 | Representing images and videos with Symmetric Positive Definite (SPD) matrices and considering the Riemannian geometry of the resulting space has proven beneficial for many recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices --especially of high-dimensional ones-- comes at a high cost that limits the applicability of existing techniques. In this paper we introduce an approach that lets us handle high-dimensional SPD matrices by constructing a lower-dimensional, more discriminative SPD manifold. To this end, we model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. In particular, we search for a projection that yields a low-dimensional manifold with maximum discriminative power encoded via an affinity-weighted similarity measure based on metrics on the manifold. Learning can then be expressed as an optimization problem on a Grassmann manifold. Our evaluation on several classification tasks shows that our approach leads to a significant accuracy gain over state-of-the-art methods. | Finally, in @cite_7 , several Nonlinear Dimensionality Reduction techniques were extended to their Riemannian counterparts. This was achieved by introducing various Riemannian geometry concepts, such as Karcher mean, tangent spaces and geodesics, in Locally Linear Embedding (LLE), Hessian LLE and Laplacian Eigenmaps. The resulting algorithms were applied to several unsupervised clustering tasks. Although these methods can, in principle, be employed for supervised classification, they are limited to the transductive setting since they do not define any parametric mapping to the low-dimensional space. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2096484739"
],
"abstract": [
"We propose a novel algorithm for clustering data sampled from multiple submanifolds of a Riemannian manifold. First, we learn a representation of the data using generalizations of local nonlinear dimensionality reduction algorithms from Euclidean to Riemannian spaces. Such generalizations exploit geometric properties of the Riemannian space, particularly its Riemannian metric. Then, assuming that the data points from different groups are separated, we show that the null space of a matrix built from the local representation gives the segmentation of the data. Our method is computationally simple and performs automatic segmentation without requiring user initialization. We present results on 2-D motion segmentation and diffusion tensor imaging segmentation."
]
} |
1407.1120 | 2949238035 | Representing images and videos with Symmetric Positive Definite (SPD) matrices and considering the Riemannian geometry of the resulting space has proven beneficial for many recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices --especially of high-dimensional ones-- comes at a high cost that limits the applicability of existing techniques. In this paper we introduce an approach that lets us handle high-dimensional SPD matrices by constructing a lower-dimensional, more discriminative SPD manifold. To this end, we model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. In particular, we search for a projection that yields a low-dimensional manifold with maximum discriminative power encoded via an affinity-weighted similarity measure based on metrics on the manifold. Learning can then be expressed as an optimization problem on a Grassmann manifold. Our evaluation on several classification tasks shows that our approach leads to a significant accuracy gain over state-of-the-art methods. | To the best of our knowledge, this is the first work that shows how a high-dimensional SPD manifold can be transformed into another SPD manifold with lower intrinsic dimension. Note that a related idea, but with a very different approach, was introduced in @cite_17 to decompose high-dimensional spheres into submanifolds of decreasing dimensionality. | {
"cite_N": [
"@cite_17"
],
"mid": [
"1966521860"
],
"abstract": [
"A general framework for a novel non-geodesic decomposition of high-dimensional spheres or high-dimensional shape spaces for planar landmarks is discussed. The decomposition, principal nested spheres, leads to a sequence of submanifolds with decreasing intrinsic dimensions, which can be interpreted as an analogue of principal component analysis. In a number of real datasets, an apparent one-dimensional mode of variation curving through more than one geodesic component is captured in the one-dimensional component of principal nested spheres. While analysis of principal nested spheres provides an intuitive and flexible decomposition of the high-dimensional sphere, an interesting special case of the analysis results in finding principal geodesics, similar to those from previous approaches to manifold principal component analysis. An adaptation of our method to Kendall's shape space is discussed, and a computational algorithm for fitting principal nested spheres is proposed. The result provides a coordinate system to visualize the data structure and an intuitive summary of principal modes of variation, as exemplified by several datasets. Copyright 2012, Oxford University Press."
]
} |
1407.1428 | 2952911535 | Two mobile agents, starting from different nodes of an @math -node network at possibly different times, have to meet at the same node. This problem is known as rendezvous. Agents move in synchronous rounds using a deterministic algorithm. In each round, an agent decides to either remain idle or to move to one of the adjacent nodes. Each agent has a distinct integer label from the set @math , which it can use in the execution of the algorithm, but it does not know the label of the other agent. If @math is the distance between the initial positions of the agents, then @math is an obvious lower bound on the time of rendezvous. However, if each agent has no initial knowledge other than its label, time @math is usually impossible to achieve. We study the minimum amount of information that has to be available a priori to the agents to achieve rendezvous in optimal time @math . This information is provided to the agents at the start by an oracle knowing the entire instance of the problem, i.e., the network, the starting positions of the agents, their wake-up rounds, and both of their labels. The oracle helps the agents by providing them with the same binary string called advice, which can be used by the agents during their navigation. The length of this string is called the size of advice. Our goal is to find the smallest size of advice which enables the agents to meet in time @math . We show that this optimal size of advice is @math . The upper bound is proved by constructing an advice string of this size, and providing a natural rendezvous algorithm using this advice that works in time @math for all networks. The matching lower bound, which is the main contribution of this paper, is proved by exhibiting classes of networks for which it is impossible to achieve rendezvous in time @math with smaller advice. | The problem of rendezvous has been studied both under randomized and deterministic scenarios. An extensive survey of randomized rendezvous in various models can be found in @cite_20 , cf. also @cite_24 @cite_6 @cite_40 @cite_29 . Deterministic rendezvous in networks has been surveyed in @cite_37 . Several authors considered geometric scenarios (rendezvous in an interval of the real line, e.g., @cite_29 @cite_21 , or in the plane, e.g., @cite_13 @cite_33 ). Gathering more than two agents was studied, e.g., in @cite_12 @cite_3 @cite_8 . | {
"cite_N": [
"@cite_37",
"@cite_33",
"@cite_8",
"@cite_29",
"@cite_21",
"@cite_6",
"@cite_3",
"@cite_24",
"@cite_40",
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"1988551609",
"2014192080",
"1975690872",
"1971467559",
"2030068303",
"2071069929",
"2007267487",
"2040570464",
"2103469743",
"2096182510",
"2010017329",
"1501957312"
],
"abstract": [
"Two or more mobile entities, called agents or robots, starting at distinct initial positions, have to meet. This task is known in the literature as rendezvous. Among many alternative assumptions that have been used to study the rendezvous problem, two most significantly influence the methodology appropriate for its solution. The first of these assumptions concerns the environment in which the mobile entities navigate: it can be either a terrain in the plane, or a network modeled as an undirected graph. The second assumption concerns the way in which the entities move: it can be either deterministic or randomized. In this article, we survey results on deterministic rendezvous in networks. © 2012 Wiley Periodicals, Inc. NETWORKS, 2012",
"We consider rendezvous problems in which two players move on the plane and wish to cooperate to minimise their first meeting time. We begin by considering the case where both players are placed such that the vector difference is chosen equiprobably from a finite set. We also consider a situation in which they know they are a distance d apart, but they do not know the direction of the other player. Finally, we give some results for the case in which player 1 knows the initial position of player 2, while player 2 is given information only on the initial distance of player 1.",
"If two searchers are searching for a stationary target and wish to minimize the expected time until both searchers and the lost target are reunited, there is a trade off between searching for the target and checking back to see if the other searcher has already found the target. This note solves a non-linear optimization problem to find the optimal search strategy for this problem.",
"Two players A and B are randomly placed on a line. The distribution of the distance between them is unknown except that the expected initial distance of the (two) players does not exceed some constant @math . The players can move with maximal velocity 1 and would like to meet one another as soon as possible. Most of the paper deals with the asymmetric rendezvous in which each player can use a different trajectory. We find rendezvous trajectories which are efficient against all probability distributions in the above class. (It turns out that our trajectories do not depend on the value of @math .) We also obtain the minimax trajectory of player A if player B just waits for him. This trajectory oscillates with a geometrically increasing amplitude. It guarantees an expected meeting time not exceeding @math . We show that, if player B also moves, then the expected meeting time can be reduced to @math . The expected meeting time can be further reduced if the players use mixed strategies. We show that if player B rests, then the optimal strategy of player A is a mixture of geometric trajectories. It guarantees an expected meeting time not exceeding @math . This value can be reduced even more (below @math ) if player B also moves according to a (correlated) mixed strategy. We also obtain a bound for the expected meeting time of the corresponding symmetric rendezvous problem.",
"Leaving marks at the starting points in a rendezvous search problem may provide the players with important information. Many of the standard rendezvous search problems are investigated under this new framework which we call markstart rendezvous search. Somewhat surprisingly, the relative difficulties of analysing problems in the two scenarios differ from problem to problem. Symmetric rendezvous on the line seems to be more tractable in the new setting whereas asymmetric rendezvous on the line when the initial distance is chosen by means of a convex distribution appears easier to analyse in the original setting. Results are also obtained for markstart rendezvous on complete graphs and on the line when the players' initial distance is given by an unknown probability distribution. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 722–731, 2001",
"Two players are independently placed on a commonly labelled network X. They cannot see each other but wish to meet in least expected time. We consider continuous and discrete versions, in which they may move at unit speed or between adjacent distinct nodes, respectively. There are two versions of the problem (asymmetric or symmetric), depending on whether or not we allow the players to use different strategies. After obtaining some optimality conditions for general networks, we specialize to the interval and circle networks. In the first setting, we extend the work of J. V. Howard; in the second we prove a conjecture concerning the optimal symmetric strategy. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 256–274, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10011",
"Suppose that @math players are placed randomly on the real line at consecutive integers, and faced in random directions. Each player has maximum speed one, cannot see the others, and doesn't know his relative position. What is the minimum time @math required to ensure that all the players can meet together at a single point, regardless of their initial placement? We prove that @math , @math , and @math is asymptotic to @math . We also consider a variant of the problem which requires players who meet to stick together, and find in this case that three players require @math time units to ensure a meeting. This paper is thus a minimax version of the rendezvous search problem, which has hitherto been studied only in terms of minimizing the expected meeting time.",
"The author considers the problem faced by two people who are placed randomly in a known search region and move about at unit speed to find each other in the least expected time. This time is called the rendezvous value of the region. It is shown how symmetries in the search region may hinder the process by preventing coordination based on concepts such as north or clockwise. A general formulation of the rendezvous search problem is given for a compact metric space endowed with a group of isometries which represents the spatial uncertainties of the players. These concepts are illustrated by considering upper bounds for various rendezvous values for the circle and an arbitrary metric network. The discrete rendezvous problem on a cycle graph for players restricted to symmetric Markovian strategies is then solved. Finally, the author considers the problem faced by two people on an infinite line who each know the distribution of the distance but not the direction to each other.",
"Two friends have become separated in a building or shopping mall and wish to meet as quickly as possible. There are n possible locations where they might meet. However, the locations are identical and there has been no prior agreement where to meet or how to search. Hence they must use identical strategies and must treat all locations in a symmetrical fashion. Suppose their search proceeds in discrete time. Since they wish to avoid the possibility of never meeting, they will wish to use some randomizing strategy. If each person searches one of the n locations at random at each step, then rendezvous will require n steps on average. It is possible to do better than this: although the optimal strategy is difficult to characterize for general n, there is a strategy with an expected time until rendezvous of less than 0.829 n for large enough n. For n = 2 and 3 the optimal strategy can be established and on average 2 and 8/3 steps are required respectively. There are many tantalizing variations on this problem, which we discuss with some conjectures. DYNAMIC PROGRAMMING; SEARCH PROBLEMS",
"",
"In this paper we study the problem of gathering a collection of identical oblivious mobile robots in the same location of the plane. Previous investigations have focused mostly on the unlimited visibility setting, where each robot can always see all the others regardless of their distance.In the more difficult and realistic setting where the robots have limited visibility, the existing algorithmic results are only for convergence (towards a common point, without ever reaching it) and only for semi-synchronous environments, where robots' movements are assumed to be performed instantaneously.In contrast, we study this problem in a totally asynchronous setting, where robots' actions, computations, and movements require a finite but otherwise unpredictable amount of time. We present a protocol that allows anonymous oblivious robots with limited visibility to gather in the same location in finite time, provided they have orientation (i.e., agreement on a coordinate system).Our result indicates that, with respect to gathering, orientation is at least as powerful as instantaneous movements.",
"Search Theory is one of the original disciplines within the field of Operations Research. It deals with the problem faced by a Searcher who wishes to minimize the time required to find a hidden object, or “target.” The Searcher chooses a path in the “search space” and finds the target when he is sufficiently close to it. Traditionally, the target is assumed to have no motives of its own regarding when it is found; it is simply stationary and hidden according to a known distribution (e.g., oil), or its motion is determined stochastically by known rules (e.g., a fox in a forest). The problems dealt with in this book assume, on the contrary, that the “target” is an independent player of equal status to the Searcher, who cares about when he is found. We consider two possible motives of the target, and divide the book accordingly. Book I considers the zero-sum game that results when the target (here called the Hider) does not want to be found. Such problems have been called Search Games (with the “zero-sum” qualifier understood). Book II considers the opposite motive of the target, namely, that he wants to be found. In this case the Searcher and the Hider can be thought of as a team of agents (simply called Player I and Player II) with identical aims, and the coordination problem they jointly face is called the Rendezvous Search Problem."
]
} |
1407.1428 | 2952911535 | Two mobile agents, starting from different nodes of an @math -node network at possibly different times, have to meet at the same node. This problem is known as rendezvous. Agents move in synchronous rounds using a deterministic algorithm. In each round, an agent decides to either remain idle or to move to one of the adjacent nodes. Each agent has a distinct integer label from the set @math , which it can use in the execution of the algorithm, but it does not know the label of the other agent. If @math is the distance between the initial positions of the agents, then @math is an obvious lower bound on the time of rendezvous. However, if each agent has no initial knowledge other than its label, time @math is usually impossible to achieve. We study the minimum amount of information that has to be available a priori to the agents to achieve rendezvous in optimal time @math . This information is provided to the agents at the start by an oracle knowing the entire instance of the problem, i.e., the network, the starting positions of the agents, their wake-up rounds, and both of their labels. The oracle helps the agents by providing them with the same binary string called advice, which can be used by the agents during their navigation. The length of this string is called the size of advice. Our goal is to find the smallest size of advice which enables the agents to meet in time @math . We show that this optimal size of advice is @math . The upper bound is proved by constructing an advice string of this size, and providing a natural rendezvous algorithm using this advice that works in time @math for all networks. The matching lower bound, which is the main contribution of this paper, is proved by exhibiting classes of networks for which it is impossible to achieve rendezvous in time @math with smaller advice. | For the deterministic setting, many authors studied the feasibility and time complexity of rendezvous. For instance, deterministic rendezvous of agents that are equipped with tokens used to mark nodes was considered, e.g., in @cite_32 . Most relevant to our work are the results about deterministic rendezvous in arbitrary graphs, when the two agents cannot mark nodes, but have unique labels @cite_22 @cite_16 @cite_43 . In @cite_22 , the authors present a rendezvous algorithm whose running time is polynomial in the size of the graph, in the length of the shorter label and in the delay between the starting times of the agents. In @cite_16 @cite_43 , rendezvous time is polynomial in the first two of these parameters and independent of the delay. | {
"cite_N": [
"@cite_43",
"@cite_22",
"@cite_16",
"@cite_32"
],
"mid": [
"2623545498",
"",
"2138881680",
"2131887891"
],
"abstract": [
"We obtain several improved solutions for the deterministic rendezvous problem in general undirected graphs. Our solutions answer several problems left open in a recent paper by We also introduce an interesting variant of the rendezvous problem which we call the deterministic treasure hunt problem. Both the rendezvous and the treasure hunt problems motivate the study of universal traversal sequences and universal exploration sequences with some strengthened properties. We call such sequences strongly universal traversal (exploration) sequences. We give an explicit construction of strongly universal exploration sequences. The existence of strongly universal traversal sequences, as well as the solution of the most difficult variant of the deterministic treasure hunt problem, are left as intriguing open problems.",
"",
"A set of k mobile agents with distinct identifiers and located in nodes of an unknown anonymous connected network, have to meet at some node. We show that this gathering problem is no harder than its special case for k=2, called the rendezvous problem, and design deterministic protocols solving the rendezvous problem with arbitrary startups in rings and in general networks. The measure of performance is the number of steps since the startup of the last agent until the rendezvous is achieved. For rings we design an oblivious protocol with cost O([email protected]?), where n is the size of the network and @? is the minimum label of participating agents. This result is asymptotically optimal due to the lower bound showed by [A. Dessmark, P. Fraigniaud, D. Kowalski, A. Pelc, Deterministic rendezvous in graphs, Algorithmica 46 (2006) 69-96]. For general networks we show a protocol with cost polynomial in n and [email protected]?, independent of the maximum difference @t of startup times, which answers in the affirmative the open question by [A. Dessmark, P. Fraigniaud, D. Kowalski, A. Pelc, Deterministic rendezvous in graphs, Algorithmica 46 (2006) 69-96].",
"In the rendezvous search problem, two mobile agents must move along the n nodes of a network so as to minimize the time required to meet or rendezvous. When the mobile agents are identical and the network is anonymous, however, the resulting symmetry can make the problem impossible to solve. Symmetry is typically broken by having the mobile agents run either a randomized algorithm or different deterministic algorithms. We investigate the use of identical tokens to break symmetry so that the two mobile agents can run the same deterministic algorithm. After deriving the explicit conditions under which identical tokens can be used to break symmetry on the n node ring, we derive the lower and upper bounds for the time and memory complexity of the rendezvous search problem with various parameter sets. While these results suggest a possible tradeoff between the mobile agents' memory and the time complexity of the rendezvous search problem, we prove that this tradeoff is limited."
]
} |
1407.1428 | 2952911535 | Two mobile agents, starting from different nodes of an @math -node network at possibly different times, have to meet at the same node. This problem is known as rendezvous. Agents move in synchronous rounds using a deterministic algorithm. In each round, an agent decides to either remain idle or to move to one of the adjacent nodes. Each agent has a distinct integer label from the set @math , which it can use in the execution of the algorithm, but it does not know the label of the other agent. If @math is the distance between the initial positions of the agents, then @math is an obvious lower bound on the time of rendezvous. However, if each agent has no initial knowledge other than its label, time @math is usually impossible to achieve. We study the minimum amount of information that has to be available a priori to the agents to achieve rendezvous in optimal time @math . This information is provided to the agents at the start by an oracle knowing the entire instance of the problem, i.e., the network, the starting positions of the agents, their wake-up rounds, and both of their labels. The oracle helps the agents by providing them with the same binary string called advice, which can be used by the agents during their navigation. The length of this string is called the size of advice. Our goal is to find the smallest size of advice which enables the agents to meet in time @math . We show that this optimal size of advice is @math . The upper bound is proved by constructing an advice string of this size, and providing a natural rendezvous algorithm using this advice that works in time @math for all networks. The matching lower bound, which is the main contribution of this paper, is proved by exhibiting classes of networks for which it is impossible to achieve rendezvous in time @math with smaller advice. | Memory required by the agents to achieve deterministic rendezvous was studied in @cite_14 for trees and in @cite_0 for general graphs. Memory needed for randomized rendezvous in the ring is discussed, e.g., in @cite_28 . | {
"cite_N": [
"@cite_0",
"@cite_28",
"@cite_14"
],
"mid": [
"1972775782",
"1535538796",
"2081055073"
],
"abstract": [
"Two identical (anonymous) mobile agents start from arbitrary nodes in an a priori unknown graph and move synchronously from node to node with the goal of meeting. This rendezvous problem has been thoroughly studied, both for anonymous and for labeled agents, along with another basic task, that of exploring graphs by mobile agents. The rendezvous problem is known to be not easier than graph exploration. A well-known recent result on exploration, due to Reingold, states that deterministic exploration of arbitrary graphs can be performed in log-space, i.e., using an agent equipped with O(log n) bits of memory, where n is the size of the graph. In this paper we study the size of memory of mobile agents that permits us to solve the rendezvous problem deterministically. Our main result establishes the minimum size of the memory of anonymous agents that guarantees deterministic rendezvous when it is feasible. We show that this minimum size is Θ(log n), where n is the size of the graph, regardless of the delay between the starting times of the agents. More precisely, we construct identical agents equipped with Θ(log n) memory bits that solve the rendezvous problem in all graphs with at most n nodes, if they start with any delay τ, and we prove a matching lower bound Ω(log n) on the number of memory bits needed to accomplish rendezvous, even for simultaneous start. In fact, this lower bound is achieved already on the class of rings. This shows a significant contrast between rendezvous and exploration: e.g., while exploration of rings (without stopping) can be done using constant memory, rendezvous, even with simultaneous start, requires logarithmic memory. As a by-product of our techniques introduced to obtain log-space rendezvous we get the first algorithm to find a quotient graph of a given unlabeled graph in polynomial time, by means of a mobile agent moving around the graph.",
"We present a tradeoff between the expected time for two identical agents to rendez-vous on a synchronous, anonymous, oriented ring and the memory requirements of the agents. In particular, we show that there exists a 2t state agent, which can achieve rendez-vous on an n node ring in expected time O(n^2/2^t + 2^t) and that any t/2 state agent requires expected time Ω(n^2/2^t). As a corollary we observe that Θ(log log n) bits of memory are necessary and sufficient to achieve rendezvous in linear time.",
"The aim of rendezvous in a graph is meeting of two mobile agents at some node of an unknown anonymous connected graph. In this article, we focus on rendezvous in trees, and, analogously to the efforts that have been made for solving the exploration problem with compact automata, we study the size of memory of mobile agents that permits to solve the rendezvous problem deterministically. We assume that the agents are identical, and move in synchronous rounds. We first show that if the delay between the starting times of the agents is arbitrary, then the lower bound on memory required for rendezvous is Ω(log n) bits, even for the line of length n. This lower bound meets a previously known upper bound of O(log n) bits for rendezvous in arbitrary graphs of size at most n. Our main result is a proof that the amount of memory needed for rendezvous with simultaneous start depends essentially on the number e of leaves of the tree, and is exponentially less impacted by the number n of nodes. Indeed, we present two identical agents with O(log e + log log n) bits of memory that solve the rendezvous problem in all trees with at most n nodes and at most e leaves. Hence, for the class of trees with polylogarithmically many leaves, there is an exponential gap in minimum memory size needed for rendezvous between the scenario with arbitrary delay and the scenario with delay zero. Moreover, we show that our upper bound is optimal by proving that Ω(log e + log log n) bits of memory are required for rendezvous, even in the class of trees with degrees bounded by 3."
]
} |
1407.1428 | 2952911535 | Two mobile agents, starting from different nodes of an @math -node network at possibly different times, have to meet at the same node. This problem is known as rendezvous. Agents move in synchronous rounds using a deterministic algorithm. In each round, an agent decides to either remain idle or to move to one of the adjacent nodes. Each agent has a distinct integer label from the set @math , which it can use in the execution of the algorithm, but it does not know the label of the other agent. If @math is the distance between the initial positions of the agents, then @math is an obvious lower bound on the time of rendezvous. However, if each agent has no initial knowledge other than its label, time @math is usually impossible to achieve. We study the minimum amount of information that has to be available a priori to the agents to achieve rendezvous in optimal time @math . This information is provided to the agents at the start by an oracle knowing the entire instance of the problem, i.e., the network, the starting positions of the agents, their wake-up rounds, and both of their labels. The oracle helps the agents by providing them with the same binary string called advice, which can be used by the agents during their navigation. The length of this string is called the size of advice. Our goal is to find the smallest size of advice which enables the agents to meet in time @math . We show that this optimal size of advice is @math . The upper bound is proved by constructing an advice string of this size, and providing a natural rendezvous algorithm using this advice that works in time @math for all networks. The matching lower bound, which is the main contribution of this paper, is proved by exhibiting classes of networks for which it is impossible to achieve rendezvous in time @math with smaller advice. | Apart from the synchronous model used in this paper, several authors investigated asynchronous rendezvous in the plane @cite_42 @cite_12 and in network environments @cite_26 @cite_1 @cite_34 @cite_18 . In the latter scenario, the agent chooses the edge to traverse, but the adversary controls the speed of the agent. Under this assumption, rendezvous at a node cannot be guaranteed even in very simple graphs. Hence the rendezvous requirement is relaxed to permit the agents to meet inside an edge. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_42",
"@cite_1",
"@cite_34",
"@cite_12"
],
"mid": [
"2131588774",
"",
"2087073465",
"2100580556",
"2050063944",
"2010017329"
],
"abstract": [
"Two mobile agents starting at different nodes of an unknown network have to meet. This task is known in the literature as rendezvous. Each agent has a different label which is a positive integer known to it, but unknown to the other agent. Agents move in an asynchronous way: the speed of agents may vary and is controlled by an adversary. The cost of a rendezvous algorithm is the total number of edge traversals by both agents until their meeting. The only previous deterministic algorithm solving this problem has cost exponential in the size of the graph and in the larger label. In this paper we present a deterministic rendezvous algorithm with cost polynomial in the size of the graph and in the length of the smaller label. Hence we decrease the cost exponentially in the size of the graph and doubly exponentially in the labels of agents. As an application of our rendezvous algorithm we solve several fundamental problems involving teams of unknown size larger than 1 of labeled agents moving asynchronously in unknown networks. Among them are the following problems: team size, in which every agent has to find the total number of agents, leader election, in which all agents have to output the label of a single agent, perfect renaming in which all agents have to adopt new different labels from the set 1,...,k , where k is the number of agents, and gossiping, in which each agent has initially a piece of information (value) and all agents have to output all the values. Using our rendezvous algorithm we solve all these problems at cost polynomial in the size of the graph and in the smallest length of all labels of participating agents.",
"",
"Consider a set of @math identical mobile computational entities in the plane, called robots, operating in Look-Compute-Move cycles, without any means of direct communication. The Gathering Problem is the primitive task of all entities gathering in finite time at a point not fixed in advance, without any external control. The problem has been extensively studied in the literature under a variety of strong assumptions (e.g., synchronicity of the cycles, instantaneous movements, complete memory of the past, common coordinate system, etc.). In this paper we consider the setting without those assumptions, that is, when the entities are oblivious (i.e., they do not remember results and observations from previous cycles), disoriented (i.e., have no common coordinate system), and fully asynchronous (i.e., no assumptions exist on timing of cycles and activities within a cycle). The existing algorithmic contributions for such robots are limited to solutions for @math or for restricted sets of initial configura...",
"Two mobile agents (robots) with distinct labels have to meet in an arbitrary, possibly infinite, unknown connected graph or in an unknown connected terrain in the plane. Agents are modeled as points, and the route of each of them only depends on its label and on the unknown environment. The actual walk of each agent also depends on an asynchronous adversary that may arbitrarily vary the speed of the agent, stop it, or even move it back and forth, as long as the walk of the agent in each segment of its route is continuous, does not leave it and covers all of it. Meeting in a graph means that both agents must be at the same time in some node or in some point inside an edge of the graph, while meeting in a terrain means that both agents must be at the same time in some point of the terrain. Does there exist a deterministic algorithm that allows any two agents to meet in any unknown environment in spite of this very powerful adversary? We give deterministic rendezvous algorithms for agents starting at arbitrary nodes of any anonymous connected graph (finite or infinite) and for agents starting at any interior points with rational coordinates in any closed region of the plane with path-connected interior. While our algorithms work in a very general setting - agents can, indeed, meet almost everywhere - we show that none of the above few limitations imposed on the environment can be removed. On the other hand, our algorithm also guarantees the following approximate rendezvous for agents starting at arbitrary interior points of a terrain as above: agents will eventually get at an arbitrarily small positive distance from each other.",
"Two mobile agents (robots) having distinct labels and located in nodes of an unknown anonymous connected graph have to meet. We consider the asynchronous version of this well-studied rendezvous problem and we seek fast deterministic algorithms for it. Since in the asynchronous setting, meeting at a node, which is normally required in rendezvous, is in general impossible, we relax the demand by allowing meeting of the agents inside an edge as well. The measure of performance of a rendezvous algorithm is its cost: for a given initial location of agents in a graph, this is the number of edge traversals of both agents until rendezvous is achieved. If agents are initially situated at a distance D in an infinite line, we show a rendezvous algorithm with cost O(D|Lmin|²) when D is known and O((D + |Lmax|)³) if D is unknown, where |Lmin| and |Lmax| are the lengths of the shorter and longer label of the agents, respectively. These results still hold for the case of the ring of unknown size, but then we also give an optimal algorithm of cost O(n|Lmin|), if the size n of the ring is known, and of cost O(n|Lmax|), if it is unknown. For arbitrary graphs, we show that rendezvous is feasible if an upper bound on the size of the graph is known and we give an optimal algorithm of cost O(D|Lmin|) if the topology of the graph and the initial positions are known to agents.",
"In this paper we study the problem of gathering a collection of identical oblivious mobile robots in the same location of the plane. Previous investigations have focused mostly on the unlimited visibility setting, where each robot can always see all the others regardless of their distance.In the more difficult and realistic setting where the robots have limited visibility, the existing algorithmic results are only for convergence (towards a common point, without ever reaching it) and only for semi-synchronous environments, where robots' movements are assumed to be performed instantaneously.In contrast, we study this problem in a totally asynchronous setting, where robots' actions, computations, and movements require a finite but otherwise unpredictable amount of time. We present a protocol that allows anonymous oblivious robots with limited visibility to gather in the same location in finite time, provided they have orientation (i.e., agreement on a coordinate system).Our result indicates that, with respect to gathering, orientation is at least as powerful as instantaneous movements."
]
} |
1407.1428 | 2952911535 | Two mobile agents, starting from different nodes of an @math -node network at possibly different times, have to meet at the same node. This problem is known as rendezvous. Agents move in synchronous rounds using a deterministic algorithm. In each round, an agent decides to either remain idle or to move to one of the adjacent nodes. Each agent has a distinct integer label from the set @math , which it can use in the execution of the algorithm, but it does not know the label of the other agent. If @math is the distance between the initial positions of the agents, then @math is an obvious lower bound on the time of rendezvous. However, if each agent has no initial knowledge other than its label, time @math is usually impossible to achieve. We study the minimum amount of information that has to be available a priori to the agents to achieve rendezvous in optimal time @math . This information is provided to the agents at the start by an oracle knowing the entire instance of the problem, i.e., the network, the starting positions of the agents, their wake-up rounds, and both of their labels. The oracle helps the agents by providing them with the same binary string called advice, which can be used by the agents during their navigation. The length of this string is called the size of advice. Our goal is to find the smallest size of advice which enables the agents to meet in time @math . We show that this optimal size of advice is @math . The upper bound is proved by constructing an advice string of this size, and providing a natural rendezvous algorithm using this advice that works in time @math for all networks. The matching lower bound, which is the main contribution of this paper, is proved by exhibiting classes of networks for which it is impossible to achieve rendezvous in time @math with smaller advice. 
| Providing nodes or agents with arbitrary kinds of information that can be used to perform network tasks more efficiently has been proposed in @cite_41 @cite_36 @cite_39 @cite_4 @cite_19 @cite_5 @cite_2 @cite_15 @cite_17 @cite_27 @cite_9 @cite_23 @cite_38 @cite_11 @cite_31 @cite_7 @cite_10 . This approach was referred to as algorithms with advice . Advice is given either to nodes of the network or to mobile agents performing some network task. In the first case, instead of advice, the term informative labeling schemes is sometimes used. Several authors studied the minimum size of advice required to solve network problems in an efficient way. | {
"cite_N": [
"@cite_38",
"@cite_31",
"@cite_11",
"@cite_4",
"@cite_7",
"@cite_41",
"@cite_36",
"@cite_9",
"@cite_39",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"1983693678",
"2056295140",
"",
"2046334554",
"2174013141",
"2025590344",
"1519256469",
"2181598850",
"",
"2109659895",
"",
"2034501275",
"1971694274",
"1975011672",
"2038319432",
"2045446569",
"1975595616"
],
"abstract": [
"We study deterministic broadcasting in radio networks in the recently introduced framework of network algorithms with advice. We concentrate on the problem of trade-offs between the number of bits of information (size of advice) available to nodes and the time in which broadcasting can be accomplished. In particular, we ask what is the minimum number of bits of information that must be available to nodes of the network, in order to broadcast very fast. For networks in which constant time broadcast is possible under a complete knowledge of the network we give a tight answer to the above question: O(n) bits of advice are sufficient but o(n) bits are not, in order to achieve constant broadcasting time in all these networks. This is in sharp contrast with geometric radio networks of constant broadcasting time: we show that in these networks a constant number of bits suffices to broadcast in constant time. For arbitrary radio networks we present a broadcasting algorithm whose time is inverse-proportional to the size of the advice.",
"This paper addresses the problem of locally verifying global properties. Several natural questions are studied, such as “how expensive is local verification?” and more specifically, “how expensive is local verification compared to computation?” A suitable model is introduced in which these questions are studied in terms of the number of bits a vertex needs to communicate. The model includes the definition of a proof labeling scheme (a pair of algorithms- one to assign the labels, and one to use them to verify that the global property holds). In addition, approaches are presented for the efficient construction of schemes, and upper and lower bounds are established on the bit complexity of schemes for multiple basic problems. The paper also studies the role and cost of unique identities in terms of impossibility and complexity, in the context of proof labeling schemes. Previous studies on related questions deal with distributed algorithms that simultaneously compute a configuration and verify that this configuration has a certain desired property. It turns out that this combined approach enables the verification to be less costly sometimes, since the configuration is typically generated so as to be easily verifiable. In contrast, our approach separates the configuration design from the verification. That is, it first generates the desired configuration without bothering with the need to verify it, and then handles the task of constructing a suitable verification scheme. Our approach thus allows for a more modular design of algorithms, and has the potential to aid in verifying properties even when the original design of the structures for maintaining them was done without verification in mind.",
"",
"We study the problem of the amount of information required to draw a complete or a partial map of a graph with unlabeled nodes and arbitrarily labeled ports. A mobile agent, starting at any node of an unknown connected graph and walking in it, has to accomplish one of the following tasks: draw a complete map of the graph, i.e., find an isomorphic copy of it including port numbering, or draw a partial map, i.e., a spanning tree, again with port numbering. The agent executes a deterministic algorithm and cannot mark visited nodes in any way. None of these map drawing tasks is feasible without any additional information, unless the graph is a tree. Hence we investigate the minimum number of bits of information (minimum size of advice) that has to be given to the agent to complete these tasks. It turns out that this minimum size of advice depends on the number n of nodes or the number m of edges of the graph, and on a crucial parameter μ, called the multiplicity of the graph, which measures the number of nodes that have an identical view of the graph. We give bounds on the minimum size of advice for both above tasks. For μ=1 our bounds are asymptotically tight for both tasks and show that the minimum size of advice is very small. For μ>1 the minimum size of advice increases abruptly. In this case our bounds are asymptotically tight for topology recognition and asymptotically almost tight for spanning tree construction.",
"[L. Blin, P. Fraigniaud, N. Nisse, S. Vial, Distributed chasing of network intruders, in: 13th Colloquium on Structural Information and Communication Complexity, SIROCCO, in: LNCS, vol. 4056, Springer-Verlag, 2006, pp. 70-84] introduced a new measure of difficulty for a distributed task in a network. The smallest number of bits of advice of a distributed problem is the smallest number of bits of information that has to be available to nodes in order to accomplish the task efficiently. Our paper deals with the number of bits of advice required to perform efficiently the graph searching problem in a distributed setting. In this variant of the problem, all searchers are initially placed at a particular node of the network. The aim of the team of searchers is to clear a contaminated graph in a monotone connected way, i.e., the cleared part of the graph is permanently connected, and never decreases while the search strategy is executed. Moreover, the clearing of the graph must be performed using the optimal number of searchers, i.e. the minimum number of searchers sufficient to clear the graph in a monotone connected way in a centralized setting. We show that the minimum number of bits of advice permitting the monotone connected and optimal clearing of a network in a distributed setting is Θ(n log n), where n is the number of nodes of the network. More precisely, we first provide a labelling of the vertices of any graph G, using a total of O(n log n) bits, and a protocol using this labelling that enables the optimal number of searchers to clear G in a monotone connected distributed way. Then, we show that this number of bits of advice is optimal: any distributed protocol requires Ω(n log n) bits of advice to clear a network in a monotone connected way, using an optimal number of searchers.",
"We consider the following problem. Given a rooted tree @math , label the nodes of @math in the most compact way such that, given the labels of two nodes @math and @math , one can determine in constant time, by looking only at the labels, whether @math is ancestor of @math . The best known labeling scheme is rather straightforward and uses labels of length at most @math bits each, where @math is the number of nodes in the tree. Our main result in this paper is a labeling scheme with maximum label length @math . Our motivation for studying this problem is enhancing the performance of web search engines. In the context of this application each indexed document is a tree, and the labels of all trees are maintained in main memory. Therefore even small improvements in the maximum label length are important.",
"We address the problem of labeling the nodes of a tree such that one can determine the identifier of the least common ancestor of any two nodes by looking only at their labels. This problem has application in routing and in distributed computing in peer-to-peer networks. A labeling scheme using Θ(log²n)-bit labels has been previously presented by Peleg. By engineering this scheme, we obtain a variety of data structures with the same asymptotic performances. We conduct a thorough experimental evaluation of all these data structures. Our results clearly show which variants achieve the best performances in terms of space usage, construction time, and query time.",
"Topology recognition is one of the fundamental distributed tasks in networks. Each node of an anonymous network has to deterministically produce an isomorphic copy of the underlying graph, with all ports correctly marked. This task is usually unfeasible without any a priori information. Such information can be provided to nodes as advice. An oracle knowing the network can give a possibly different string of bits to each node, and all nodes must reconstruct the network using this advice, after a given number of rounds of communication. During each round each node can exchange arbitrary messages with all its neighbors and perform arbitrary local computations. The time of completing topology recognition is the number of rounds it takes, and the size of advice is the maximum length of a string given to nodes. We investigate tradeoffs between the time in which topology recognition is accomplished and the minimum size of advice that has to be given to nodes. We provide upper and lower bounds on the minimum size of advice that is sufficient to perform topology recognition in a given time, in the class of all graphs of size n and diameter D≤αn, for any constant α<1. In most cases, our bounds are asymptotically tight. More precisely, if the allotted time is D-k, where 0<k≤D, then the optimal size of advice is Θ(n² log n / (D-k+1)). If the allotted time is D, then this optimal size is Θ(n log n). If the allotted time is D+k, where 0<k≤D/2, then the optimal size of advice is Θ(1 + (log n)/k). The only remaining gap between our bounds is for time D+k, where D/2<k≤D. In this time interval our upper bound remains O(1 + (log n)/k), while the lower bound that holds for any time is 1. This leaves a gap if D∈o(log n). Finally, we show that for time 2D+1, one bit of advice is both necessary and sufficient. 
Our results show how sensitive the minimum size of advice is to the time allowed for topology recognition: allowing just one round more, from D to D+1, decreases exponentially the advice needed to accomplish this task.",
"",
"We consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. The advice is a function, defined by the online algorithm, of the whole request sequence. The advice provided to the online algorithm may allow an improvement in its performance, compared to the classical model of complete lack of information regarding the future. We are interested in the impact of such advice on the competitive ratio, and in particular, in the relation between the size b of the advice, measured in terms of bits of information per request, and the (improved) competitive ratio. Since b=0 corresponds to the classical online model, and b=⌈log|A|⌉, where A is the algorithm's action space, corresponds to the optimal (offline) one, our model spans a spectrum of settings ranging from classical online algorithms to offline ones. In this paper we propose the above model and illustrate its applicability by considering two of the most extensively studied online problems, namely, metrical task systems (MTS) and the k-server problem. For MTS we establish tight (up to constant factors) upper and lower bounds on the competitive ratio of deterministic and randomized online algorithms with advice for any choice of 1≤b≤Θ(log n), where n is the number of states in the system: we prove that any randomized online algorithm for MTS has competitive ratio Ω(log(n)/b) and we present a deterministic online algorithm for MTS with competitive ratio O(log(n)/b). For the k-server problem we construct a deterministic online algorithm for general metric spaces with competitive ratio k^{O(1/b)} for any choice of Θ(1)≤b≤log k.",
"",
"We consider the problem of labeling the nodes of a graph in a way that will allow one to compute the distance between any two nodes directly from their labels (without using any additional information). Our main interest is in the minimal length of labels needed in different cases. We obtain upper and lower bounds for several interesting families of graphs. In particular, our main results are the following. For general graphs, we show that the length needed is Θ(n). For trees, we show that the length needed is Θ(log² n). For planar graphs, we show an upper bound of O(√n log n) and a lower bound of Ω(n^{1/3}). For bounded degree graphs, we show a lower bound of Ω(√n). The upper bounds for planar graphs and for trees follow by a more general upper bound for graphs with an r(n)-separator. The two lower bounds, however, are obtained by two different arguments that may be interesting in their own right. We also show some lower bounds on the length of the labels, even if it is only required that distances be approximated to a multiplicative factor s. For example, we show that for general graphs the required length is Ω(n) for every s < 3. We also consider the problem of the time complexity of the distance function once the labels are computed. We show that there are graphs with optimal labels of length 3 log n, such that if we use any labels with fewer than n bits per label, computing the distance function requires exponential time. A similar result is obtained for planar and bounded degree graphs.",
"We study the amount of knowledge about a communication network that must be given to its nodes in order to efficiently disseminate information. Our approach is quantitative: we investigate the minimum total number of bits of information (minimum size of advice) that has to be available to nodes, regardless of the type of information provided. We compare the size of advice needed to perform broadcast and wakeup (the latter is a broadcast in which nodes can transmit only after getting the source information), both using a linear number of messages (which is optimal). We show that the minimum size of advice permitting the wakeup with a linear number of messages in an n-node network, is Θ(n log n), while the broadcast with a linear number of messages can be achieved with advice of size O(n). We also show that the latter size of advice is almost optimal: no advice of size o(n) can permit to broadcast with a linear number of messages. Thus an efficient wakeup requires strictly more information about the network than an efficient broadcast.",
"We study the problem of the amount of information (advice) about a graph that must be given to its nodes in order to achieve fast distributed computations. The required size of the advice enables to measure the information sensitivity of a network problem. A problem is information sensitive if little advice is enough to solve the problem rapidly (i.e., much faster than in the absence of any advice), whereas it is information insensitive if it requires giving a lot of information to the nodes in order to ensure fast computation of the solution. In this paper, we study the information sensitivity of distributed graph coloring.",
"We study the amount of knowledge about the network that is required in order to efficiently solve a task concerning this network. The impact of available information on the efficiency of solving network problems, such as communication or exploration, has been investigated before but assumptions concerned availability of particular items of information about the network, such as the size, the diameter, or a map of the network. In contrast, our approach is quantitative: we investigate the minimum number of bits of information (bits of advice) that has to be given to an algorithm in order to perform a task with given efficiency. We illustrate this quantitative approach to available knowledge by the task of tree exploration. A mobile entity (robot) has to traverse all edges of an unknown tree, using as few edge traversals as possible. The quality of an exploration algorithm A is measured by its competitive ratio, i.e., by comparing its cost (number of edge traversals) to the length of the shortest path containing all edges of the tree. Depth-First-Search has competitive ratio 2 and, in the absence of any information about the tree, no algorithm can beat this value. We determine the minimum number of bits of advice that has to be given to an exploration algorithm in order to achieve competitive ratio strictly smaller than 2. Our main result establishes an exact threshold number of bits of advice that turns out to be roughly log log D, where D is the diameter of the tree. More precisely, for any constant c, we construct an exploration algorithm with competitive ratio smaller than 2, using at most log log D - c bits of advice, and we show that every algorithm using log log D - g(D) bits of advice, for any function g unbounded from above, has competitive ratio at least 2.",
"Let G = (V,E) be an undirected weighted graph with |V| = n and |E| = m. Let k ≥ 1 be an integer. We show that G = (V,E) can be preprocessed in O(km·n^{1/k}) expected time, constructing a data structure of size O(k·n^{1+1/k}), such that any subsequent distance query can be answered, approximately, in O(k) time. The approximate distance returned is of stretch at most 2k−1, that is, the quotient obtained by dividing the estimated distance by the actual distance lies between 1 and 2k−1. A 1963 girth conjecture of Erdős implies that Ω(n^{1+1/k}) space is needed in the worst case for any real stretch strictly smaller than 2k+1. The space requirement of our algorithm is, therefore, essentially optimal. The most impressive feature of our data structure is its constant query time, hence the name \"oracle\". Previously, data structures that used only O(n^{1+1/k}) space had a query time of Ω(n^{1/k}). Our algorithms are extremely simple and easy to implement efficiently. They also provide faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs.",
"We use the recently introduced advising scheme framework for measuring the difficulty of locally distributively computing a Minimum Spanning Tree (MST). An (m,t)-advising scheme for a distributed problem P is a way, for every possible input I of P, to provide an \"advice\" (i.e., a bit string) about I to each node so that: (1) the maximum size of the advices is at most m bits, and (2) the problem P can be solved distributively in at most t rounds using the advices as inputs. In case of MST, the output returned by each node of a weighted graph G is the edge leading to its parent in some rooted MST T of G. Clearly, there is a trivial (log n,0)-advising scheme for MST (each node is given the local port number of the edge leading to the root of some MST T), and it is known that any (0,t)-advising scheme satisfies t ≥ Ω (√n). Our main result is the construction of an (O(1),O(log n))-advising scheme for MST. That is, by only giving a constant number of bits of advice to each node, one can decrease exponentially the distributed computation time of MST in arbitrary graph, compared to algorithms dealing with the problem in absence of any a priori information. We also consider the average size of the advices. On the one hand, we show that any (m,0)-advising scheme for MST gives advices of average size Ω(log n). On the other hand we design an (m,1)-advising scheme for MST with advices of constant average size, that is one round is enough to decrease the average size of the advices from log(n) to constant."
]
} |
1407.1151 | 2952986702 | Hashing has proven a valuable tool for large-scale information retrieval. Despite much success, existing hashing methods optimize over simple objectives such as the reconstruction error or graph Laplacian related loss functions, instead of the performance evaluation criteria of interest---multivariate performance measures such as the AUC and NDCG. Here we present a general framework (termed StructHash) that allows one to directly optimize multivariate performance measures. The resulting optimization problem can involve exponentially or infinitely many variables and constraints, which is more challenging than standard structured output learning. To solve the StructHash optimization problem, we use a combination of column generation and cutting-plane techniques. We demonstrate the generality of StructHash by applying it to ranking prediction and image retrieval, and show that it outperforms a few state-of-the-art hashing methods. | To obtain a richer representation, kernelized LSH @cite_19 was proposed, which randomly samples training data as support vectors, and randomly draws the dual coefficients from a Gaussian distribution. extended Kulis and Grauman's work to kernelized supervised hashing (KSH) @cite_12 by learning the dual coefficients instead. @cite_25 employed ensembles of decision trees as the hash functions. Nonetheless, all of these methods do not directly optimize the multivariate performance measures of interest. We formulate hash codes learning as a structured output learning problem, in order to directly optimize a wide variety of evaluation measures. | {
"cite_N": [
"@cite_19",
"@cite_25",
"@cite_12"
],
"mid": [
"",
"2153273131",
"1992371516"
],
"abstract": [
"",
"Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the Hamming space. Non-linear hash functions have demonstrated their advantage over linear ones due to their powerful generalization capability. In the literature, kernel functions are typically used to achieve non-linearity in hashing, which achieve encouraging retrieval performance at the price of slow evaluation and training time. Here we propose to use boosted decision trees for achieving non-linearity in hashing, which are fast to train and evaluate, hence more suitable for hashing with high dimensional data. In our approach, we first propose sub-modular formulations for the hashing binary code inference problem and an efficient GraphCut based block search method for solving large-scale inference. Then we learn hash functions by training boosted decision trees to fit the binary codes. Experiments demonstrate that our proposed method significantly outperforms most state-of-the-art methods in retrieval precision and training time. Especially for high-dimensional data, our method is orders of magnitude faster than many methods in terms of training time.",
"Recent years have witnessed the growing popularity of hashing in large-scale vision problems. It has been shown that the hashing quality could be boosted by leveraging supervised information into hash function learning. However, the existing supervised methods either lack adequate performance or often incur cumbersome model training. In this paper, we propose a novel kernel-based supervised hashing model which requires a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost in achieving high quality hashing. The idea is to map the data to compact binary codes whose Hamming distances are minimized on similar pairs and simultaneously maximized on dissimilar pairs. Our approach is distinct from prior works by utilizing the equivalence between optimizing the code inner products and the Hamming distances. This enables us to sequentially and efficiently train the hash functions one bit at a time, yielding very short yet discriminative codes. We carry out extensive experiments on two image benchmarks with up to one million samples, demonstrating that our approach significantly outperforms the state-of-the-arts in searching both metric distance neighbors and semantically similar neighbors, with accuracy gains ranging from 13% to 46%."
]
} |
1407.1151 | 2952986702 | Hashing has proven a valuable tool for large-scale information retrieval. Despite much success, existing hashing methods optimize over simple objectives such as the reconstruction error or graph Laplacian related loss functions, instead of the performance evaluation criteria of interest---multivariate performance measures such as the AUC and NDCG. Here we present a general framework (termed StructHash) that allows one to directly optimize multivariate performance measures. The resulting optimization problem can involve exponentially or infinitely many variables and constraints, which is more challenging than standard structured output learning. To solve the StructHash optimization problem, we use a combination of column generation and cutting-plane techniques. We demonstrate the generality of StructHash by applying it to ranking prediction and image retrieval, and show that it outperforms a few state-of-the-art hashing methods. | We are primarily inspired by recent advances in learning to rank such as the metric learning method in @cite_9 , which directly optimizes several different ranking measures. We aim to learn hash functions, which leads to a very different learning task preventing directly applying techniques in @cite_9 . We are also inspired by the recent column generation based hashing method, column generation hashing (CGH) @cite_17 , which iteratively learns hash functions using column generation. However, their method optimizes the conventional classification-related loss, which is much simpler than the multivariate loss that we are interested in here. Moreover, the optimization of CGH relies on all triplet constraints while our method is able to use much less number of constraints without sacrificing the performance. | {
"cite_N": [
"@cite_9",
"@cite_17"
],
"mid": [
"2158139921",
"2949478753"
],
"abstract": [
"We study metric learning as a problem of information retrieval. We present a general metric learning algorithm, based on the structural SVM framework, to learn a metric such that rankings of data induced by distance from a query can be optimized against various ranking measures, such as AUC, Precision-at-k, MRR, MAP or NDCG. We demonstrate experimental results on standard classification data sets, and a large-scale online dating recommendation problem.",
"Fast nearest neighbor searching is becoming an increasingly important tool in solving many large-scale problems. Recently a number of approaches to learning data-dependent hash functions have been developed. In this work, we propose a column generation based method for learning data-dependent hash functions on the basis of proximity comparison information. Given a set of triplets that encode the pairwise proximity comparison information, our method learns hash functions that preserve the relative comparison relationships in the data as well as possible within the large-margin learning framework. The learning procedure is implemented using column generation and hence is named CGHash. At each iteration of the column generation procedure, the best hash function is selected. Unlike most other hashing methods, our method generalizes to new data points naturally; and has a training objective which is convex, thus ensuring that the global optimum can be identified. Experiments demonstrate that the proposed method learns compact binary codes and that its retrieval performance compares favorably with state-of-the-art methods when tested on a few benchmark datasets."
]
} |
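The CGH abstract above learns hash functions from triplets that encode pairwise proximity comparisons ("q is closer to q+ than to q-"). A small sketch of the quantity such methods drive down: the number of triplet constraints a given set of binary codes violates (all names here are hypothetical helpers, not CGH's actual implementation):

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary codes."""
    return sum(x != y for x, y in zip(a, b))

def violated_triplets(codes, triplets):
    """Count triplets (q, pos, neg) -- indices into `codes` -- for which
    the codes fail to keep q strictly closer to pos than to neg."""
    return sum(
        1 for q, pos, neg in triplets
        if hamming(codes[q], codes[pos]) >= hamming(codes[q], codes[neg])
    )

if __name__ == "__main__":
    codes = [
        (0, 0, 0, 0),  # query
        (0, 0, 0, 1),  # near neighbor (distance 1 from the query)
        (1, 1, 1, 0),  # far point     (distance 3 from the query)
    ]
    # first triplet is preserved by the codes, the reversed one is violated
    assert violated_triplets(codes, [(0, 1, 2)]) == 0
    assert violated_triplets(codes, [(0, 2, 1)]) == 1
```

StructHash's point, by contrast, is that it can trade these exhaustive triplet constraints for multivariate ranking losses over far fewer constraints.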
1407.1151 | 2952986702 | Hashing has proven a valuable tool for large-scale information retrieval. Despite much success, existing hashing methods optimize over simple objectives such as the reconstruction error or graph Laplacian related loss functions, instead of the performance evaluation criteria of interest---multivariate performance measures such as the AUC and NDCG. Here we present a general framework (termed StructHash) that allows one to directly optimize multivariate performance measures. The resulting optimization problem can involve exponentially or infinitely many variables and constraints, which is more challenging than standard structured output learning. To solve the StructHash optimization problem, we use a combination of column generation and cutting-plane techniques. We demonstrate the generality of StructHash by applying it to ranking prediction and image retrieval, and show that it outperforms a few state-of-the-art hashing methods. | Our framework is built on the structured SVM @cite_11 , which has been applied to many complex structured output prediction problems, e.g., image segmentation and action recognition. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2429914308"
],
"abstract": [
"Learning general functional dependencies is one of the main goals in machine learning. Recent progress in kernel-based methods has focused on designing flexible and powerful input representations. This paper addresses the complementary issue of problems involving complex outputs such as multiple dependent output variables and structured output spaces. We propose to generalize multiclass Support Vector Machine learning in a formulation that involves features extracted jointly from inputs and outputs. The resulting optimization problem is solved efficiently by a cutting plane algorithm that exploits the sparseness and structural decomposition of the problem. We demonstrate the versatility and effectiveness of our method on problems ranging from supervised grammar learning and named-entity recognition, to taxonomic text classification and sequence alignment."
]
} |
1407.1338 | 2951827590 | Local differential privacy has recently surfaced as a strong measure of privacy in contexts where personal information remains private even from data analysts. Working in a setting where both the data providers and data analysts want to maximize the utility of statistical analyses performed on the released data, we study the fundamental trade-off between local differential privacy and utility. This trade-off is formulated as a constrained optimization problem: maximize utility subject to local differential privacy constraints. We introduce a combinatorial family of extremal privatization mechanisms, which we call staircase mechanisms, and show that it contains the optimal privatization mechanisms for a broad class of information theoretic utilities such as mutual information and @math -divergences. We further prove that for any utility function and any privacy level, solving the privacy-utility maximization problem is equivalent to solving a finite-dimensional linear program, the outcome of which is the optimal staircase mechanism. However, solving this linear program can be computationally expensive since it has a number of variables that is exponential in the size of the alphabet the data lives in. To account for this, we show that two simple privatization mechanisms, the binary and randomized response mechanisms, are universally optimal in the low and high privacy regimes, and well approximate the intermediate regime. | Our work is closely related to the recent work of @cite_10 where an upper bound on @math was derived under the same local differential privacy setting. Precisely, Duchi et al. proved that the KL-divergence maximization problem in is at most @math . This bound was further used to provide a minimax bound on statistical estimation using information theoretic converse techniques such as Fano's and Le Cam's inequalities. Such tradeoffs also provide tools for comparing various notions of privacy. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2053801139"
],
"abstract": [
"Working under local differential privacy-a model of privacy in which data remains private even from the statistician or learner-we study the tradeoff between privacy guarantees and the utility of the resulting statistical estimators. We prove bounds on information-theoretic quantities, including mutual information and Kullback-Leibler divergence, that influence estimation rates as a function of the amount of privacy preserved. When combined with minimax techniques such as Le Cam's and Fano's methods, these inequalities allow for a precise characterization of statistical rates under local privacy constraints. In this paper, we provide a treatment of two canonical problem families: mean estimation in location family models and convex risk minimization. For these families, we provide lower and upper bounds for estimation of population quantities that match up to constant factors, giving privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds."
]
} |
1407.1338 | 2951827590 | Local differential privacy has recently surfaced as a strong measure of privacy in contexts where personal information remains private even from data analysts. Working in a setting where both the data providers and data analysts want to maximize the utility of statistical analyses performed on the released data, we study the fundamental trade-off between local differential privacy and utility. This trade-off is formulated as a constrained optimization problem: maximize utility subject to local differential privacy constraints. We introduce a combinatorial family of extremal privatization mechanisms, which we call staircase mechanisms, and show that it contains the optimal privatization mechanisms for a broad class of information theoretic utilities such as mutual information and @math -divergences. We further prove that for any utility function and any privacy level, solving the privacy-utility maximization problem is equivalent to solving a finite-dimensional linear program, the outcome of which is the optimal staircase mechanism. However, solving this linear program can be computationally expensive since it has a number of variables that is exponential in the size of the alphabet the data lives in. To account for this, we show that two simple privatization mechanisms, the binary and randomized response mechanisms, are universally optimal in the low and high privacy regimes, and well approximate the intermediate regime. | In a similar spirit, we are also interested in maximizing information theoretic quantities under local differential privacy. We generalize the results of @cite_10 , and provide stronger results in the sense that we @math consider a broader class of information theoretic utilities; @math provide explicit constructions for the optimal mechanisms; and @math recover the existing result of [Theorem 1] DJW13 (with a stronger condition on @math ). | {
"cite_N": [
"@cite_10"
],
"mid": [
"2053801139"
],
"abstract": [
"Working under local differential privacy-a model of privacy in which data remains private even from the statistician or learner-we study the tradeoff between privacy guarantees and the utility of the resulting statistical estimators. We prove bounds on information-theoretic quantities, including mutual information and Kullback-Leibler divergence, that influence estimation rates as a function of the amount of privacy preserved. When combined with minimax techniques such as Le Cam's and Fano's methods, these inequalities allow for a precise characterization of statistical rates under local privacy constraints. In this paper, we provide a treatment of two canonical problem families: mean estimation in location family models and convex risk minimization. For these families, we provide lower and upper bounds for estimation of population quantities that match up to constant factors, giving privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds."
]
} |
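The abstract above singles out the randomized response mechanism as universally optimal in the high privacy regime. For binary data, the classical binary randomized response reports the true bit with probability e^ε / (1 + e^ε); a sketch (a standard construction, not the paper's general staircase mechanism) verifying that it meets the ε-local-DP likelihood-ratio constraint with equality:

```python
import math
import random

def randomized_response(bit, eps, rng=random):
    """Binary randomized response: report the true bit with probability
    e^eps / (1 + e^eps), otherwise report its flip."""
    keep = math.exp(eps) / (1.0 + math.exp(eps))
    return bit if rng.random() < keep else 1 - bit

def worst_case_ratio(eps):
    """max over inputs x, x' and outputs y of P(y | x) / P(y | x').
    eps-local differential privacy requires this to be at most e^eps."""
    keep = math.exp(eps) / (1.0 + math.exp(eps))
    return keep / (1.0 - keep)

if __name__ == "__main__":
    eps = 0.5
    # the mechanism saturates the eps-LDP constraint: ratio == e^eps
    assert abs(worst_case_ratio(eps) - math.exp(eps)) < 1e-12
    assert randomized_response(1, eps) in (0, 1)
```

Since keep / (1 - keep) = e^ε exactly, the mechanism spends its entire privacy budget, which is what makes it extremal among binary mechanisms.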
1407.1338 | 2951827590 | Local differential privacy has recently surfaced as a strong measure of privacy in contexts where personal information remains private even from data analysts. Working in a setting where both the data providers and data analysts want to maximize the utility of statistical analyses performed on the released data, we study the fundamental trade-off between local differential privacy and utility. This trade-off is formulated as a constrained optimization problem: maximize utility subject to local differential privacy constraints. We introduce a combinatorial family of extremal privatization mechanisms, which we call staircase mechanisms, and show that it contains the optimal privatization mechanisms for a broad class of information theoretic utilities such as mutual information and @math -divergences. We further prove that for any utility function and any privacy level, solving the privacy-utility maximization problem is equivalent to solving a finite-dimensional linear program, the outcome of which is the optimal staircase mechanism. However, solving this linear program can be computationally expensive since it has a number of variables that is exponential in the size of the alphabet the data lives in. To account for this, we show that two simple privatization mechanisms, the binary and randomized response mechanisms, are universally optimal in the low and high privacy regimes, and well approximate the intermediate regime. | Optimal differentially private mechanisms are known only in a few cases. @cite_13 showed that the geometric noise adding mechanism is optimal (under a Bayesian setting) for monotone utility functions under count queries (sensitivity one). This was generalized by Geng et al. (for a worst-case input setting), who proposed a family of mechanisms and proved its optimality for monotone utility functions under queries with arbitrary sensitivity. The family of optimal mechanisms was called staircase mechanisms because for any @math and any neighboring @math and @math , the ratio of @math to @math takes one of three possible values @math , @math , or @math . Since the optimal mechanisms we develop also have an identical property, we retain the same nomenclature. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2090593019"
],
"abstract": [
"A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Publishing fully accurate information maximizes utility while minimizing privacy, while publishing random noise accomplishes the opposite. Privacy can be rigorously quantified using the framework of differential privacy, which requires that a mechanism's output distribution is nearly the same whether a given database row is included. The goal of this paper is to formulate and provide strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a symmetric and monotone loss function). Our main result is the following: for each fixed count query and differential privacy level, there is a geometric mechanism @math ---a discrete variant of the simple and well-studi..."
]
} |
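The geometric mechanism cited above adds two-sided geometric noise with pmf P(z) = ((1 - α)/(1 + α)) α^|z|, where α = e^(-ε). A sketch of a standard construction (the difference of two i.i.d. geometric variables has exactly this pmf; the sampler here is an illustrative implementation, not the cited paper's code), checking the sensitivity-1 privacy guarantee analytically:

```python
import math
import random

def two_sided_geometric_pmf(z, eps):
    """P(Z = z) with alpha = e^(-eps):  ((1 - alpha)/(1 + alpha)) * alpha^|z|."""
    alpha = math.exp(-eps)
    return (1.0 - alpha) / (1.0 + alpha) * alpha ** abs(z)

def geometric_mechanism(count, eps, rng=random):
    """Release `count` plus two-sided geometric noise, sampled as the
    difference of two i.i.d. geometric(1 - alpha) variables."""
    alpha = math.exp(-eps)
    def geom():
        k = 0
        while rng.random() < alpha:
            k += 1
        return k
    return count + geom() - geom()

if __name__ == "__main__":
    eps = 0.7
    # for count queries (sensitivity one), shifting the true count by 1
    # changes any output's probability by exactly a factor of e^eps
    ratio = two_sided_geometric_pmf(1, eps) / two_sided_geometric_pmf(2, eps)
    assert abs(ratio - math.exp(eps)) < 1e-12
```

The constant ratio α between adjacent outputs is precisely the "staircase" behavior the related-work paragraph describes: the likelihood ratio takes only a fixed set of values.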
1407.1490 | 2403326704 | This paper proposes a novel face recognition algorithm based on large-scale supervised hierarchical feature learning. The approach consists of two parts: hierarchical feature learning and large-scale model learning. The hierarchical feature learning searches feature in three levels of granularity in a supervised way. First, face images are modeled by receptive field theory, and the representation is an image with many channels of Gaussian receptive maps. We activate a few most distinguish channels by supervised learning. Second, the face image is further represented by patches of picked channels, and we search from the over-complete patch pool to activate only those most discriminant patches. Third, the feature descriptor of each patch is further projected to lower dimension subspace with discriminant subspace analysis. Learned feature of activated patches are concatenated to get a full face representation. A linear classifier is learned to separate face pairs from same subjects and different subjects. As the number of face pairs are extremely large, we introduce ADMM (alternative direction method of multipliers) to train the linear classifier on a computing cluster. Experiments show that more training samples will bring notable accuracy improvement. We conduct experiments on FRGC and LFW. Results show that the proposed approach outperforms existing algorithms under the same protocol notably. Besides, the proposed approach is small in memory footprint, and low in computing cost, which makes it suitable for embedded applications. | Feature learning in face recognition can be categorized into three major categories. First, subspace methods try to cast the raw feature to a discriminant subspace; these have been the dominant methods in face recognition research over the past two decades. Typical algorithms include eigenfaces @cite_38 , Fisherfaces @cite_34 , Laplacianfaces @cite_43 , and kernel subspace methods @cite_8 . Subspace methods suffer from large projection matrices. Suppose subspace methods project a @math -dimensional raw feature to a @math -dimensional discriminant subspace; the projection matrix is then of size @math . The raw feature dimension @math is usually very high. For instance, the dimension of Gabor features may be as high as tens of thousands @cite_44 @cite_28 . As a result, a large projection matrix yields not only a large memory footprint but also high computing cost. Some studies utilize the divide-and-conquer strategy, which divides the full feature vector into several blocks and solves the projection in each block @cite_42 @cite_7 . This paper applies subspace analysis to each patch descriptor, which can be viewed as a special case of block subspace analysis. However, the computing complexity and memory footprint of patch-level subspace analysis are usually one order of magnitude less than those of block subspace methods. | {
"cite_N": [
"@cite_38",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_42",
"@cite_44",
"@cite_43",
"@cite_34"
],
"mid": [
"2138451337",
"",
"2153054748",
"",
"2119077463",
"2144119003",
"2117553576",
""
],
"abstract": [
"We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as \"eigenfaces,\" because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.",
"",
"Principal Component Analysis and Fisher Linear Discriminant methods have demonstrated their success in face detection, recognition, and tracking. The representation in these subspace methods is based on second order statistics of the image set, and does not address higher order statistical dependencies such as the relationships among three or more pixels. Recently Higher Order Statistics and Independent Component Analysis (ICA) have been used as informative low dimensional representations for visual recognition. In this paper, we investigate the use of Kernel Principal Component Analysis and Kernel Fisher Linear Discriminant for learning low dimensional representations for face recognition, which we call Kernel Eigenface and Kernel Fisherface methods. While Eigenface and Fisherface methods aim to find projection directions based on the second order correlation of samples, Kernel Eigenface and Kernel Fisherface methods provide generalizations which take higher order correlations into account. We compare the performance of kernel methods with Eigenface, Fisherface and ICA-based methods for face recognition with variation in pose, scale, lighting and expression. Experimental results show that kernel methods provide better representations and achieve lower error rates for face recognition.",
"",
"In the literature of psychophysics and neurophysiology, many studies have shown that both global and local features are crucial for face representation and recognition. This paper proposes a novel face recognition method which exploits both global and local discriminative features. In this method, global features are extracted from the whole face images by keeping the low-frequency coefficients of Fourier transform, which we believe encodes the holistic facial information, such as facial contour. For local feature extraction, Gabor wavelets are exploited considering their biological relevance. After that, Fisher's linear discriminant (FLD) is separately applied to the global Fourier features and each local patch of Gabor features. Thus, multiple FLD classifiers are obtained, each embodying different facial evidences for face recognition. Finally, all these classifiers are combined to form a hierarchical ensemble classifier. We evaluate the proposed method using two large-scale face databases: FERET and FRGC version 2.0. Experiments show that the results of our method are impressively better than the best known results with the same evaluation protocol.",
"This paper presents a novel pattern recognition framework by capitalizing on dimensionality increasing techniques. In particular, the framework integrates Gabor image representation, a novel multiclass kernel Fisher analysis (KFA) method, and fractional power polynomial models for improving pattern recognition performance. Gabor image representation, which increases dimensionality by incorporating Gabor filters with different scales and orientations, is characterized by spatial frequency, spatial locality, and orientational selectivity for coping with image variabilities such as illumination variations. The KFA method first performs nonlinear mapping from the input space to a high-dimensional feature space, and then implements the multiclass Fisher discriminant analysis in the feature space. The significance of the nonlinear mapping is that it increases the discriminating power of the KFA method, which is linear in the feature space but nonlinear in the input space. The novelty of the KFA method comes from the fact that 1) it extends the two-class kernel Fisher methods by addressing multiclass pattern classification problems and 2) it improves upon the traditional generalized discriminant analysis (GDA) method by deriving a unique solution (compared to the GDA solution, which is not unique). The fractional power polynomial models further improve performance of the proposed pattern recognition framework. Experiments on face recognition using both the FERET database and the FRGC (face recognition grand challenge) databases show the feasibility of the proposed framework. In particular, experimental results using the FERET database show that the KFA method performs better than the GDA method and the fractional power polynomial models help both the KFA method and the GDA method improve their face recognition performance. Experimental results using the FRGC databases show that the proposed pattern recognition framework improves face recognition performance upon the BEE baseline algorithm and the LDA-based baseline algorithm by large margins.",
"We propose an appearance-based face recognition method called the Laplacianface approach. By using locality preserving projections (LPP), the face images are mapped into a face subspace for analysis. Different from principal component analysis (PCA) and linear discriminant analysis (LDA) which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information, and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.",
""
]
} |
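The related-work paragraph above argues that patch- or block-level subspace analysis shrinks the projection matrix. A quick sketch of the parameter count behind that claim: a full d × k projection stores d·k numbers, while splitting into m blocks of size d/m, each projected to k/m dimensions, stores only d·k/m, a factor-m reduction. The concrete d, k, m values below are illustrative only, not taken from the paper:

```python
def full_projection_params(d, k):
    """Parameters in a single d x k projection matrix."""
    return d * k

def block_projection_params(d, k, m):
    """Parameters when the d-dim feature is split into m blocks, each
    projected independently from d/m dims down to k/m dims."""
    assert d % m == 0 and k % m == 0
    return m * (d // m) * (k // m)

if __name__ == "__main__":
    d, k, m = 10240, 1024, 32   # illustrative sizes only
    full = full_projection_params(d, k)        # 10,485,760 parameters
    block = block_projection_params(d, k, m)   # 327,680 parameters
    # block-wise analysis cuts the footprint by exactly a factor of m
    assert full // block == m
```

With m patches on the order of tens, this is the "one order of magnitude" saving in memory and compute that the paragraph attributes to patch-level subspace analysis.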
1407.1490 | 2403326704 | This paper proposes a novel face recognition algorithm based on large-scale supervised hierarchical feature learning. The approach consists of two parts: hierarchical feature learning and large-scale model learning. The hierarchical feature learning searches feature in three levels of granularity in a supervised way. First, face images are modeled by receptive field theory, and the representation is an image with many channels of Gaussian receptive maps. We activate a few most distinguish channels by supervised learning. Second, the face image is further represented by patches of picked channels, and we search from the over-complete patch pool to activate only those most discriminant patches. Third, the feature descriptor of each patch is further projected to lower dimension subspace with discriminant subspace analysis. Learned feature of activated patches are concatenated to get a full face representation. A linear classifier is learned to separate face pairs from same subjects and different subjects. As the number of face pairs are extremely large, we introduce ADMM (alternative direction method of multipliers) to train the linear classifier on a computing cluster. Experiments show that more training samples will bring notable accuracy improvement. We conduct experiments on FRGC and LFW. Results show that the proposed approach outperforms existing algorithms under the same protocol notably. Besides, the proposed approach is small in memory footprint, and low in computing cost, which makes it suitable for embedded applications. | Second are methods that learn a mid-level representation in an unsupervised way, including descriptor learning @cite_19 , deep feature learning @cite_16 , and sparse representation @cite_17 . Unsupervised learning is able to find common patterns in big data. The proposed approach borrows the hierarchical architecture from these methods, but learns each feature layer in a supervised way. | {
"cite_N": [
"@cite_19",
"@cite_16",
"@cite_17"
],
"mid": [
"",
"2005286252",
"2129812935"
],
"abstract": [
"",
"Most modern face recognition systems rely on a feature representation given by a hand-crafted image descriptor, such as Local Binary Patterns (LBP), and achieve improved performance by combining several such representations. In this paper, we propose deep learning as a natural source for obtaining additional, complementary representations. To learn features in high-resolution images, we make use of convolutional deep belief networks. Moreover, to take advantage of global structure in an object class, we develop local convolutional restricted Boltzmann machines, a novel convolutional learning model that exploits the global structure by not assuming stationarity of features across the image, while maintaining scalability and robustness to small misalignments. We also present a novel application of deep learning to descriptors other than pixel intensity values, such as LBP. In addition, we compare performance of networks trained using unsupervised learning against networks with random filters, and empirically show that learning weights not only is necessary for obtaining good multilayer representations, but also provides robustness to the choice of the network architecture parameters. Finally, we show that a recognition system using only representations obtained from deep learning can achieve comparable accuracy with a system using a combination of hand-crafted image descriptors. Moreover, by combining these representations, we achieve state-of-the-art results on a real-world face verification database.",
"We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by C1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims."
]
} |
1407.0785 | 2216242130 | We relate the @math -adic heights of generalized Heegner cycles to the derivative of a @math -adic @math -function attached to a pair @math , where @math is an ordinary weight @math newform and @math is an unramified imaginary quadratic Hecke character of infinity type @math , with @math . This generalizes the @math -adic Gross-Zagier formula in the case @math due to Perrin-Riou (in weight two) and Nekovář (in higher weight). | There has been much recent work on the connections between Heegner cycles and @math -adic @math -functions. Generalized Heegner cycles were first studied in @cite_3 , where their Abel-Jacobi classes were related to the special value (not the derivative) of a different Rankin-Selberg @math -adic @math -function. Brooks extended these results to Shimura curves over @math @cite_15 , and recently Liu, Zhang, and Zhang proved a general formula for arbitrary totally real fields @cite_18 . In @cite_19 , Disegni computes @math -adic heights of Heegner points on Shimura curves, generalizing the weight 2 formula of Perrin-Riou for modular curves. Kobayashi @cite_12 extended Perrin-Riou's height formula to the supersingular case. Our work is the first (as far as we know) to study @math -adic heights of generalized Heegner cycles. | {
"cite_N": [
"@cite_18",
"@cite_3",
"@cite_19",
"@cite_15",
"@cite_12"
],
"mid": [
"2173089136",
"2953316345",
"1835694908",
"2039036530",
"1984039258"
],
"abstract": [
"In this article, we study @math -adic torus periods for certain @math -adic valued functions on Shimura curves coming from classical origin. We prove a @math -adic Waldspurger formula for these periods, generalizing the recent work of Bertolini, Darmon, and Prasanna. In pursuing such a formula, we construct a new anti-cyclotomic @math -adic @math -function of Rankin-Selberg type. At a character of positive weight, the @math -adic @math -function interpolates the central critical value of the complex Rankin-Selberg @math -function. Its value at a Dirichlet character, which is outside the range of interpolation, essentially computes the corresponding @math -adic torus period.",
"In this paper, we deduce the vanishing of Selmer groups for the Rankin-Selberg convolution of a cusp form with a theta series of higher weight from the nonvanishing of the associated @math -value, thus establishing the rank 0 case of the Bloch-Kato conjecture in these cases. Our methods are based on the connection between Heegner cycles and @math -adic @math -functions, building upon recent work of Bertolini, Darmon and Prasanna, and on an extension of Kolyvagin's method of Euler systems to the anticyclotomic setting. In the course of the proof, we also obtain a higher weight analogue of Mazur's conjecture (as proven in weight 2 by Cornut-Vatsal), and as a consequence of our results, we deduce from Nekovar's work a proof of the parity conjecture in this setting.",
"Let @math be a primitive Hilbert modular form of parallel weight @math and level @math for the totally real field @math , and let @math be a rational prime coprime to @math . If @math is ordinary at @math and @math is a CM extension of @math of relative discriminant @math prime to @math , we give an explicit construction of the @math -adic Rankin-Selberg @math -function @math . When the sign of its functional equation is @math , we show, under the assumption that all primes @math are principal ideals of @math which split in @math , that its central derivative is given by the @math -adic height of a Heegner point on the abelian variety @math associated with @math . This @math -adic Gross--Zagier formula generalises the result obtained by Perrin-Riou when @math and @math satisfies the so-called Heegner condition. We deduce applications to both the @math -adic and the classical Birch and Swinnerton-Dyer conjectures for @math .",
"We construct \"generalized Heegner cycles\" on a variety fibered over a Shimura curve, defined over a number field. We show that their images under the p-adic Abel-Jacobi map coincide with the values (outside the range of interpolation) of a p-adic L-function L-p(f, chi) which interpolates special values of the Rankin-Selberg convolution of a fixed newform f and a theta-series theta(chi) attached to an unramified Hecke character of an imaginary quadratic field K. This generalizes previous work of Bertolini, Darmon, and Prasanna, which demonstrated a similar result in the case of modular curves. Our main tool is the theory of Serre-Tate coordinates, which yields p-adic expansions of modular forms at CM points, replacing the role of q-expansions in computations on modular curves.",
"Let p be a prime number and let E be an elliptic curve defined over ℚ of conductor N. Let K be an imaginary quadratic field with discriminant prime to pN such that all prime factors of N split in K. B. Perrin-Riou established the p-adic Gross-Zagier formula that relates the first derivative of the p-adic L-function of E over K to the p-adic height of the Heegner point for K when E has good ordinary reduction at p. In this article, we prove the p-adic Gross-Zagier formula of E for the cyclotomic ℤ_p-extension at good supersingular prime p. Our result has an application for the full Birch and Swinnerton-Dyer conjecture. Suppose that the analytic rank of E over ℚ is 1 and assume that the Iwasawa main conjecture is true for all good primes and the p-adic height pairing is not identically equal to zero for all good ordinary primes, then our result implies the full Birch and Swinnerton-Dyer conjecture up to bad primes. In particular, if E has complex multiplication and of analytic rank 1, the full Birch and Swinnerton-Dyer conjecture is true up to a power of bad primes and 2."
]
} |
1407.0623 | 2294432969 | Our approach locates the temporal positions of tags in videos at the keyframe level. We deal with a scenario in which there is no pre-defined set of tags. We report experiments about the use of different web sources (Flickr, Google, Bing). We show state-of-the-art results on DUT-WEBV, a large dataset of YouTube videos. We show results in a real-world scenario to perform open vocabulary tag annotation. Tagging of visual content is becoming more and more widespread as web-based services and social networks have popularized tagging functionalities among their users. These user-generated tags are used to ease browsing and exploration of media collections, e.g. using tag clouds, or to retrieve multimedia content. However, not all media are equally tagged by users. Using the current systems, it is easy to tag a single photo, and even tagging a part of a photo, like a face, has become common in sites like Flickr and Facebook. On the other hand, tagging a video sequence is more complicated and time consuming, so that users just tag the overall content of a video. In this paper we present a method for automatic video annotation that increases the number of tags originally provided by users, and localizes them temporally, associating tags to keyframes. Our approach exploits collective knowledge embedded in user-generated tags and web sources, and visual similarity of keyframes and images uploaded to social sites like YouTube and Flickr, as well as web sources like Google and Bing. Given a keyframe, our method is able to select "on the fly" from these visual sources the training exemplars that should be the most relevant for this test sample, and proceeds to transfer labels across similar images. Compared to existing video tagging approaches that require training classifiers for each tag, our system has few parameters, is easy to implement and can deal with an open vocabulary scenario. 
We demonstrate the approach on tag refinement and localization on DUT-WEBV, a large dataset of web videos, and show state-of-the-art results. | Probably the most important effort in semantic video annotation is TRECVID @cite_0 , an evaluation campaign with the goal to promote progress in content-based retrieval from digital video archives. Recently, online videos have also attracted the attention of researchers @cite_26 @cite_49 @cite_37 @cite_15 @cite_11 , since millions of videos are available on the web and they include rich metadata such as title, comments and user tags. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_0",
"@cite_49",
"@cite_15",
"@cite_11"
],
"mid": [
"1967664674",
"1994694952",
"2062903088",
"2023520647",
"1981781955",
"2054337388"
],
"abstract": [
"We consider the problem of content-based automated tag learning. In particular, we address semantic variations (sub-tags) of the tag. Each video in the training set is assumed to be associated with a sub-tag label, and we treat this sub-tag label as latent information. A latent learning framework based on LogitBoost is proposed, which jointly considers both the tag label and the latent sub-tag label. The latent sub-tag information is exploited in our framework to assist the learning of our end goal, i.e., tag prediction. We use the cowatch information to initialize the learning process. In experiments, we show that the proposed method achieves significantly better results over baselines on a large-scale testing video set which contains about 50 million YouTube videos.",
"Automatic categorization of videos in a Web-scale unconstrained collection such as YouTube is a challenging task. A key issue is how to build an effective training set in the presence of missing, sparse or noisy labels. We propose to achieve this by first manually creating a small labeled set and then extending it using additional sources such as related videos, searched videos, and text-based webpages. The data from such disparate sources has different properties and labeling quality, and thus fusing them in a coherent fashion is another practical challenge. We propose a fusion framework in which each data source is first combined with the manually-labeled set independently. Then, using the hierarchical taxonomy of the categories, a Conditional Random Field (CRF) based fusion strategy is designed. Based on the final fused classifier, category labels are predicted for the new videos. Extensive experiments on about 80K videos from 29 most frequent categories in YouTube show the effectiveness of the proposed method for categorizing large-scale wild Web videos.",
"The TREC Video Retrieval Evaluation (TRECVid) is an international benchmarking activity to encourage research in video information retrieval by providing a large test collection, uniform scoring procedures, and a forum for organizations interested in comparing their results. TRECVid completed its fifth annual cycle at the end of 2005 and in 2006 TRECVid will involve almost 70 research organizations, universities and other consortia. Throughout its existence, TRECVid has benchmarked both interactive and automatic manual searching for shots from within a video corpus, automatic detection of a variety of semantic and low-level video features, shot boundary detection and the detection of story boundaries in broadcast TV news. This paper will give an introduction to information retrieval (IR) evaluation from both a user and a system perspective, highlighting that system evaluation is by far the most prevalent type of evaluation carried out. We also include a summary of TRECVid as an example of a system evaluation benchmarking campaign and this allows us to discuss whether such campaigns are a good thing or a bad thing. There are arguments for and against these campaigns and we present some of them in the paper concluding that on balance they have had a very positive impact on research progress.",
"Tagging of multimedia content is becoming more and more widespread as web 2.0 sites, like Flickr and Facebook for images, YouTube and Vimeo for videos, have popularized tagging functionalities among their users. These user-generated tags are used to retrieve multimedia content, and to ease browsing and exploration of media collections, e.g. using tag clouds. However, not all media are equally tagged by users: using the current browsers, it is easy to tag a single photo, and even tagging a part of a photo, like a face, has become common in sites like Flickr and Facebook; on the other hand, tagging a video sequence is more complicated and time consuming, so that users just tag the overall content of a video. In this paper we present a system for automatic video annotation that increases the number of tags originally provided by users, and localizes them temporally, associating tags to shots. This approach exploits collective knowledge embedded in tags and Wikipedia, and visual similarity of keyframes and images uploaded to social sites like YouTube and Flickr.",
"Action recognition on large categories of unconstrained videos taken from the web is a very challenging problem compared to datasets like KTH (6 actions), IXMAS (13 actions), and Weizmann (10 actions). Challenges like camera motion, different viewpoints, large interclass variations, cluttered background, occlusions, bad illumination conditions, and poor quality of web videos cause the majority of the state-of-the-art action recognition approaches to fail. Also, an increased number of categories and the inclusion of actions with high confusion add to the challenges. In this paper, we propose using the scene context information obtained from moving and stationary pixels in the key frames, in conjunction with motion features, to solve the action recognition problem on a large (50 actions) dataset with videos from the web. We perform a combination of early and late fusion on multiple features to handle the very large number of categories. We demonstrate that scene context is a very important feature to perform action recognition on very large datasets. The proposed method does not require any kind of video stabilization, person detection, or tracking and pruning of features. Our approach gives good performance on a large number of action categories; it has been tested on the UCF50 dataset with 50 action categories, which is an extension of the UCF YouTube Action (UCF11) dataset containing 11 action categories. We also tested our approach on the KTH and HMDB51 datasets for comparison.",
"Video sharing websites have recently become a tremendous video source, which is easily accessible without any costs. This has encouraged researchers in the action recognition field to construct action database exploiting Web sources. However Web sources are generally too noisy to be used directly as a recognition database. Thus building action database from Web sources has required extensive human efforts on manual selection of video parts related to specified actions. In this paper, we introduce a novel method to automatically extract video shots related to given action keywords from Web videos according to their metadata and visual features. First, we select relevant videos among tagged Web videos based on the relevance between their tags and the given keyword. After segmenting selected videos into shots, we rank these shots exploiting their visual features in order to obtain shots of interest as top ranked shots. Especially, we propose to adopt Web images and human pose matching method in shot ranking step and show that this application helps to boost more relevant shots to the top. This unsupervised method of ours only requires the provision of action keywords such as ''surf wave'' or ''bake bread'' at the beginning. We have made large-scale experiments on various kinds of human actions as well as non-human actions and obtained promising results."
]
} |
1407.0623 | 2294432969 | Our approach locates the temporal positions of tags in videos at the keyframe level. We deal with a scenario in which there is no pre-defined set of tags. We report experiments about the use of different web sources (Flickr, Google, Bing). We show state-of-the-art results on DUT-WEBV, a large dataset of YouTube videos. We show results in a real-world scenario to perform open vocabulary tag annotation. Tagging of visual content is becoming more and more widespread as web-based services and social networks have popularized tagging functionalities among their users. These user-generated tags are used to ease browsing and exploration of media collections, e.g. using tag clouds, or to retrieve multimedia content. However, not all media are equally tagged by users. Using the current systems, it is easy to tag a single photo, and even tagging a part of a photo, like a face, has become common in sites like Flickr and Facebook. On the other hand, tagging a video sequence is more complicated and time consuming, so that users just tag the overall content of a video. In this paper we present a method for automatic video annotation that increases the number of tags originally provided by users, and localizes them temporally, associating tags to keyframes. Our approach exploits collective knowledge embedded in user-generated tags and web sources, and visual similarity of keyframes and images uploaded to social sites like YouTube and Flickr, as well as web sources like Google and Bing. Given a keyframe, our method is able to select "on the fly" from these visual sources the training exemplars that should be the most relevant for this test sample, and proceeds to transfer labels across similar images. Compared to existing video tagging approaches that require training classifiers for each tag, our system has few parameters, is easy to implement and can deal with an open vocabulary scenario. 
We demonstrate the approach on tag refinement and localization on DUT-WEBV, a large dataset of web videos, and show state-of-the-art results. | A vast amount of previous work has addressed the problem of online video tagging using a simple classification approach with multiple categories and classes. Siersdorfer et al. @cite_10 proposed a method that combines visual analysis and content redundancy, strongly present in social sharing websites, to improve the quality of annotations associated to online videos. They first detect the duplication and overlap between two videos, and then propagate the video-level tags using automatic tagging rules. Similarly, Zhao et al. @cite_22 investigated techniques which allow annotation of web videos from a data-driven perspective. Their system implements a tag recommendation algorithm that uses the tagging behaviors in the pool of retrieved near-duplicate videos. | {
"cite_N": [
"@cite_10",
"@cite_22"
],
"mid": [
"2099968307",
"2144127483"
],
"abstract": [
"The analysis of the leading social video sharing platform YouTube reveals a high amount of redundancy, in the form of videos with overlapping or duplicated content. In this paper, we show that this redundancy can provide useful information about connections between videos. We reveal these links using robust content-based video analysis techniques and exploit them for generating new tag assignments. To this end, we propose different tag propagation methods for automatically obtaining richer video annotations. Our techniques provide the user with additional information about videos, and lead to enhanced feature representations for applications such as automatic data organization and search. Experiments on video clustering and classification as well as a user evaluation demonstrate the viability of our approach.",
"With the proliferation of Web 2.0 applications, user-supplied social tags are commonly available in social media as a means to bridge the semantic gap. On the other hand, the explosive expansion of social web makes an overwhelming number of web videos available, among which there exists a large number of near-duplicate videos. In this paper, we investigate techniques which allow effective annotation of web videos from a data-driven perspective. A novel classifier-free video annotation framework is proposed by first retrieving visual duplicates and then suggesting representative tags. The significance of this paper lies in the addressing of two timely issues for annotating query videos. First, we provide a novel solution for fast near-duplicate video retrieval. Second, based on the outcome of near-duplicate search, we explore the potential that the data-driven annotation could be successful when huge volume of tagged web videos is freely accessible online. Experiments on cross sources (annotating Google videos and Yahoo! videos using YouTube videos) and cross time periods (annotating YouTube videos using historical data) show the effectiveness and efficiency of the proposed classifier-free approach for web video tag annotation."
]
} |
1407.0623 | 2294432969 | Our approach locates the temporal positions of tags in videos at the keyframe level. We deal with a scenario in which there is no pre-defined set of tags. We report experiments about the use of different web sources (Flickr, Google, Bing). We show state-of-the-art results on DUT-WEBV, a large dataset of YouTube videos. We show results in a real-world scenario to perform open vocabulary tag annotation. Tagging of visual content is becoming more and more widespread as web-based services and social networks have popularized tagging functionalities among their users. These user-generated tags are used to ease browsing and exploration of media collections, e.g. using tag clouds, or to retrieve multimedia content. However, not all media are equally tagged by users. Using the current systems, it is easy to tag a single photo, and even tagging a part of a photo, like a face, has become common in sites like Flickr and Facebook. On the other hand, tagging a video sequence is more complicated and time consuming, so that users just tag the overall content of a video. In this paper we present a method for automatic video annotation that increases the number of tags originally provided by users, and localizes them temporally, associating tags to keyframes. Our approach exploits collective knowledge embedded in user-generated tags and web sources, and visual similarity of keyframes and images uploaded to social sites like YouTube and Flickr, as well as web sources like Google and Bing. Given a keyframe, our method is able to select "on the fly" from these visual sources the training exemplars that should be the most relevant for this test sample, and proceeds to transfer labels across similar images. Compared to existing video tagging approaches that require training classifiers for each tag, our system has few parameters, is easy to implement and can deal with an open vocabulary scenario. 
We demonstrate the approach on tag refinement and localization on DUT-WEBV, a large dataset of web videos, and show state-of-the-art results. | A strong effort has been made to design effective methods for harvesting images and videos from the web to learn models of actions or events and use this knowledge to automatically annotate new videos. This idea follows similar successful approaches for image classification @cite_20 @cite_7 @cite_27 but it has been applied only for the particular case of single-label classification. To this end, a first attempt has been made by Ulges et al. @cite_52 who proposed to train a concept detection system on web videos from portals such as YouTube. A similar idea is presented in @cite_29 in which images collected from the web are used to learn representations of human actions and then this knowledge is used to automatically annotate actions in unconstrained videos. A main drawback of these works is that they require training classifiers for each label, and this procedure does not scale very well, especially on the web. Very recently, Kordumova et al. @cite_33 have also studied the problem of training detectors from social media, considering both image and video sources, obtaining state-of-the-art results in TRECVID 2013 and concluding that tagged images are preferable over tagged videos. | {
"cite_N": [
"@cite_33",
"@cite_7",
"@cite_29",
"@cite_52",
"@cite_27",
"@cite_20"
],
"mid": [
"2136902972",
"1987496434",
"2120601899",
"2003621468",
"2134135198",
"2172191903"
],
"abstract": [
"Learning video concept detectors from social media sources, such as Flickr images and YouTube videos, has the potential to address a wide variety of concept queries for video search. While the potential has been recognized by many, and progress on the topic has been impressive, we argue that key questions, crucial to know how to learn effective video concept detectors from social media examples, remain open. As an initial attempt to answer these questions, we conduct an experimental study using a video search engine which is capable of learning concept detectors from social media examples, be it socially tagged videos or socially tagged images. Within the video search engine we investigate three strategies for positive example selection, three negative example selection strategies and three learning strategies. The performance is evaluated on the challenging TRECVID 2012 benchmark consisting of 600 h of Internet video. From the experiments we derive four best practices: (1) tagged images are a better source for learning video concepts than tagged videos, (2) selecting tag relevant positive training examples is always beneficial, (3) selecting relevant negative examples is advantageous and should be treated differently for video and image sources, and (4) learning concept detectors with selected relevant training data before learning is better than incorporating the relevance during the learning process. The best practices within our video search engine lead to state-of-the-art performance in the TRECVID 2013 benchmark for concept detection without manually provided annotations.",
"Conventional supervised methods for image categorization rely on manually annotated (labeled) examples to learn good object models, which means their generality and scalability depends heavily on the amount of human effort available to help train them. We propose an unsupervised approach to construct discriminative models for categories specified simply by their names. We show that multiple-instance learning enables the recovery of robust category models from images returned by keyword-based search engines. By incorporating constraints that reflect the expected sparsity of true positive examples into a large-margin objective function, our approach remains accurate even when the available text annotations are imperfect and ambiguous. In addition, we show how to iteratively improve the learned classifier by automatically refining the representation of the ambiguously labeled examples. We demonstrate our method with benchmark datasets, and show that it performs well relative to both state-of-the-art unsupervised approaches and traditional fully supervised techniques.",
"This paper proposes a generic method for action recognition in uncontrolled videos. The idea is to use images collected from the Web to learn representations of actions and use this knowledge to automatically annotate actions in videos. Our approach is unsupervised in the sense that it requires no human intervention other than the text querying. Its benefits are two-fold: 1) we can improve retrieval of action images, and 2) we can collect a large generic database of action poses, which can then be used in tagging videos. We present experimental evidence that using action images collected from the Web, annotating actions is possible.",
"Concept detection is targeted at automatically labeling video content with semantic concepts appearing in it, like objects, locations, or activities. While concept detectors have become key components in many research prototypes for content-based video retrieval, their practical use is limited by the need for large-scale annotated training sets. To overcome this problem, we propose to train concept detectors on material downloaded from web-based video sharing portals like YouTube, such that training is based on tags given by users during upload, no manual annotation is required, and concept detection can scale up to thousands of concepts. On the downside, web video as training material is a complex domain, and the tags associated with it are weak and unreliable. Consequently, performance loss is to be expected when replacing high-quality state-of-the-art training sets with web video content. This paper presents a concept detection prototype named TubeTagger that utilizes YouTube content for an autonomous training. In quantitative experiments, we compare the performance when training on web video and on standard datasets from the literature. It is demonstrated that concept detection in web video is feasible, and that - when testing on YouTube videos - the YouTube-based detector outperforms the ones trained on standard training sets. By applying the YouTube-based prototype to datasets from the literature, we further demonstrate that: (1) If training annotations on the target domain are available, the resulting detectors significantly outperform the YouTube-based tagger. (2) If no annotations are available, the YouTube-based detector achieves comparable performance to the ones trained on standard datasets (moderate relative performance losses of 11.4 are measured) while offering the advantage of a fully automatic, scalable learning. (3) By enriching conventional training sets with online video material, performance improvements of 11.7 can be achieved when generalizing to domains unseen in training.",
"A well-built dataset is a necessary starting point for advanced computer vision research. It plays a crucial role in evaluation and provides a continuous challenge to state-of-the-art algorithms. Dataset collection is, however, a tedious and time-consuming task. This paper presents a novel automatic dataset collecting and model learning approach that uses object recognition techniques in an incremental method. The goal of this work is to use the tremendous resources of the web to learn robust object category models in order to detect and search for objects in real-world cluttered scenes. It mimics the human learning process of iteratively accumulating model knowledge and image examples. We adapt a non-parametric graphical model and propose an incremental learning framework. Our algorithm is capable of automatically collecting much larger object category datasets for 22 randomly selected classes from the Caltech 101 dataset. Furthermore, we offer not only more images in each object category dataset, but also a robust object model and meaningful image annotation. Our experiments show that OPTIMOL is capable of collecting image datasets that are superior to Caltech 101 and LabelMe.",
"Current approaches to object category recognition require datasets of training images to be manually prepared, with varying degrees of supervision. We present an approach that can learn an object category from just its name, by utilizing the raw output of image search engines available on the Internet. We develop a new model, TSI-pLSA, which extends pLSA (as applied to visual words) to include spatial information in a translation and scale invariant manner. Our approach can handle the high intra-class variability and large proportion of unrelated images returned by search engines. We evaluate the models on standard test sets, showing performance competitive with existing methods trained on hand prepared datasets"
]
} |
1407.0623 | 2294432969 | Our approach locates the temporal positions of tags in videos at the keyframe level. We deal with a scenario in which there is no pre-defined set of tags. We report experiments about the use of different web sources (Flickr, Google, Bing). We show state-of-the-art results on DUT-WEBV, a large dataset of YouTube videos. We show results in a real-world scenario to perform open vocabulary tag annotation. Tagging of visual content is becoming more and more widespread as web-based services and social networks have popularized tagging functionalities among their users. These user-generated tags are used to ease browsing and exploration of media collections, e.g. using tag clouds, or to retrieve multimedia content. However, not all media are equally tagged by users. Using the current systems, it is easy to tag a single photo, and even tagging a part of a photo, like a face, has become common in sites like Flickr and Facebook. On the other hand, tagging a video sequence is more complicated and time consuming, so that users just tag the overall content of a video. In this paper we present a method for automatic video annotation that increases the number of tags originally provided by users, and localizes them temporally, associating tags to keyframes. Our approach exploits collective knowledge embedded in user-generated tags and web sources, and visual similarity of keyframes and images uploaded to social sites like YouTube and Flickr, as well as web sources like Google and Bing. Given a keyframe, our method is able to select "on the fly" from these visual sources the training exemplars that should be the most relevant for this test sample, and proceeds to transfer labels across similar images. Compared to existing video tagging approaches that require training classifiers for each tag, our system has few parameters, is easy to implement and can deal with an open vocabulary scenario. 
We demonstrate the approach on tag refinement and localization on DUT-WEBV, a large dataset of web videos, and show state-of-the-art results. | Several methods have recently been proposed for unsupervised spatio-temporal segmentation of unconstrained videos @cite_2 @cite_42 @cite_3 . Hartmann et al. @cite_42 presented an object segmentation system applied to a large set of weakly and noisily tagged videos. They formulate this problem as learning weakly supervised classifiers for a set of independent spatio-temporal video segments in which the object seeds are refined using graphcut. Although this method shows promising results, the proposed system requires a high computational effort to process videos at a large scale. Similarly, Tang et al. @cite_3 have addressed keyframe segmentation in YouTube videos using a weakly supervised approach to segment semantic objects. The proposed method exploits negative video segments (i.e. those that are not related to the concept to be annotated) and their distance to the uncertain positive instances, based on the intuition that positive examples are less likely to be segments of the searched concept if they are near many negatives. Both these methods are able to classify each shot within the video either as coming from a particular concept (i.e. tag) or not, and they provide a rough tag-to-region assignment. | {
"cite_N": [
"@cite_42",
"@cite_3",
"@cite_2"
],
"mid": [
"122025198",
"2105297725",
"2030346542"
],
"abstract": [
"We propose to learn pixel-level segmentations of objects from weakly labeled (tagged) internet videos. Specifically, given a large collection of raw YouTube content, along with potentially noisy tags, our goal is to automatically generate spatiotemporal masks for each object, such as \"dog\", without employing any pre-trained object detectors. We formulate this problem as learning weakly supervised classifiers for a set of independent spatio-temporal segments. The object seeds obtained using segment-level classifiers are further refined using graphcuts to generate high-precision object masks. Our results, obtained by training on a dataset of 20,000 YouTube videos weakly tagged into 15 classes, demonstrate automatic extraction of pixel-level object masks. Evaluated against a ground-truthed subset of 50,000 frames with pixel-level annotations, we confirm that our proposed methods can learn good object masks just by watching YouTube.",
"The ubiquitous availability of Internet video offers the vision community the exciting opportunity to directly learn localized visual concepts from real-world imagery. Unfortunately, most such attempts are doomed because traditional approaches are ill-suited, both in terms of their computational characteristics and their inability to robustly contend with the label noise that plagues uncurated Internet content. We present CRANE, a weakly supervised algorithm that is specifically designed to learn under such conditions. First, we exploit the asymmetric availability of real-world training data, where small numbers of positive videos tagged with the concept are supplemented with large quantities of unreliable negative data. Second, we ensure that CRANE is robust to label noise, both in terms of tagged videos that fail to contain the concept as well as occasional negative videos that do. Finally, CRANE is highly parallelizable, making it practical to deploy at large scale without sacrificing the quality of the learned solution. Although CRANE is general, this paper focuses on segment annotation, where we show state-of-the-art pixel-level segmentation results on two datasets, one of which includes a training set of spatiotemporal segments from more than 20,000 videos.",
"We present an efficient and scalable technique for spatiotemporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a “region graph” over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high quality segmentations, which are temporally coherent with stable region boundaries, and allows subsequent applications to choose from varying levels of granularity. We further improve segmentation quality by using dense optical flow to guide temporal connections in the initial graph. We also propose two novel approaches to improve the scalability of our technique: (a) a parallel out-of-core algorithm that can process volumes much larger than an in-core algorithm, and (b) a clip-based processing algorithm that divides the video into overlapping clips in time, and segments them successively while enforcing consistency. We demonstrate hierarchical segmentations on video shots as long as 40 seconds, and even support a streaming mode for arbitrarily long videos, albeit without the ability to process them hierarchically."
]
} |
1407.0623 | 2294432969 | Our approach locates the temporal positions of tags in videos at the keyframe level. We deal with a scenario in which there is no pre-defined set of tags. We report experiments about the use of different web sources (Flickr, Google, Bing). We show state-of-the-art results on DUT-WEBV, a large dataset of YouTube videos. We show results in a real-world scenario to perform open vocabulary tag annotation. Tagging of visual content is becoming more and more widespread as web-based services and social networks have popularized tagging functionalities among their users. These user-generated tags are used to ease browsing and exploration of media collections, e.g. using tag clouds, or to retrieve multimedia content. However, not all media are equally tagged by users. With current systems it is easy to tag a single photo, and even tagging a part of a photo, like a face, has become common in sites like Flickr and Facebook. On the other hand, tagging a video sequence is more complicated and time consuming, so that users just tag the overall content of a video. In this paper we present a method for automatic video annotation that increases the number of tags originally provided by users, and localizes them temporally, associating tags to keyframes. Our approach exploits collective knowledge embedded in user-generated tags and web sources, and visual similarity of keyframes and images uploaded to social sites like YouTube and Flickr, as well as web sources like Google and Bing. Given a keyframe, our method is able to select "on the fly" from these visual sources the training exemplars that should be the most relevant for this test sample, and proceeds to transfer labels across similar images. Compared to existing video tagging approaches that require training classifiers for each tag, our system has few parameters, is easy to implement and can deal with an open vocabulary scenario. 
We demonstrate the approach on tag refinement and localization on DUT-WEBV, a large dataset of web videos, and show state-of-the-art results. | An early version of the proposed approach was introduced in our preliminary conference papers @cite_4 @cite_49 . In this paper we made key modifications to the algorithm and obtained significant improvements in the results. Unlike our previous work, we introduce multiple types of image sources for a more effective cross-media tag transfer; we design a vote weighting procedure based on visual similarity and use a temporal smoothing strategy which exploits the temporal continuity of a video; further, we show better performance in terms of both precision and recall. Finally, large-scale experiments have been carried out using a new public dataset @cite_12 @cite_53 , allowing fair comparisons w.r.t. other methods. | {
"cite_N": [
"@cite_53",
"@cite_4",
"@cite_12",
"@cite_49"
],
"mid": [
"2070004190",
"2022730193",
"173660979",
"2023520647"
],
"abstract": [
"Numerous web videos associated with rich metadata are available on the Internet today. While such metadata like video tags bring us facilitation and opportunities for video search and multimedia content understanding, some challenges also arise due to the fact that those video tags are usually annotated at the video level, while many tags actually only describe parts of the video content. How to localize the relevant parts or frames of web video for given tags is the key to many applications and research tasks. In this paper we propose combining topic model and relevance filtering to localize relevant frames. Our method is designed in three steps. First, we apply relevance filtering to assign relevance scores to video frames and a raw relevant frame set is obtained by selecting the top ranked frames. Then, we separate the frames into topics by mining the underlying semantics using latent Dirichlet allocation and use the raw relevance set as validation set to select relevant topics. Finally, the topical relevances are used to refine the raw relevant frame set and the final results are obtained. Experiment results on two real web video databases validate the effectiveness of the proposed approach.",
"Nowadays, almost any web site that provides means for sharing user-generated multimedia content, like Flickr, Facebook, YouTube and Vimeo, has tagging functionalities to let users annotate the material that they want to share. The tags are then used to retrieve the uploaded content, and to ease browsing and exploration of these collections, e.g. using tag clouds. However, while tagging a single image is straightforward, and sites like Flickr and Facebook also allow users to easily tag portions of the uploaded photos, tagging a video sequence is more cumbersome, so that users just tend to tag the overall content of a video. Moreover, the tagging process is completely manual, and often users tend to spend as little time as possible to annotate the material, resulting in a sparse annotation of the visual content. A semi-automatic process that helps users to tag a video sequence would improve the quality of annotations and thus the overall user experience. While research on image tagging has received considerable attention in recent years, there are still very few works that address the problem of automatically assigning tags to videos, locating them temporally within the video sequence. In this paper we present a system for video tag suggestion and temporal localization based on collective knowledge and visual similarity of frames. The algorithm suggests new tags that can be associated to a given keyframe exploiting the tags associated to videos and images uploaded to social sites like YouTube and Flickr and visual features.",
"Nowadays, numerous social videos have pervaded on the Web. Social web videos are characterized with the accompanying rich contextual information which describe the content of videos and thus greatly facilitate video search and browsing. Generally those context data such as tags are generated for the whole video, without temporal indication on when they actually appear in the video. However, many tags only describe parts of the video content. Therefore, tag localization, the process of assigning tags to the underlying relevant video segments or frames is gaining increasing research interests and a benchmark dataset for the fair evaluation of tag localization algorithms is highly desirable. In this paper, we describe and release a dataset called DUT-WEBV, which contains 1550 videos collected from YouTube portal by issuing 31 concepts as queries. These concepts cover a wide range of semantic aspects including scenes like “mountain”, events like “flood”, objects like “cows”, sites like “gas station”, and activities like “handshaking”, offering great challenges to the tag (i.e., concept) localization task. For each video of a tag, we carefully annotate the time durations when the tag appears in the video. Besides the video itself, the contextual information, such as thumbnail images, titles, and categories, is also provided. Together with this benchmark dataset, we present a baseline for tag localization using multiple instance learning approach. Finally, we discuss some open research issues for tag localization in web videos.",
"Tagging of multimedia content is becoming more and more widespread as web 2.0 sites, like Flickr and Facebook for images, YouTube and Vimeo for videos, have popularized tagging functionalities among their users. These user-generated tags are used to retrieve multimedia content, and to ease browsing and exploration of media collections, e.g. using tag clouds. However, not all media are equally tagged by users: with current browsers it is easy to tag a single photo, and even tagging a part of a photo, like a face, has become common in sites like Flickr and Facebook; on the other hand tagging a video sequence is more complicated and time consuming, so that users just tag the overall content of a video. In this paper we present a system for automatic video annotation that increases the number of tags originally provided by users, and localizes them temporally, associating tags to shots. This approach exploits collective knowledge embedded in tags and Wikipedia, and visual similarity of keyframes and images uploaded to social sites like YouTube and Flickr."
]
} |
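The cross-media tag transfer described in the row above (retrieve visually similar web images, then transfer their tags by similarity-weighted voting) can be sketched roughly as follows. This is an illustrative sketch only: the feature vectors, function names, and the simple cosine/top-k voting scheme are assumptions for exposition, not the paper's exact algorithm.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def transfer_tags(keyframe_feat, web_images, top_k=3):
    """web_images: list of (feature_vector, tags) retrieved from web sources.
    Each of the top_k visually most similar images votes for its tags with a
    weight equal to its similarity to the keyframe; tags are returned ranked
    by accumulated vote weight."""
    ranked = sorted(web_images,
                    key=lambda im: cosine(keyframe_feat, im[0]),
                    reverse=True)[:top_k]
    votes = {}
    for feat, tags in ranked:
        w = cosine(keyframe_feat, feat)
        for t in tags:
            votes[t] = votes.get(t, 0.0) + w
    return sorted(votes, key=votes.get, reverse=True)
```

A temporal smoothing step over consecutive keyframes, as the related work mentions, could then be layered on top of these per-keyframe votes.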
1407.0622 | 1795554267 | Twitter as a new form of social media potentially contains useful information that opens new opportunities for content analysis on tweets. This paper examines the predictive power of Twitter regarding the US presidential election of 2012. For this study, we analyzed 32 million tweets regarding the US presidential election by employing a combination of machine learning techniques. We devised an advanced classifier for sentiment analysis in order to increase the accuracy of Twitter content analysis. We carried out our analysis by comparing Twitter results with traditional opinion polls. In addition, we used the Latent Dirichlet Allocation model to extract the underlying topical structure from the selected tweets. Our results show that we can determine the popularity of candidates by running sentiment analysis. We can also uncover candidates' popularity in the US states by running the sentiment analysis algorithm on geo-tagged tweets. To the best of our knowledge, no previous work in the field has presented a systematic analysis of a considerable number of tweets employing the combination of analysis techniques with which we conducted this study. Thus, our results aptly suggest that Twitter as a well-known social medium is a valid source in predicting future events such as elections. This implies that understanding public opinions and trends via social media in turn allows us to propose a cost- and time-effective way not only for spreading and sharing information, but also for predicting future events. | As a social medium, Twitter has become omnipresent, providing hyper-connectivity for social networking and content sharing. Facebook established a status update field in June 2006, but Twitter took status sharing between people to cell phones four months later @cite_24 . Since then, Twitter has grown exponentially beyond status sharing. 
For instance, in 2009 the number of Twitter users was 18.2 million, a 1,448 per cent increase over 2008 @cite_35 . Then, what made Twitter grow so dramatically? One reason may be the simplicity of using Twitter. For instance, while blogging requires decent writing skills and a large amount of content to fill pages @cite_24 , Twitter, originally developed for mobile phones @cite_35 , restricts users to posting 140-character text messages, also known as tweets, to a network of others without any technical requirement of reciprocity @cite_16 . This encourages more users to post, which facilitates real-time diffusion of information @cite_16 . Thus, users can easily post and read tweets on the web, using different access methods, such as desktop computers, smartphones, and other devices @cite_35 . | {
"cite_N": [
"@cite_24",
"@cite_35",
"@cite_16"
],
"mid": [
"2040909284",
"2122305905",
"2084591134"
],
"abstract": [
"Despite the availability of the sensor and smart-phone devices to fulfill the ubiquitous computing vision, the-state-of-the-art falls short of this vision. We argue that the reason for this gap is the lack of an infrastructure to task utilize these devices for collaboration. We propose that microblogging services like Twitter can provide an \"open\" publish-subscribe infrastructure for sensors and smartphones, and pave the way for ubiquitous crowd-sourced sensing and collaboration applications. We design and implement a crowd-sourced sensing and collaboration system over Twitter, and showcase our system in the context of two applications: a crowd-sourced weather radar, and a participatory noise-mapping application. Our results from real-world Twitter experiments give insights into the feasibility of this approach and outline the research challenges in sensor smartphone integration to Twitter.",
"Social media technologies collapse multiple audiences into single contexts, making it difficult for people to use the same techniques online that they do to handle multiplicity in face-to-face conversation. This article investigates how content producers navigate ‘imagined audiences’ on Twitter. We talked with participants who have different types of followings to understand their techniques, including targeting different audiences, concealing subjects, and maintaining authenticity. Some techniques of audience management resemble the practices of ‘micro-celebrity’ and personal branding, both strategic self-commodification. Our model of the networked audience assumes a many-to-many communication through which individuals conceptualize an imagined audience evoked through their tweets.",
"We analyze the information credibility of news propagated through Twitter, a popular microblogging service. Previous research has shown that most of the messages posted on Twitter are truthful, but the service is also used to spread misinformation and false rumors, often unintentionally. In this paper we focus on automatic methods for assessing the credibility of a given set of tweets. Specifically, we analyze microblog postings related to \"trending\" topics, and classify them as credible or not credible, based on features extracted from them. We use features from users' posting and re-posting (\"re-tweeting\") behavior, from the text of the posts, and from citations to external sources. We evaluate our methods using a significant number of human assessments about the credibility of items on a recent sample of Twitter postings. Our results show that there are measurable differences in the way messages propagate, which can be used to classify them automatically as credible or not credible, with precision and recall in the range of 70% to 80%."
]
} |
1407.0622 | 1795554267 | Twitter as a new form of social media potentially contains useful information that opens new opportunities for content analysis on tweets. This paper examines the predictive power of Twitter regarding the US presidential election of 2012. For this study, we analyzed 32 million tweets regarding the US presidential election by employing a combination of machine learning techniques. We devised an advanced classifier for sentiment analysis in order to increase the accuracy of Twitter content analysis. We carried out our analysis by comparing Twitter results with traditional opinion polls. In addition, we used the Latent Dirichlet Allocation model to extract the underlying topical structure from the selected tweets. Our results show that we can determine the popularity of candidates by running sentiment analysis. We can also uncover candidates' popularity in the US states by running the sentiment analysis algorithm on geo-tagged tweets. To the best of our knowledge, no previous work in the field has presented a systematic analysis of a considerable number of tweets employing the combination of analysis techniques with which we conducted this study. Thus, our results aptly suggest that Twitter as a well-known social medium is a valid source in predicting future events such as elections. This implies that understanding public opinions and trends via social media in turn allows us to propose a cost- and time-effective way not only for spreading and sharing information, but also for predicting future events. | Marwick argued that Facebook or Twitter users' imagined audience might be different from the actual readers who would be interested in their tweets and posts @cite_35 . Another explanation may lie in a big data fallacy rooted in demographic bias: users tend to be young, and big data itself may not be statistically representative of the whole population. 
For instance, Twitter users are predominantly male, but the rate of male users was found to have decreased, contrary to the expectation that it would increase @cite_17 . For these reasons, some contend that the past predictive successes of Twitter data do not guarantee that its analysis results generalize @cite_6 . Having a specific conception of users' online identity presentation and an understanding of Twitter users can enable better observations and predictions, since such an understanding can reduce the biases of tweet-based analysis @cite_17 . While conducting this study, we could partly detect users' gender and cities of residence. | {
"cite_N": [
"@cite_35",
"@cite_6",
"@cite_17"
],
"mid": [
"2122305905",
"2073979932",
"2167102709"
],
"abstract": [
"Social media technologies collapse multiple audiences into single contexts, making it difficult for people to use the same techniques online that they do to handle multiplicity in face-to-face conversation. This article investigates how content producers navigate ‘imagined audiences’ on Twitter. We talked with participants who have different types of followings to understand their techniques, including targeting different audiences, concealing subjects, and maintaining authenticity. Some techniques of audience management resemble the practices of ‘micro-celebrity’ and personal branding, both strategic self-commodification. Our model of the networked audience assumes a many-to-many communication through which individuals conceptualize an imagined audience evoked through their tweets.",
"The power to predict outcomes based on Twitter data is greatly exaggerated, especially for political elections.",
"Every second, the thoughts and feelings of millions of people across the world are recorded in the form of 140-character tweets using Twitter. However, despite the enormous potential presented by this remarkable data source, we still do not have an understanding of the Twitter population itself: Who are the Twitter users? How representative of the overall population are they? In this paper, we take the first steps towards answering these questions by analyzing data on a set of Twitter users representing over 1% of the U.S. population. We develop techniques that allow us to compare the Twitter population to the U.S. population along three axes (geography, gender, and race/ethnicity), and find that the Twitter population is a highly non-uniform sample of the population."
]
} |
1407.0622 | 1795554267 | Twitter as a new form of social media potentially contains useful information that opens new opportunities for content analysis on tweets. This paper examines the predictive power of Twitter regarding the US presidential election of 2012. For this study, we analyzed 32 million tweets regarding the US presidential election by employing a combination of machine learning techniques. We devised an advanced classifier for sentiment analysis in order to increase the accuracy of Twitter content analysis. We carried out our analysis by comparing Twitter results with traditional opinion polls. In addition, we used the Latent Dirichlet Allocation model to extract the underlying topical structure from the selected tweets. Our results show that we can determine the popularity of candidates by running sentiment analysis. We can also uncover candidates' popularity in the US states by running the sentiment analysis algorithm on geo-tagged tweets. To the best of our knowledge, no previous work in the field has presented a systematic analysis of a considerable number of tweets employing the combination of analysis techniques with which we conducted this study. Thus, our results aptly suggest that Twitter as a well-known social medium is a valid source in predicting future events such as elections. This implies that understanding public opinions and trends via social media in turn allows us to propose a cost- and time-effective way not only for spreading and sharing information, but also for predicting future events. | There is a claim that there is no correlation between social media data and electoral outcomes, since Twitter data analyzed using lexicon-based sentiment analysis did not predict the 2010 US congressional elections @cite_12 . Likewise, Google Trends was not predictive for the 2008 and the 2010 US elections @cite_11 . 
However, these claims may not generalize, since they are based on comparing the candidates who won the 2008 congressional elections to the candidates whose names were searched frequently on Google or appeared on Twitter. In order to obtain more credible results, more systematic methods would be necessary to detect whether candidates' names were used positively, negatively, or neutrally, instead of focusing on how frequently candidates' names were simply searched on Google or appeared on Twitter. In addition, tweets that simply mention political parties are not sufficient either @cite_29 . Other causes of prediction failure are inadequate demographic data and the existence of spammers, propagandists, and fake accounts in social media @cite_12 . | {
"cite_N": [
"@cite_29",
"@cite_12",
"@cite_11"
],
"mid": [
"2127925090",
"",
"1792679419"
],
"abstract": [
"To what extent can one use Twitter in opinion polls for political elections? Merely counting Twitter messages mentioning political party names is no guarantee for obtaining good election predictions. By improving the quality of the document collection and by performing sentiment analysis, predictions based on entity counts in tweets can be considerably improved, and become nearly as good as traditionally obtained opinion polls.",
"",
"In recent years several researchers have reported that the volume of Google Trends and Twitter chat over time can be used to predict several kinds of social and consumer metrics. From the success of new movies before their release to the marketability of consumer goods to the prediction of voting results in the recent 2009 German elections, Google Trends and Twitter message volume have been treated as indispensable tools for not only recording current social trends, but even for predicting the future. This is surprising, given the significant differences in the demographics of voters and people who use social networks and Web tools. But is there some underlying logic behind these predictions or are they simply a matter of luck? With this work we wanted to test their predictive power against the US elections. One could argue that, following the previous research literature, and given the high utilization that the Web and the social networks have in the US, Google Trends and Twitter volume may be able to predict the outcomes of the US Congressional elections. In this paper we report that Google Trends was, actually, not a good predictor of both the 2008 and 2010 elections, and we offer some explanation on why this may be the case. In a forthcoming paper we report on our analysis of Twitter."
]
} |
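The lexicon-based sentiment analysis discussed in the row above can be illustrated with a minimal sketch: score each tweet against small positive/negative word lists and aggregate a net sentiment per candidate. The word lists, function names, and scoring rule here are invented for illustration and are far simpler than the advanced classifier the paper itself describes.

```python
# Hypothetical toy lexicons -- a real system would use a curated lexicon.
POSITIVE = {"great", "win", "support", "best"}
NEGATIVE = {"bad", "lose", "fail", "worst"}

def sentiment(tweet: str) -> int:
    """Return +1, -1, or 0 by counting lexicon hits in the tweet's words."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

def candidate_sentiment(tweets, candidates):
    """Aggregate net sentiment per candidate over the tweets that mention them."""
    totals = {c: 0 for c in candidates}
    for t in tweets:
        s = sentiment(t)
        for c in candidates:
            if c.lower() in t.lower():
                totals[c] += s
    return totals
```

As the related work notes, such naive counting and scoring is exactly what made some election predictions fail; distinguishing positive, negative, and neutral uses of a candidate's name is the minimum refinement the text argues for.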
1407.0622 | 1795554267 | Twitter as a new form of social media potentially contains useful information that opens new opportunities for content analysis on tweets. This paper examines the predictive power of Twitter regarding the US presidential election of 2012. For this study, we analyzed 32 million tweets regarding the US presidential election by employing a combination of machine learning techniques. We devised an advanced classifier for sentiment analysis in order to increase the accuracy of Twitter content analysis. We carried out our analysis by comparing Twitter results with traditional opinion polls. In addition, we used the Latent Dirichlet Allocation model to extract the underlying topical structure from the selected tweets. Our results show that we can determine the popularity of candidates by running sentiment analysis. We can also uncover candidates' popularity in the US states by running the sentiment analysis algorithm on geo-tagged tweets. To the best of our knowledge, no previous work in the field has presented a systematic analysis of a considerable number of tweets employing the combination of analysis techniques with which we conducted this study. Thus, our results aptly suggest that Twitter as a well-known social medium is a valid source in predicting future events such as elections. This implies that understanding public opinions and trends via social media in turn allows us to propose a cost- and time-effective way not only for spreading and sharing information, but also for predicting future events. | Surprisingly, an analysis of 50 million collected tweets proved that even noisy information in social media such as Twitter can be used as a proxy for public opinion @cite_33 . The authors found that Twitter data provides more than factual information about public opinions on a specific topic, yielding better results than information from the prediction market. 
For instance, Twitter data can be used as a social sensor for real-time events and to forecast the box-office revenues of movies @cite_13 . The rate at which tweets are posted about a particular movie has a strong positive correlation with the box-office gross. In addition, Twitter as a platform can validly reflect offline political sentiment @cite_7 , and it can mirror consumer confidence. Interestingly, even the mere volume of tweets, dominated by a small number of heavy users, predicted an election result, such as the result of the 2011 Irish general election @cite_30 , and even came close to traditional election polls @cite_7 . This aptly suggests that social media data, especially from Twitter, can replace costly and time-intensive polling methods @cite_14 . Thus, we argue that social media can be used as a credible source for predicting the near future. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_33",
"@cite_7",
"@cite_13"
],
"mid": [
"137217113",
"2122369144",
"",
"1590495275",
"2015186536"
],
"abstract": [
"The body of content available on Twitter undoubtedly contains a diverse range of political insight and commentary. But, to what extent is this representative of an electorate? Can we model political sentiment effectively enough to capture the voting intentions of a nation during an election campaign? We use the recent Irish General Election as a case study for investigating the potential to model political sentiment through mining of social media. Our approach combines sentiment analysis using supervised learning and volume-based measures. We evaluate against the conventional election polls and the final election result. We find that social analytics using both volume-based measures and sentiment analysis are predictive and we make a number of observations related to the task of monitoring public sentiment during an election campaign, including examining a variety of sample sizes, time periods as well as methods for qualitatively exploring the underlying content.",
"We connect measures of public opinion measured from polls with sentiment measured from text. We analyze several surveys on consumer confidence and political opinion over the 2008 to 2009 period, and find they correlate to sentiment word frequencies in contemporaneous Twitter messages. While our results vary across datasets, in several cases the correlations are as high as 80%, and capture important large-scale trends. The results highlight the potential of text streams as a substitute and supplement for traditional polling.",
"",
"Twitter is a microblogging website where users read and write millions of short messages on a variety of topics every day. This study uses the context of the German federal election to investigate whether Twitter is used as a forum for political deliberation and whether online messages on Twitter validly mirror offline political sentiment. Using LIWC text analysis software, we conducted a content-analysis of over 100,000 messages containing a reference to either a political party or a politician. Our results show that Twitter is indeed used extensively for political deliberation. We find that the mere number of messages mentioning a party reflects the election result. Moreover, joint mentions of two parties are in line with real world political ties and coalitions. An analysis of the tweets’ political sentiment demonstrates close correspondence to the parties' and politicians’ political positions indicating that the content of Twitter messages plausibly reflects the offline political landscape. We discuss the use of microblogging message content as a valid indicator of political sentiment and derive suggestions for further research.",
"In recent years, social media has become ubiquitous and important for social networking and content sharing. And yet, the content that is generated from these websites remains largely untapped. In this paper, we demonstrate how social media content can be used to predict real-world outcomes. In particular, we use the chatter from Twitter.com to forecast box-office revenues for movies. We show that a simple model built from the rate at which tweets are created about particular topics can outperform market-based predictors. We further demonstrate how sentiments extracted from Twitter can be utilized to improve the forecasting power of social media."
]
} |
1407.0549 | 23255398 | Applications written in low-level languages without type or memory safety are especially prone to memory corruption. Attackers gain code execution capabilities through such applications despite all currently deployed defenses by exploiting memory corruption vulnerabilities. Control-Flow Integrity (CFI) is a promising defense mechanism that restricts open control-flow transfers to a static set of well-known locations. We present Lockdown, an approach to dynamic CFI that protects legacy, binary-only executables and libraries. Lockdown adaptively learns the control-flow graph of a running process using information from a trusted dynamic loader. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks. Our prototype implementation shows that dynamic CFI results in low performance overhead. | Control-Flow Integrity (CFI) @cite_21 and its extension XFI @cite_47 restrict the control-flow of an application at runtime to a statically determined control-flow graph. Each indirect control-flow transfer (an indirect call, indirect jump, or function return) is allowed to transfer control at runtime only to the set of statically determined targets of this code location. | {
"cite_N": [
"@cite_47",
"@cite_21"
],
"mid": [
"2148686658",
"2109219878"
],
"abstract": [
"XFI is a comprehensive protection system that offers both flexible access control and fundamental integrity guarantees, at any privilege level and even for legacy code in commodity systems. For this purpose, XFI combines static analysis with inline software guards and a two-stack execution model. We have implemented XFI for Windows on the x86 architecture using binary rewriting and a simple, stand-alone verifier; the implementation's correctness depends on the verifier, but not on the rewriter. We have applied XFI to software such as device drivers and multimedia codecs. The resulting modules function safely within both kernel and user-mode address spaces, with only modest enforcement overheads.",
"Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions."
]
} |
1407.0549 | 23255398 | Applications written in low-level languages without type or memory safety are especially prone to memory corruption. Attackers gain code execution capabilities through such applications despite all currently deployed defenses by exploiting memory corruption vulnerabilities. Control-Flow Integrity (CFI) is a promising defense mechanism that restricts open control-flow transfers to a static set of well-known locations. We present Lockdown, an approach to dynamic CFI that protects legacy, binary-only executables and libraries. Lockdown adaptively learns the control-flow graph of a running process using information from a trusted dynamic loader. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks. Our prototype implementation shows that dynamic CFI results in low performance overhead. | Second, the initial upper bound for precision is possibly limited through the implementation of the control-flow checks ( @cite_37 list common limitations in CFI implementations). Practical implementations often maintain three global sets of possible targets instead of one set per control-flow transfer: one target set each for indirect jumps, indirect calls, and function returns. The control-flow checks limit the transfers to addresses in this set. This policy is an improvement compared to unchecked control-flow transfers but overly permissive as an attacker can hijack control-flow to any entry in the set. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2022292029"
],
"abstract": [
"As existing defenses like ASLR, DEP, and stack cookies are not sufficient to stop determined attackers from exploiting our software, interest in Control Flow Integrity (CFI) is growing. In its ideal form, CFI prevents flows of control that were not intended by the original program, effectively putting a stop to exploitation based on return oriented programming (and many other attacks besides). Two main problems have prevented CFI from being deployed in practice. First, many CFI implementations require source code or debug information that is typically not available for commercial software. Second, in its ideal form, the technique is very expensive. It is for this reason that current research efforts focus on making CFI fast and practical. Specifically, much of the work on practical CFI is applicable to binaries, and improves performance by enforcing a looser notion of control flow integrity. In this paper, we examine the security implications of such looser notions of CFI: are they still able to prevent code reuse attacks, and if not, how hard is it to bypass its protection? Specifically, we show that with two new types of gadgets, return oriented programming is still possible. We assess the availability of our gadget sets, and demonstrate the practicality of these results with a practical exploit against Internet Explorer that bypasses modern CFI implementations."
]
} |
1407.0549 | 23255398 | Applications written in low-level languages without type or memory safety are especially prone to memory corruption. Attackers gain code execution capabilities through such applications despite all currently deployed defenses by exploiting memory corruption vulnerabilities. Control-Flow Integrity (CFI) is a promising defense mechanism that restricts open control-flow transfers to a static set of well-known locations. We present Lockdown, an approach to dynamic CFI that protects legacy, binary-only executables and libraries. Lockdown adaptively learns the control-flow graph of a running process using information from a trusted dynamic loader. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks. Our prototype implementation shows that dynamic CFI results in low performance overhead. | Lockdown is a dynamic approach that enforces a stricter, dynamically constructed control-flow graph on top of a dynamic sandbox for binaries. The sandbox ensures code integrity, adds a safe shadow stack that protects against return-oriented programming attacks @cite_33 , and enforces dynamic control-flow checks. The secure loader protects the GOT and GOT.PLT data structures from malicious modifications. | {
"cite_N": [
"@cite_33"
],
"mid": [
"2162800072"
],
"abstract": [
"We present new techniques that allow a return-into-libc attack to be mounted on x86 executables that calls no functions at all. Our attack combines a large number of short instruction sequences to build gadgets that allow arbitrary computation. We show how to discover such instruction sequences by means of static analysis. We make use, in an essential way, of the properties of the x86 instruction set."
]
} |
1407.0549 | 23255398 | Applications written in low-level languages without type or memory safety are especially prone to memory corruption. Attackers gain code execution capabilities through such applications despite all currently deployed defenses by exploiting memory corruption vulnerabilities. Control-Flow Integrity (CFI) is a promising defense mechanism that restricts open control-flow transfers to a static set of well-known locations. We present Lockdown, an approach to dynamic CFI that protects legacy, binary-only executables and libraries. Lockdown adaptively learns the control-flow graph of a running process using information from a trusted dynamic loader. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks. Our prototype implementation shows that dynamic CFI results in low performance overhead. | Several DBT systems exist with different performance characteristics. Valgrind @cite_43 and PIN @cite_4 offer a high-level runtime interface resulting in higher performance costs while DynamoRIO @cite_38 and libdetox @cite_45 support a more direct translation mechanism with low overhead translating application code on the granularity of basic blocks. We build on libdetox which has already been used to implement several security policies. | {
"cite_N": [
"@cite_43",
"@cite_45",
"@cite_4",
"@cite_38"
],
"mid": [
"2156858199",
"2105904466",
"2134633067",
"2161992906"
],
"abstract": [
"Dynamic binary instrumentation (DBI) frameworks make it easy to build dynamic binary analysis (DBA) tools such as checkers and profilers. Much of the focus on DBI frameworks has been on performance; little attention has been paid to their capabilities. As a result, we believe the potential of DBI has not been fully exploited. In this paper we describe Valgrind, a DBI framework designed for building heavyweight DBA tools. We focus on its unique support for shadow values-a powerful but previously little-studied and difficult-to-implement DBA technique, which requires a tool to shadow every register and memory value with another value that describes it. This support accounts for several crucial design features that distinguish Valgrind from other DBI frameworks. Because of these features, lightweight tools built with Valgrind run comparatively slowly, but Valgrind can be used to build more interesting, heavyweight tools that are difficult or impossible to build with other DBI frameworks such as Pin and DynamoRIO.",
"This paper presents an approach to the safe execution of applications based on software-based fault isolation and policy-based system call authorization. A running application is encapsulated in an additional layer of protection using dynamic binary translation in user-space. This virtualization layer dynamically recompiles the machine code and adds multiple dynamic security guards that verify the running code to protect and contain the application. The binary translation system redirects all system calls to a policy-based system call authorization framework. This interposition framework validates every system call based on the given arguments and the location of the system call. Depending on the user-loadable policy and an extensible handler mechanism the framework decides whether a system call is allowed, rejected, or redirected to a specific user-space handler in the virtualization layer. This paper offers an in-depth analysis of the different security guarantees and a performance analysis of libdetox, a prototype of the full protection platform. The combination of software-based fault isolation and policy-based system call authorization imposes only low overhead and is therefore an attractive option to encapsulate and sandbox applications to improve host security.",
"Robust and powerful software instrumentation tools are essential for program analysis tasks such as profiling, performance evaluation, and bug detection. To meet this need, we have developed a new instrumentation system called Pin. Our goals are to provide easy-to-use, portable, transparent, and efficient instrumentation. Instrumentation tools (called Pintools) are written in C/C++ using Pin's rich API. Pin follows the model of ATOM, allowing the tool writer to analyze an application at the instruction level without the need for detailed knowledge of the underlying instruction set. The API is designed to be architecture independent whenever possible, making Pintools source compatible across different architectures. However, a Pintool can access architecture-specific details when necessary. Instrumentation with Pin is mostly transparent as the application and Pintool observe the application's original, uninstrumented behavior. Pin uses dynamic compilation to instrument executables while they are running. For efficiency, Pin uses several techniques, including inlining, register re-allocation, liveness analysis, and instruction scheduling to optimize instrumentation. This fully automated approach delivers significantly better instrumentation performance than similar tools. For example, Pin is 3.3x faster than Valgrind and 2x faster than DynamoRIO for basic-block counting. To illustrate Pin's versatility, we describe two Pintools in daily use to analyze production software. Pin is publicly available for Linux platforms on four architectures: IA32 (32-bit x86), EM64T (64-bit x86), Itanium®, and ARM. In the ten months since Pin 2 was released in July 2004, there have been over 3000 downloads from its website.",
"Dynamic optimization is emerging as a promising approach to overcome many of the obstacles of traditional static compilation. But while there are a number of compiler infrastructures for developing static optimizations, there are very few for developing dynamic optimizations. We present a framework for implementing dynamic analyses and optimizations. We provide an interface for building external modules, or clients, for the DynamoRIO dynamic code modification system. This interface abstracts away many low-level details of the DynamoRIO runtime system while exposing a simple and powerful, yet efficient and lightweight API. This is achieved by restricting optimization units to linear streams of code and using adaptive levels of detail for representing instructions. The interface is not restricted to optimization and can be used for instrumentation, profiling, dynamic translation, etc. To demonstrate the usefulness and effectiveness of our framework, we implemented several optimizations. These improve the performance of some applications by as much as 40 relative to native execution. The average speedup relative to base DynamoRIO performance is 12 ."
]
} |
1407.0549 | 23255398 | Applications written in low-level languages without type or memory safety are especially prone to memory corruption. Attackers gain code execution capabilities through such applications despite all currently deployed defenses by exploiting memory corruption vulnerabilities. Control-Flow Integrity (CFI) is a promising defense mechanism that restricts open control-flow transfers to a static set of well-known locations. We present Lockdown, an approach to dynamic CFI that protects legacy, binary-only executables and libraries. Lockdown adaptively learns the control-flow graph of a running process using information from a trusted dynamic loader. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks. Our prototype implementation shows that dynamic CFI results in low performance overhead. | A security policy can only be enforced if the translation system itself is secure. Libdetox splits the user-space address space into two domains: the application domain and the trusted binary translator domain. This design protects the binary translation system against an attacker that can modify the address space of the running application as the attacker cannot reach the trusted DBT domain. Libdetox uses a separate translator stack and separate memory regions from the running application. Libdetox enforces the following properties: (i) no untranslated code is ever executed; (ii) translated code is executed in the application domain; (iii) no pointer to the trusted domain is ever stored in attacker-accessible memory. The application triggers a trap into the trusted domain when (i) it executes a system call, (ii) executes untranslated code, or (iii) a heavy-weight security check is triggered. 
Libdetox can be extended by the addition of a trusted loader to the trusted computing domain @cite_25 thereby protecting the SFI system from attacks against the loader when loading or unloading shared libraries. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2117479921"
],
"abstract": [
"The standard loader (ld.so) is a common target of attacks. The loader is a trusted component of the application, and faults in the loader are problematic, e.g., they may lead to local privilege escalation for SUID binaries. Software-based fault isolation (SFI) provides a framework to execute arbitrary code while protecting the host system. A problem of current approaches to SFI is that fault isolation is decoupled from the dynamic loader, which is treated as a black box. The sandbox has no information about the (expected) execution behavior of the application and the connections between different shared objects. As a consequence, SFI is limited in its ability to identify devious application behavior. This paper presents a new approach to run untrusted code in a user-space sandbox. The approach replaces the standard loader with a security-aware trusted loader. The secure loader and the sandbox together cooperate to allow controlled execution of untrusted programs. A secure loader makes security a first class concept and ensures that the SFI system does not allow any unchecked code to be executed. The user-space sandbox builds on the secure loader and subsequently dynamically checks for malicious code and ensures that all control flow instructions of the application adhere to an execution model. The combination of the secure loader and the user-space sandbox enables the safe execution of untrusted code in user-space. Code injection attacks are stopped before any unintended code is executed. Furthermore, additional information provided by the loader can be used to support additional security properties, e.g., inlining of Procedure Linkage Table calls reduces the number of indirect control flow transfers and therefore limits jump-oriented attacks. This approach implements a secure platform for privileged applications and applications reachable over the network that anticipates and confines security threats from the beginning."
]
} |
1407.0549 | 23255398 | Applications written in low-level languages without type or memory safety are especially prone to memory corruption. Attackers gain code execution capabilities through such applications despite all currently deployed defenses by exploiting memory corruption vulnerabilities. Control-Flow Integrity (CFI) is a promising defense mechanism that restricts open control-flow transfers to a static set of well-known locations. We present Lockdown, an approach to dynamic CFI that protects legacy, binary-only executables and libraries. Lockdown adaptively learns the control-flow graph of a running process using information from a trusted dynamic loader. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks. Our prototype implementation shows that dynamic CFI results in low performance overhead. | The combination of trusted loader and dynamic binary translation system implements the following security guarantees: a shadow stack protects the integrity of return instruction pointers on the stack at all times; the secure loader protects the data structures that are used to execute functions in other loaded libraries at runtime; and the integrity of the security mechanism is guaranteed by the binary translation system. The shadow stack is implemented by translating call and return instructions @cite_45 . Translated call instructions push the return instruction pointer on both the application stack and the shadow stack in the trusted domain. Translated return instructions check the equivalence between the return instruction pointer on the application stack and the shadow stack; if the pointers are equivalent then control is transferred to the translated code block identified by the code pointer on the shadow stack. | {
"cite_N": [
"@cite_45"
],
"mid": [
"2105904466"
],
"abstract": [
"This paper presents an approach to the safe execution of applications based on software-based fault isolation and policy-based system call authorization. A running application is encapsulated in an additional layer of protection using dynamic binary translation in user-space. This virtualization layer dynamically recompiles the machine code and adds multiple dynamic security guards that verify the running code to protect and contain the application. The binary translation system redirects all system calls to a policy-based system call authorization framework. This interposition framework validates every system call based on the given arguments and the location of the system call. Depending on the user-loadable policy and an extensible handler mechanism the framework decides whether a system call is allowed, rejected, or redirected to a specific user-space handler in the virtualization layer. This paper offers an in-depth analysis of the different security guarantees and a performance analysis of libdetox, a prototype of the full protection platform. The combination of software-based fault isolation and policy-based system call authorization imposes only low overhead and is therefore an attractive option to encapsulate and sandbox applications to improve host security."
]
} |
1407.0549 | 23255398 | Applications written in low-level languages without type or memory safety are especially prone to memory corruption. Attackers gain code execution capabilities through such applications despite all currently deployed defenses by exploiting memory corruption vulnerabilities. Control-Flow Integrity (CFI) is a promising defense mechanism that restricts open control-flow transfers to a static set of well-known locations. We present Lockdown, an approach to dynamic CFI that protects legacy, binary-only executables and libraries. Lockdown adaptively learns the control-flow graph of a running process using information from a trusted dynamic loader. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks. Our prototype implementation shows that dynamic CFI results in low performance overhead. | Modern Unix operating systems and the Linux kernel use the Executable and Linkable Format @cite_30 @cite_0 (ELF) to specify the on-disk layout of applications, libraries, and compiled objects. A file that uses the ELF format to define its internal layout is called Dynamic Shared Object (DSO). The ELF format defines two views for each DSO. The first view is the program header that contains information about segments; the program header controls how the segments must be mapped from disk into the process image. The second view is the section header table; this table contains the more detailed section definitions. | {
"cite_N": [
"@cite_30",
"@cite_0"
],
"mid": [
"2021806553",
"2151829269"
],
"abstract": [
"From UNIX Labs--a comprehensive manual covering software installation, low-level system information, program loading, dynamic linking, libraries, formats, protocols, and system commands that comprise the binary interface for SVR4.",
"Today, shared libraries are ubiquitous. Developers use them for multiple reasons and create them just as they would create application code. This is a problem, though, since on many platforms some additional techniques must be applied even to generate decent code. Even more knowledge is needed to generate optimized code. This paper introduces the required rules and techniques. In addition, it introduces the concept of ABI (Application Binary Interface) stability and shows how to manage it. 1 Preface For a long time, programmers collected commonly used code in libraries so that code could be reused. This saves development time and reduces errors since reused code only has to be debugged once. With systems running dozens or hundreds of processes at the same time reuse of the code at link-time solves only part of the problem. Many processes will use the same pieces of code which they import for libraries. With the memory management systems in modern operating systems it is also possible to share the code at run-time. This is done by loading the code into physical memory only once and reusing it in multiple processes via virtual memory. Libraries of this kind are called shared libraries. The concept is not very new. Operating system designers implemented extensions to their system using the infrastructure they used before. The extension to the OS could be done transparently for the user. But the parts the user directly has to deal with created initially problems."
]
} |
1407.0566 | 1996060862 | In the context of a myriad of mobile apps which collect personally identifiable information (PII) and a prospective market place of personal data, we investigate a user-centric monetary valuation of mobile PII. During a 6-week long user study in a living lab deployment with 60 participants, we collected their daily valuations of 4 categories of mobile PII (communication, e.g. phonecalls made/received, applications, e.g. time spent on different apps, location and media, e.g. photos taken) at three levels of complexity (individual data points, aggregated statistics and processed, i.e. meaningful interpretations of the data). In order to obtain honest valuations, we employ a reverse second price auction mechanism. Our findings show that the most sensitive and valued category of personal information is location. We report statistically significant associations between actual mobile usage, personal dispositions, and bidding behavior. Finally, we outline key implications for the design of mobile services and future markets of personal data. | In recent years, researchers have analyzed the factors that can influence a person's disclosure behavior and economic valuation of personal information. Demographic characteristics, such as gender and age, have been found to affect disclosure attitudes and behavior. Several studies have identified gender differences concerning privacy concerns and consequent information disclosure behaviors: for example, women are generally more protective of their online privacy @cite_22 @cite_18 . Age also plays a role in information disclosure behaviors: in a study on Facebook usage, Christofides @cite_38 found that adolescents disclose more information. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_22"
],
"mid": [
"2146427553",
"",
"2166213223"
],
"abstract": [
"People of all ages are increasingly exposed to online environments that encourage them to share and connect with others. However, there is a perception that adolescents are particularly susceptible to these cues and share more online than do other age groups. With a group of 288 adolescents and 285 adults, we explored differences and similarities in use of Facebook for information sharing and use of the controls to protect their privacy. Adolescents reported disclosing more information on Facebook and using the privacy settings less than adults. Despite these differences, the results indicated that adolescents and adults were more similar than different in the factors that predicted information disclosure and control. Adolescents spent more time on Facebook, which partially mediated the relationship between group (adolescents vs. adults) and disclosure. Self-esteem partially mediated the relationship between group and information control, with adults having higher self-esteem than adolescents.",
"",
"Individuals communicate and form relationships through Internet social networking websites such as Facebook and MySpace. We study risk taking, trust, and privacy concerns with regard to social networking websites among 205 college students using both reliable scales and behavior. Individuals with profiles on social networking websites have greater risk taking attitudes than those who do not; greater risk taking attitudes exist among men than women. Facebook has a greater sense of trust than MySpace. General privacy concerns and identity information disclosure concerns are of greater concern to women than men. Greater percentages of men than women display their phone numbers and home addresses on social networking websites. Social networking websites should inform potential users that risk taking and privacy concerns are potentially relevant and important concerns before individuals sign-up and create social networking websites."
]
} |
1407.0566 | 1996060862 | In the context of a myriad of mobile apps which collect personally identifiable information (PII) and a prospective market place of personal data, we investigate a user-centric monetary valuation of mobile PII. During a 6-week long user study in a living lab deployment with 60 participants, we collected their daily valuations of 4 categories of mobile PII (communication, e.g. phonecalls made/received, applications, e.g. time spent on different apps, location and media, e.g. photos taken) at three levels of complexity (individual data points, aggregated statistics and processed, i.e. meaningful interpretations of the data). In order to obtain honest valuations, we employ a reverse second price auction mechanism. Our findings show that the most sensitive and valued category of personal information is location. We report statistically significant associations between actual mobile usage, personal dispositions, and bidding behavior. Finally, we outline key implications for the design of mobile services and future markets of personal data. | Prior work has also emphasized the role of an individual's stable psychological attributes - personality traits - to explain information disclosure behavior. Korzaan @cite_5 explored the role of the Big5 personality traits @cite_10 and found that Agreeableness -- defined as being sympathetic, straightforward and selfless -- has a significant influence on individual concerns for information privacy. Junglas @cite_36 and Amichai-Hamburger and Vinitzky @cite_35 also used the Big5 personality traits and found that Agreeableness, Conscientiousness, and Openness affect a person's concerns for privacy. However, other studies targeting the influence of personality traits did not find significant correlations @cite_59 . More recently, Quercia @cite_33 found weak correlations between Openness to Experience and, to a lesser extent, Extraversion, and disclosure attitudes on Facebook. 
In 2010, Lo @cite_17 suggested that Locus of Control @cite_3 could affect an individual's perception of risk when disclosing personal information: internals are more likely than externals to feel that they can control the risk of becoming privacy victims, hence they are more willing to disclose their personal information @cite_23 . | {
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_36",
"@cite_3",
"@cite_59",
"@cite_23",
"@cite_5",
"@cite_10",
"@cite_17"
],
"mid": [
"2009505424",
"2163561184",
"2080575891",
"2004845716",
"2130164714",
"2003020611",
"2157539922",
"2480710602",
"63995487"
],
"abstract": [
"Studies have shown a connection between the individual personality of the user and the way he or she behaves on line. Today many millions of people around the world are connected by being members of various Internet social networks. (2009) studied the connection between the personality of the individual users and their behavior on a social network. They based their study on the self-reports of users of Facebook, one of the most popular social networks, and measured five personality factors using the NEO-PI-R (Costa & McCrae, 1992) questionnaire. They found that while there was a connection between the personalities of surfers and their behavior on Facebook, it was not strong. This study is based on that of (2009), but in our study the self-reports of subjects, were replaced by more objective criteria, measurements of the user-information upload on Facebook. A strong connection was found between personality and Facebook behavior. Implications of the results are discussed.",
"We study the relationship between Facebook popularity (number of contacts) and personality traits on a large number of subjects. We test to which extent two prevalent viewpoints hold. That is, popular users (those with many social contacts) are the ones whose personality traits either predict many offline (real world) friends or predict propensity to maintain superficial relationships. We find that the predictor for number of friends in the real world (Extraversion) is also a predictor for number of Facebook contacts. We then test whether people who have many social contacts on Facebook are the ones who are able to adapt themselves to new forms of communication, present themselves in likable ways, and have propensity to maintain superficial relationships. We show that there is no statistical evidence to support such a conjecture.",
"For more than a century, concern for privacy (CFP) has co-evolved with advances in information technology. The CFP refers to the anxious sense of interest that a person has because of various types of threats to the person's state of being free from intrusion. Research studies have validated this concept and identified its consequences. For example, research has shown that the CFP can have a negative influence on the adoption of information technology; but little is known about factors likely to influence such concern. This paper attempts to fill that gap. Because privacy is said to be a part of a more general ‘right to one's personality’, we consider the so-called ‘Big Five’ personality traits (agreeableness, extraversion, emotional stability, openness to experience, and conscientiousness) as factors that can influence privacy concerns. Protection motivation theory helps us to explain this influence in the context of an emerging pervasive technology: location-based services. Using a survey-based approach, we find that agreeableness, conscientiousness, and openness to experience each affect the CFP. These results have implications for the adoption, the design, and the marketing of highly personalized new technologies.",
"",
"Online communities of different types have become an important part of the daily internet life of many people within the last couple of years. Both research and business have shown interest in studying the possibilities and risks related to these relatively new phenomena. Frequently discussed aspects that are tightly bound to online communities are their implications and effects on privacy issues. Available literature has shown that users generally disclose very much (private) information on such communities, and different factors influencing this behaviour were identified and studied. However, the influence and predictive power of personality traits on information disclosure in online communities has not yet been the subject of analysis. In this paper we report the results of an online survey investigating the relations between personality traits (based on the Five-Factor Model), usage patterns and information disclosure of participants in different types of online communities.",
"Despite concerns raised about the disclosure of personal information on social network sites, research has demonstrated that users continue to disclose personal information. The present study employs surveys and interviews to examine the factors that influence university students to disclose personal information on Facebook. Moreover, we study the strategies students have developed to protect themselves against privacy threats. The results show that personal network size was positively associated with information revelation, no association was found between concern about unwanted audiences and information revelation and finally, students' Internet privacy concerns and information revelation were negatively associated. The privacy protection strategies employed most often were the exclusion of personal information, the use of private email messages, and altering the default privacy settings. Based on our findings, we propose a model of information revelation and draw conclusions for theories of identity expression.",
"Privacy is the most often-cited criticism of ubiquitous computing, and may be the greatest barrier to its long-term success. However, developers currently have little support in designing software architectures and in creating interactions that are effective in helping end-users manage their privacy. To address this problem, we present Confab, a toolkit for facilitating the development of privacy-sensitive ubiquitous computing applications. The requirements for Confab were gathered through an analysis of privacy needs for both end-users and application developers. Confab provides basic support for building ubiquitous computing applications, providing a framework as well as several customizable privacy mechanisms. Confab also comes with extensions for managing location privacy. Combined, these features allow application developers and end-users to support a spectrum of trust levels and privacy needs.",
"",
"The social networking site (SNS) acts as a gateway through which online networking connections are made possible. Therefore, a user must be willing to provide his or her information to the SNS in order for others to find and “befriend” him or her and vice versa. Results from an online survey were used to test a trust-risk model of information disclosure in which two dispositional factors (Internet privacy concern and locus of control) and one situational factor (salience of SNS in daily life) were hypothesized to influence perceived risk regarding SNSs in general and trust in a SNS in particular. All proposed hypotheses were found significant, suggesting that the dispositional and situational factors are potentially salient in the SNS context. Findings also suggest that perhaps providing completed categories of personal information may be more sensitive than individual pieces of information alone."
]
} |
1407.0566 | 1996060862 | In the context of a myriad of mobile apps which collect personally identifiable information (PII) and a prospective market place of personal data, we investigate a user-centric monetary valuation of mobile PII. During a 6-week long user study in a living lab deployment with 60 participants, we collected their daily valuations of 4 categories of mobile PII (communication, e.g. phonecalls made received, applications, e.g. time spent on different apps, location and media, e.g. photos taken) at three levels of complexity (individual data points, aggregated statistics and processed, i.e. meaningful interpretations of the data). In order to obtain honest valuations, we employ a reverse second price auction mechanism. Our findings show that the most sensitive and valued category of personal information is location. We report statistically significant associations between actual mobile usage, personal dispositions, and bidding behavior. Finally, we outline key implications for the design of mobile services and future markets of personal data. | Individual differences are also found when providing economic valuations of personal data @cite_21 @cite_25 . For instance, some individuals may not be concerned about privacy and would allow access to their data in exchange for a few cents, whereas others may only consent if well paid. Recently, Aperjis and Huberman @cite_52 proposed to introduce a realistic market for personal data that pays individuals for their data while taking into account their own privacy and risk attitudes. | {
"cite_N": [
"@cite_52",
"@cite_21",
"@cite_25"
],
"mid": [
"2110870121",
"1967317786",
""
],
"abstract": [
"Since there is, in principle, no reason why third parties should not pay individuals for the use of their data, we introduce a realistic market that would allow these payments to be made while taking into account the privacy attitude of the participants. And since it is usually important to use unbiased samples to obtain credible statistical results, we examine the properties that such a market should have and suggest a mechanism that compensates those individuals that participate according to their risk attitudes. Equally important, we show that this mechanism also benefits buyers, as they pay less for the data than they would if they compensated all individuals with the same maximum fee that the most concerned ones expect.",
"AbstractUnderstanding the value that individuals assign to the protection of their personal data is of great importance for business, law, and public policy. We use a field experiment informed by behavioral economics and decision research to investigate individual privacy valuations and find evidence of endowment and order effects. Individuals assigned markedly different values to the privacy of their data depending on (1) whether they were asked to consider how much money they would accept to disclose otherwise private information or how much they would pay to protect otherwise public information and (2) the order in which they considered different offers for their data. The gap between such values is large compared with that observed in comparable studies of consumer goods. The results highlight the sensitivity of privacy valuations to contextual, nonnormative factors.",
""
]
} |
1407.0566 | 1996060862 | In the context of a myriad of mobile apps which collect personally identifiable information (PII) and a prospective market place of personal data, we investigate a user-centric monetary valuation of mobile PII. During a 6-week long user study in a living lab deployment with 60 participants, we collected their daily valuations of 4 categories of mobile PII (communication, e.g. phonecalls made received, applications, e.g. time spent on different apps, location and media, e.g. photos taken) at three levels of complexity (individual data points, aggregated statistics and processed, i.e. meaningful interpretations of the data). In order to obtain honest valuations, we employ a reverse second price auction mechanism. Our findings show that the most sensitive and valued category of personal information is location. We report statistically significant associations between actual mobile usage, personal dispositions, and bidding behavior. Finally, we outline key implications for the design of mobile services and future markets of personal data. | Previous research has shown that disclosure @cite_44 and valuation @cite_45 @cite_29 depend on the kind of information to be released. Huberman @cite_29 reported that the valuation of some types of personal information, such as the subject's weight and the subject's age, depends on the desirability of these types of information in a social context. Some empirical studies have attempted to quantify subjective privacy valuations of personal information in different contexts, such as personal information revealed online @cite_58 , access to location data @cite_9 , or removal from marketers' call lists @cite_30 . These studies can be classified into two groups. The first and larger group includes studies that explicitly or implicitly measure the amount of money or benefit that a person considers to be enough to share her/his personal data, namely their willingness to accept (WTA) for giving away his/her own data (see for example @cite_9 @cite_11 ). The second and smaller group includes studies about tangible prices or intangible costs consumers are willing to pay (WTP) to protect their privacy (see for example, @cite_46 @cite_13 ). In our paper, we do not deal with WTA vs WTP, but we focus on WTA for PII captured by mobile phones (communications, apps and media usage, locations). | {
"cite_N": [
"@cite_30",
"@cite_9",
"@cite_29",
"@cite_44",
"@cite_45",
"@cite_46",
"@cite_58",
"@cite_13",
"@cite_11"
],
"mid": [
"2167317356",
"2155196221",
"",
"1986366638",
"1571429834",
"2097427281",
"2119486295",
"",
""
],
"abstract": [
"Data from do-not-call registries and other sources show discernible patterns in the demographics of consumers who signed up for do-not-call lists. Such patterns might also be useful in analyzing the prospects for a do-not-spam registry",
"This paper introduces results of a study into the value of location privacy for individuals using mobile devices. We questioned a sample of over 1200 people from five EU countries, and used tools from experimental psychology and economics to extract from them the value they attach to their location data. We compare this value across national groups, gender and technical awareness, but also the perceived difference between academic use and commercial exploitation. We provide some analysis of the self-selection bias of such a study, and look further at the valuation of location data over time using data from another experiment.",
"",
"In studies of people's privacy behavior, the extent of disclosure of personal information is typically measured as a summed total or a ratio of disclosure. In this paper, we evaluate three information disclosure datasets using a six-step statistical analysis, and show that people's disclosure behaviors are rather multidimensional: participants' disclosure of personal information breaks down into a number of distinct factors. Moreover, people can be classified along these dimensions into groups with different ''disclosure styles''. This difference is not merely in degree, but rather also in kind: one group may for instance disclose location-related but not interest-related items, whereas another group may behave exactly the other way around. We also found other significant differences between these groups, in terms of privacy attitudes, behaviors, and demographic characteristics. These might for instance allow an online system to classify its users into their respective privacy group, and to adapt its privacy practices to the disclosure style of this group. We discuss how our results provide relevant insights for a more user-centric approach to privacy and, more generally, advance our understanding of online privacy behavior.",
"We use techniques from experimental economics and psychology to determine how much compensation must be offered to persuade someone to allow precise information about their location to be collected. We pretend that we are running a study that needs volunteers to have their location monitored (via their mobile phone) over a period of one month. Volunteers apply by specifying the amount of compensation which they would require to participate in the experiment. The experimental subjects are led to believe that we will run a sealed-bid second-price auction on these values, and thus we obtain an estimate of the value that users attach to their location data being used by third parties.",
"Traditional theory suggests consumers should be able to manage their privacy. Yet, empirical and theoretical research suggests that consumers often lack enough information to make privacy-sensitive decisions and, even with sufficient information, are likely to trade off long-term privacy for short-term benefits",
"The advent of the Internet has made the transmission of personally identifiable information more common and often unintended by the user. As personal information becomes more accessible, individuals worry that businesses misuse the information that is collected while they are online. Organizations have tried to mitigate this concern in two ways: (1) by offering privacy policies regarding the handling and use of personal information and (2) by offering benefits such as financial gains or convenience. In this paper, we interpret these actions in the context of the information-processing theory of motivation. Information-processing theories, also known as expectancy theories in the context of motivated behavior, are built on the premise that people process information about behavior-outcome relationships. By doing so, they are forming expectations and making decisions about what behavior to choose. Using an experimental setting, we empirically validate predictions that the means to mitigate privacy concerns are associated with positive valences resulting in an increase in motivational score. In a conjoint analysis exercise, 268 participants from the United States and Singapore face trade-off situations, where an organization may only offer incomplete privacy protection or some benefits. While privacy protections (against secondary use, improper access, and error) are associated with positive valences, we also find that financial gains and convenience can significantly increase individuals' motivational score of registering with a Web site. We find that benefits (monetary reward and future convenience) significantly affect individuals' preferences over Web sites with differing privacy policies. We also quantify the value of Web site privacy protection. Among U.S. subjects, protection against errors, improper access, and secondary use of personal information is worth @math 44.62. Finally, our approach also allows us to identify three distinct segments of Internet users: privacy guardians, information sellers, and convenience seekers.",
"",
""
]
} |
1407.0566 | 1996060862 | In the context of a myriad of mobile apps which collect personally identifiable information (PII) and a prospective market place of personal data, we investigate a user-centric monetary valuation of mobile PII. During a 6-week long user study in a living lab deployment with 60 participants, we collected their daily valuations of 4 categories of mobile PII (communication, e.g. phonecalls made received, applications, e.g. time spent on different apps, location and media, e.g. photos taken) at three levels of complexity (individual data points, aggregated statistics and processed, i.e. meaningful interpretations of the data). In order to obtain honest valuations, we employ a reverse second price auction mechanism. Our findings show that the most sensitive and valued category of personal information is location. We report statistically significant associations between actual mobile usage, personal dispositions, and bidding behavior. Finally, we outline key implications for the design of mobile services and future markets of personal data. | Building upon previous work, in this paper we investigate the monetary value that people assign to different kinds of PII as collected by their mobile phone, including location and communication patterns. In particular, we carry out a comprehensive 6-week long study in a living lab environment with 60 participants and adopt a Day Reconstruction Method @cite_50 and a reverse second price auction mechanism in order to poll and collect honest monetary valuations from our sample. | {
"cite_N": [
"@cite_50"
],
"mid": [
"1993933064"
],
"abstract": [
"The Day Reconstruction Method (DRM) assesses how people spend their time and how they experience the various activities and settings of their lives, combining features of time-budget measurement and experience sampling. Participants systematically reconstruct their activities and experiences of the preceding day with procedures designed to reduce recall biases. The DRM's utility is shown by documenting close correspondences between the DRM reports of 909 employed women and established results from experience sampling. An analysis of the hedonic treadmill shows the DRM's potential for well-being research."
]
} |
1406.7751 | 1990213891 | Online socio-technical systems can be studied as proxy of the real world to investigate human behavior and social interactions at scale. Here we focus on Instagram, a media-sharing online platform whose popularity has been rising up to gathering hundred millions users. Instagram exhibits a mixture of features including social structure, social tagging and media sharing. The network of social interactions among users models various dynamics including follower followee relations and users' communication by means of posts comments. Users can upload and tag media such as photos and pictures, and they can "like" and comment each piece of information on the platform. In this work we investigate three major aspects on our Instagram dataset: (i) the structural characteristics of its network of heterogeneous interactions, to unveil the emergence of self organization and topically-induced community structure; (ii) the dynamics of content production and consumption, to understand how global trends and popular users emerge; (iii) the behavior of users labeling media with tags, to determine how they devote their attention and to explore the variety of their topical interests. Our analysis provides clues to understand human behavior dynamics on socio-technical systems, specifically users and content popularity, the mechanisms of users' interactions in online environments and how collective trends emerge from individuals' topical interests. | In recent literature, social media and online communities have been used as proxy to study human communication and behavior at scale in different scenarios, including social protests or mobilizations, social influence and political interests and much more @cite_38 @cite_41 @cite_27 . Other research highlighted how trends emerge and diffuse in socio-technical systems, and how individuals' interacting in such environments devote their attention @cite_39 @cite_34 . | {
"cite_N": [
"@cite_38",
"@cite_41",
"@cite_39",
"@cite_27",
"@cite_34"
],
"mid": [
"2033198212",
"2028993035",
"2127492100",
"2165066692",
"2118110552"
],
"abstract": [
"Online social networks are everywhere. They must be influencing the way society is developing, but hard evidence is scarce. For instance, the relative effectiveness of online friendships and face-to-face friendships as drivers of social change is not known. In what may be the largest experiment ever conducted with human subjects, James Fowler and colleagues randomly assigned messages to 61 million Facebook users on Election Day in the United States in 2010, and tracked their behaviour both online and offline, using publicly available records. The results show that the messages influenced the political communication, information-seeking and voting behaviour of millions of people. Social messages had more impact than informational messages and 'weak ties' were much less likely than 'strong ties' to spread behaviour via the social network. Thus online mobilization works primarily through strong-tie networks that may exist offline but have an online representation.",
"Social movements rely in large measure on networked communication technologies to organize and disseminate information relating to the movements’ objectives. In this work we seek to understand how the goals and needs of a protest movement are reflected in the geographic patterns of its communication network, and how these patterns differ from those of stable political communication. To this end, we examine an online communication network reconstructed from over 600,000 tweets from a thirty-six week period covering the birth and maturation of the American anticapitalist movement, Occupy Wall Street. We find that, compared to a network of stable domestic political communication, the Occupy Wall Street network exhibits higher levels of locality and a hub and spoke structure, in which the majority of non-local attention is allocated to high-profile locations such as New York, California, and Washington D.C. Moreover, we observe that information flows across state boundaries are more likely to contain framing language and references to the media, while communication among individuals in the same state is more likely to reference protest action and specific places and times. Tying these results to social movement theory, we propose that these features reflect the movement’s efforts to mobilize resources at the local level and to develop narrative frames that reinforce collective purpose at the national level.",
"Tracking new topics, ideas, and \"memes\" across the Web has been an issue of considerable interest. Recent work has developed methods for tracking topic shifts over long time scales, as well as abrupt spikes in the appearance of particular named entities. However, these approaches are less well suited to the identification of content that spreads widely and then fades over time scales on the order of days - the time scale at which we perceive news and events. We develop a framework for tracking short, distinctive phrases that travel relatively intact through on-line text; developing scalable algorithms for clustering textual variants of such phrases, we identify a broad class of memes that exhibit wide spread and rich variation on a daily basis. As our principal domain of study, we show how such a meme-tracking approach can provide a coherent representation of the news cycle - the daily rhythms in the news media that have long been the subject of qualitative interpretation but have never been captured accurately enough to permit actual quantitative analysis. We tracked 1.6 million mainstream media sites and blogs over a period of three months with the total of 90 million articles and we find a set of novel and persistent temporal patterns in the news cycle. In particular, we observe a typical lag of 2.5 hours between the peaks of attention to a phrase in the news media and in blogs respectively, with divergent behavior around the overall peak and a \"heartbeat\"-like pattern in the handoff between news and blogs. We also develop and analyze a mathematical model for the kinds of temporal variation that the system exhibits.",
"We examine the temporal evolution of digital communication activity relating to the American anti-capitalist movement Occupy Wall Street. Using a high-volume sample from the microblogging site Twitter, we investigate changes in Occupy participant engagement, interests, and social connectivity over a fifteen month period starting three months prior to the movement's first protest action. The results of this analysis indicate that, on Twitter, the Occupy movement tended to elicit participation from a set of highly interconnected users with pre-existing interests in domestic politics and foreign social movements. These users, while highly vocal in the months immediately following the birth of the movement, appear to have lost interest in Occupy related communication over the remainder of the study period.",
"The identification of popular and important topics discussed in social networks is crucial for a better understanding of societal concerns. It is also useful for users to stay on top of trends without having to sift through vast amounts of shared information. Trend detection methods introduced so far have not used the network topology and have thus not been able to distinguish viral topics from topics that are diffused mostly through the news media. To address this gap, we propose two novel structural trend definitions we call coordinated and uncoordinated trends that use friendship information to identify topics that are discussed among clustered and distributed users respectively. Our analyses and experiments show that structural trends are significantly different from traditional trends and provide new insights into the way people share information online. We also propose a sampling technique for structural trend detection and prove that the solution yields a gain in efficiency and is within an acceptable error bound. Experiments performed on a Twitter data set of 41.7 million nodes and 417 million posts show that even with a sampling rate of 0.005, the average precision is 0.93 for coordinated trends and 1 for uncoordinated trends."
]
} |
1406.7639 | 2104378250 | We aim to reduce the social cost of congestion in many smart city applications. In our model of congestion, agents interact over limited resources after receiving signals from a central agent that observes the state of congestion in real time. Under natural models of agent populations, we develop new signalling schemes and show that by introducing a non-trivial amount of uncertainty in the signals, we reduce the social cost of congestion, i.e., improve social welfare. The signalling schemes are efficient in terms of both communication and computation, and are consistent with past observations of the congestion. Moreover, the resulting population dynamics converge under reasonable assumptions. | Our random signals are reminiscent of perturbation schemes in repeated games, such as Follow-the-Perturbed-Leader @cite_3 , trembling-hand equilibrium @cite_4 , stochastic fictitious play @cite_12 , or the power of two choices @cite_1 . When the agents use such randomised algorithms in their decision-making, the resulting demand process has been shown to be behave rather well in theory (e.g., @cite_1 ), as well as in a number of applications, e.g., in parking @cite_0 , bike sharing @cite_6 , and charging electric vehicles @cite_21 . However, we propose a signalling and guidance scheme that combines randomization and intervals. | {
"cite_N": [
"@cite_4",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_12"
],
"mid": [
"1998191601",
"2022224792",
"2117702591",
"",
"2132276257",
"",
"2093074200"
],
"abstract": [
"The concept of a perfect equilibrium point has been introduced in order to exclude the possibility that disequilibrium behavior is prescribed on unreached subgames (Selten 1965 and 1973). Unfortunately this definition of perfectness does not remove all difficulties which may arise with respect to unreached parts of the game. It is necessary to reexamine the problem of defining a satisfactory non-cooperative equilibrium concept for games in extensive form. Therefore a new concept of a perfect equilibrium point will be introduced in this paper. In retrospect the earlier use of the word \"perfect\" was premature. Therefore a perfect equilibrium point in the old sense will be called \"subgame perfect\". The new definition of perfectness has the property that a perfect equilibrium point is always subgame perfect but a subgame perfect equilibrium point may not be perfect. It will be shown that every finite extensive game with perfect recall has at least one perfect equilibrium point. Since subgame perfectness cannot be detected in the normal form, it is clear that for the purpose of the investigation of the problem of perfectness, the normal form is an inadequate representation of the extensive form. It will be convenient to introduce an \"agent normal form\" as a more adequate representation of games with perfect recall.",
"We present a new approach to regulate traffic-related pollution in urban environments by utilizing hybrid vehicles. To do this, we orchestrate the way that each vehicle in a large fleet combines its two engines based on simple communication signals from a central infrastructure. Our approach can be viewed both as a control algorithm and as an optimization algorithm. The primary goal is to regulate emissions, and we discuss a number of control strategies to achieve this goal. Second, we want to allocate the available pollution budget in a fair way among the participating vehicles; again, we explore several different notions of fairness that can be achieved. The efficacy of our approach is exemplified both by the construction of a proof-of-concept vehicle and by extensive simulations, and is verified by mathematical analysis.",
"We consider the following natural model: customers arrive as a Poisson stream of rate λn, λ < 1, at a collection of n servers. Each customer chooses some constant d servers independently and uniformly at random from the n servers and waits for service at the one with the fewest customers. Customers are served according to the first-in first-out (FIFO) protocol and the service time for a customer is exponentially distributed with mean 1. We call this problem the supermarket model. We wish to know how the system behaves and in particular we are interested in the effect that the parameter d has on the expected time a customer spends in the system in equilibrium. Our approach uses a limiting, deterministic model representing the behavior as n → ∞ to approximate the behavior of finite systems. The analysis of the deterministic model is interesting in its own right. Along with a theoretical justification of this approach, we provide simulations that demonstrate that the method accurately predicts system behavior, even for relatively small systems. Our analysis provides surprising implications. Having d=2 choices leads to exponential improvements in the expected time a customer spends in the system over d=1, whereas having d=3 choices is only a constant factor better than d=2. We discuss the possible implications for system design.",
"",
"We study the effect of customer choices in bicycle-sharing systems based on bicycle availability predictions. We show that such systems may lead to flapping behavior between bicycle stations. The consequences of flapping instability include poor user experience and suboptimal usage of the available bicycle stock. We propose a simple assignment strategy aimed at eliminating flapping and balancing demand at each station based on actual availability.",
"",
"Equilibrium points in mixed strategies seem to be unstable, because any player can deviate without penalty from his equilibrium strategy even if he expects all other players to stick to theirs. This paper proposes a model under which most mixed-strategy equilibrium points have full stability. It is argued that for any game Γ the players' uncertainty about the other players' exact payoffs can be modeled as a disturbed game Γ*, i.e., as a game with small random fluctuations in the payoffs. Any equilibrium point in Γ, whether it is in pure or in mixed strategies, can “almost always” be obtained as a limit of a pure-strategy equilibrium point in the corresponding disturbed game Γ* when all disturbances go to zero. Accordingly, mixed-strategy equilibrium points are stable — even though the players may make no deliberate effort to use their pure strategies with the probability weights prescribed by their mixed equilibrium strategies — because the random fluctuations in their payoffs will make them use their pure strategies approximately with the prescribed probabilities."
]
} |
1406.7639 | 2104378250 | We aim to reduce the social cost of congestion in many smart city applications. In our model of congestion, agents interact over limited resources after receiving signals from a central agent that observes the state of congestion in real time. Under natural models of agent populations, we develop new signalling schemes and show that by introducing a non-trivial amount of uncertainty in the signals, we reduce the social cost of congestion, i.e., improve social welfare. The signalling schemes are efficient in terms of both communication and computation, and are consistent with past observations of the congestion. Moreover, the resulting population dynamics converge under reasonable assumptions. | In the transportation literature, @cite_17 introduces the notion of equilibrium as the limit of the congestion distribution if it exists. @cite_5 considers a number of notions of noisy signals and studies greedy policies and equilibria. Our approach is also related to signalling of parking space availability @cite_2 . | {
"cite_N": [
"@cite_5",
"@cite_2",
"@cite_17"
],
"mid": [
"2032711141",
"1976843608",
"1981267404"
],
"abstract": [
"Most research and applications of network equilibrium models are based on the assumption that traffic volumes on roadways are virtually certain to be at or near their equilibrium values if the equilibrium volumes exist and are unique. However, it has long been known that this assumption can be violated in deterministic models. This paper presents an investigation of the stability of stochastic equilibrium in a two-link network. The stability of deterministic equilibrium also is discussed briefly. Equilibrium is defined to be stable if it is unique and the link volumes converge over time to their equilibrium values regardless of the initial conditions. Three models of route choice decision-making over time are formulated, and the stability of equilibrium is investigated for each. It is shown that even when equilibrium is unique, link volumes may converge to their equilibrium values, oscillate about equilibrium perpetually, or converge to values that may be considerably different from the equilibrium ones, depending on the details of the route choice decision-making process. Moreover, even when convergence of link volumes to equilibrium is assured, the convergence may be too slow to justify the standard assumption that these volumes are usually at or near their equilibrium values. When link volumes converge to non-equilibrium values, the levels at which the volumes stabilize typically depend on the initial link volumes or perceptions of travel costs. Conditions sufficient to assure convergence to equilibrium in two of the three models of route choice decision-making are presented, and these conditions are interpreted in terms of the route choice decision-making process.",
"This paper introduces and illustrates some novel stochastic policies that assign parking spaces to cars looking for an available parking space. We analyze in detail both the main features of a single park, i.e., how a car could conveniently decide whether to try its luck at that parking lot or try elsewhere, and the case when more parking lots are available, and how to choose the best one. We discuss the practical requirements of the proposed strategies in terms of infrastructure technology and vehicles' equipment and the mathematical properties of the proposed algorithms in terms of robustness against delays, stability, and reliability. Preliminary results obtained from simulations are also provided to illustrate the feasibility and the potential of our stochastic assignment policies.",
"The paper formulates link-flow definitions of equilibrium and stability, and gives conditions which guarantee the existence, uniqueness and stability of traffic equilibria. Junction delays in towns usually depend on the traffic flow along intersecting links; the theory presented here is designed to be applicable when there are such junction interactions."
]
} |
1406.7639 | 2104378250 | We aim to reduce the social cost of congestion in many smart city applications. In our model of congestion, agents interact over limited resources after receiving signals from a central agent that observes the state of congestion in real time. Under natural models of agent populations, we develop new signalling schemes and show that by introducing a non-trivial amount of uncertainty in the signals, we reduce the social cost of congestion, i.e., improve social welfare. The signalling schemes are efficient in terms of both communication and computation, and are consistent with past observations of the congestion. Moreover, the resulting population dynamics converge under reasonable assumptions. | Our interval signalling scheme is reminiscent of the equilibrium outcome of @cite_8 in the context of signalling games in economics (cf. @cite_20 for an up-to-date survey). However, we consider the problem of optimal signalling in a dynamic system, whereas @cite_8 considers Nash equilibrium signalling in single-shot games. For signalling games, @cite_9 shows that more information does not generally improve the equilibrium welfare of agents. Their notion of "information," due to @cite_7, is however very different from ours. | {
"cite_N": [
"@cite_9",
"@cite_7",
"@cite_20",
"@cite_8"
],
"mid": [
"2169727571",
"1666623353",
"2505817254",
"2160003140"
],
"abstract": [
"We consider a statistical decision problem faced by a two player organization whose members may not agree on outcome evaluations and prior probabilities. One player is specialized in gathering information and transmitting it to the other, who takes the decision. This process is modeled as a game. Qualitative properties of the equilibria are analyzed. The impact of improving the quality of available information on the equilibrium welfares of the two individuals is studied. Better information generally may not improve welfare. We give conditions under which it will.",
"1. Summary Bohnenblust, Shapley, and Sherman [2] have introduced a method of comparing two sampling procedures or experiments; essentially their concept is that one experiment a is more informative than a second experiment ,, a v ,S, if, for every possible risk function, any risk attainable with , is also attainable with a. If a is a sufficient statistic for a procedure equivalent to ,S, a >,, it is shown that a v j3. In the case of dichotomies, the converse is proved. Whether > and v are equivalent in general is not known. Various properties of > and n are obtained, such as the following: if a > , and y is independent of both, then the combination (a, -y) > (#, y). An application to a problem in 2 X 2 tables is discussed.",
"This paper reviews literature on communication between informed experts and uninformed decision makers. The research provides some insight into what constitutes a persuasive statement and under what conditions a decision maker will benefit from consulting an expert. I classify the literature along four dimensions: strategic, technological, institutional, and cultural. To the extent that decision makers and experts have different preferences, communication creates strategic problems. Technological considerations describe the domain of uncertainty, the cost of acquiring information, and the cost of manipulating information. The institution determines who has responsibility for making decisions and the rules that govern communication. Cultural factors describe the way in which agents interpret language.",
"This paper develops a model of strategic communication, in which a better-informed Sender (S) sends a possibly noisy signal to a Receiver (R), who then takes an action that determines the welfare of both. We characterize the set of Bayesian Nash equilibria under standard assumptions, and show that equilibrium signaling always takes a strikingly simple form, in which S partitions the support of the (scalar) variable that represents his private information and introduces noise into his signal by reporting, in effect, only which element of the partition his observation actually lies in. We show under further assumptions that before S observes his private information, the equilibrium whose partition has the greatest number of elements is Pareto-superior to all other equilibria, and that if agents coordinate on this equilibrium, R's equilibrium expected utility rises when agents' preferences become more similar. Since R bases his choice of action on rational expectations, this establishes a sense in which equilibrium signaling is more informative when agents' preferences are more similar."
]
} |
1406.7639 | 2104378250 | We aim to reduce the social cost of congestion in many smart city applications. In our model of congestion, agents interact over limited resources after receiving signals from a central agent that observes the state of congestion in real time. Under natural models of agent populations, we develop new signalling schemes and show that by introducing a non-trivial amount of uncertainty in the signals, we reduce the social cost of congestion, i.e., improve social welfare. The signalling schemes are efficient in terms of both communication and computation, and are consistent with past observations of the congestion. Moreover, the resulting population dynamics converge under reasonable assumptions. | Finally, let us note that there is also a related draft @cite_13 by the present authors, which explores the notion of @math -extreme interval signalling and optimisation over truthful interval signals. | {
"cite_N": [
"@cite_13"
],
"mid": [
"1898644197"
],
"abstract": [
"We consider a multi-agent system where, at every time instant, many users choose to use one of multiple resources, whose performance depends on the number of concurrent users. In addition, a central agent has up-to-date knowledge of the congestion across all resources. What signals should the central agent broadcast to the individual users in order to reduce the total congestion? We model users with varying levels of aversion to risk, and a heterogeneous population of users over these levels. We consider signaling schemes that communicate for each resource an interval of possible congestion values, instead of scalar values. We show how to optimise over these intervals with respect to the social cost, under constraints that these intervals be consistent with past observations. Moreover, under mild assumptions, the resulting distribution of users across resources converges."
]
} |
1406.6130 | 2953108712 | Mixability is a property of a loss which characterizes when fast convergence is possible in the game of prediction with expert advice. We show that a key property of mixability generalizes, and the exp and log operations present in the usual theory are not as special as one might have thought. In doing this we introduce a more general notion of @math -mixability where @math is a general entropy ( , any convex function on probabilities). We show how a property shared by the convex dual of any such entropy yields a natural algorithm (the minimizer of a regret bound) which, analogous to the classical aggregating algorithm, is guaranteed a constant regret when used with @math -mixable losses. We characterize precisely which @math have @math -mixable losses and put forward a number of conjectures about the optimality and relationships between different choices of entropy. | The starting point for mixability and the aggregating algorithm is the work of @cite_1 @cite_7 . The general setting of prediction with expert advice is summarized in [Chapters 2 and 3] Cesa-Bianchi:2006 . There one can find a range of results that study different aggregation schemes and different assumptions on the losses (exp-concave, mixable). Variants of the aggregating algorithm have been studied for classically mixable losses, with a trade-off between tightness of the bound (in a constant factor) and the computational complexity @cite_20 . Weakly mixable losses are a generalization of mixable losses. They have been studied in @cite_8 where it is shown there exists a variant of the aggregating algorithm that achieves regret @math for some constant @math . Vovk [in .2] Vovk:2001 makes the observation that his Aggregating Algorithm reduces to Bayesian mixtures in the case of the log loss game. See also the discussion in [page 330] Cesa-Bianchi:2006 relating certain aggregation schemes to Bayesian updating. | {
"cite_N": [
"@cite_20",
"@cite_1",
"@cite_7",
"@cite_8"
],
"mid": [
"1495813543",
"1506313179",
"",
"2059590802"
],
"abstract": [
"We consider algorithms for combining advice from a set of experts. In each trial, the algorithm receives the predictions of the experts and produces its own prediction. A loss function is applied to measure the discrepancy between the predictions and actual observations. The algorithm keeps a weight for each expert. At each trial the weights are first used to help produce the prediction and then updated according to the observed outcome. Our starting point is Vovk's Aggregating Algorithm, in which the weights have a simple form: the weight of an expert decreases exponentially as a function of the loss incurred by the expert. The prediction of the Aggregating Algorithm is typically a non-linear function of the weights and the experts' predictions. We analyze here a simplified algorithm in which the weights are as in the original Aggregating Algorithm, but the prediction is simply the weighted average of the experts' predictions. We show that for a large class of loss functions, even with the simplified prediction rule the additional loss of the algorithm over the loss of the best expert is at most c ln n, where n is the number of experts and c a constant that depends on the loss function. Thus, the bound is of the same form as the known bounds for the Aggregating Algorithm, although the constants here are not quite as good. We use relative entropy to rewrite the bounds in a stronger form and to motivate the update.",
"We consider on-line density estimation with a parameterized density from the exponential family. The on-line algorithm receives one example at a time and maintains a parameter that is essentially an average of the past examples. After receiving an example the algorithm incurs a loss, which is the negative log-likelihood of the example with respect to the current parameter of the algorithm. An off-line algorithm can choose the best parameter based on all the examples. We prove bounds on the additional total loss of the on-line algorithm over the total loss of the best off-line parameter. These relative loss bounds hold for an arbitrary sequence of examples. The goal is to design algorithms with the best possible relative loss bounds. We use a Bregman divergence to derive and analyze each algorithm. These divergences are relative entropies between two exponential distributions. We also use our methods to prove relative loss bounds for linear regression.",
"",
"This paper resolves the problem of predicting as well as the best expert up to an additive term of the order o(n), where n is the length of a sequence of letters from a finite alphabet. We call the games that permit this weakly mixable and give a geometrical characterisation of the class of weakly mixable games. Weak mixability turns out to be equivalent to convexity of the finite part of the set of superpredictions. For bounded games we introduce the Weak Aggregating Algorithm that allows us to obtain additive terms of the form Cn."
]
} |
1406.6130 | 2953108712 | Mixability is a property of a loss which characterizes when fast convergence is possible in the game of prediction with expert advice. We show that a key property of mixability generalizes, and the exp and log operations present in the usual theory are not as special as one might have thought. In doing this we introduce a more general notion of @math -mixability where @math is a general entropy ( , any convex function on probabilities). We show how a property shared by the convex dual of any such entropy yields a natural algorithm (the minimizer of a regret bound) which, analogous to the classical aggregating algorithm, is guaranteed a constant regret when used with @math -mixable losses. We characterize precisely which @math have @math -mixable losses and put forward a number of conjectures about the optimality and relationships between different choices of entropy. | The general form of updating we propose is similar to that considered by Kivinen and Warmuth @cite_4 who consider finding a vector @math minimizing ( d(w,s) + L(y_t, w x_t) ) where @math is some starting vector, @math is the instance label observation at round @math and @math is a loss. The key difference between their formulation and ours is that our loss term is (in their notation) @math -- , the linear combination of the losses of the @math at @math and not the loss of their inner product. Online methods of density estimation for exponential families are discussed in [ ] Azoury:2001 where the authors compare the online and offline updates of the same sequence and make heavy use of the relationship between the KL divergence between members of an exponential family and an associated Bregman divergence between the parameters of those members. The analysis of mirror descent @cite_6 shows that it achieves constant regret when the entropic regularizer is used. However, there is no consideration regarding whether similar results extend to other entropies defined on the simplex. | {
"cite_N": [
"@cite_4",
"@cite_6"
],
"mid": [
"2069317438",
"2016384870"
],
"abstract": [
"We consider two algorithm for on-line prediction based on a linear model. The algorithms are the well-known Gradient Descent (GD) algorithm and a new algorithm, which we call EG(+ -). They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG(+ -) algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG(+ -) and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG(+ -) has a much smaller loss if only a few components of the input are relevant for the predictions. We have performed experiments, which show that our worst-case upper bounds are quite tight already on simple artificial data.",
"The mirror descent algorithm (MDA) was introduced by Nemirovsky and Yudin for solving convex optimization problems. This method exhibits an efficiency estimate that is mildly dependent in the decision variables dimension, and thus suitable for solving very large scale optimization problems. We present a new derivation and analysis of this algorithm. We show that the MDA can be viewed as a nonlinear projected-subgradient type method, derived from using a general distance-like function instead of the usual Euclidean squared distance. Within this interpretation, we derive in a simple way convergence and efficiency estimates. We then propose an Entropic mirror descent algorithm for convex minimization over the unit simplex, with a global efficiency estimate proven to be mildly dependent in the dimension of the problem."
]
} |
1406.5949 | 2952894106 | In wireless networks relay nodes can be used to assist the users' transmissions to reach their destination. Work on relay cooperation, from a physical layer perspective, has up to now yielded well-known results. This paper takes a different stance focusing on network-level cooperation. Extending previous results for a single relay, we investigate here the benefits from the deployment of a second one. We assume that the two relays do not generate packets of their own and the system employs random access to the medium; we further consider slotted time and that the users have saturated queues. We obtain analytical expressions for the arrival and service rates of the queues of the two relays and the stability conditions. We investigate a model of the system, in which the users are divided into clusters, each being served by one relay, and show its advantages in terms of aggregate and throughput per user. We quantify the above, analytically for the case of the collision channel and through simulations for the case of Multi-Packet Reception (MPR), and we provide insight on when the deployment of a second relay in the system can yield significant advantages. | The notion of cooperative communications was introduced by information theory with the relay channel. The relay channel is the basic building block for the implementation of cooperative communications, which are widely acknowledged to provide higher communication rates and reliability in a wireless network with time varying channels @cite_29 . It was initially proposed by van der Meulen @cite_3 , and its first information-theoretic characterizations were presented in @cite_21 . | {
"cite_N": [
"@cite_29",
"@cite_21",
"@cite_3"
],
"mid": [
"2152121970",
"2167447263",
"1971536606"
],
"abstract": [
"We develop and analyze low-complexity cooperative diversity protocols that combat fading induced by multipath propagation in wireless networks. The underlying techniques exploit space diversity available through cooperating terminals' relaying signals for one another. We outline several strategies employed by the cooperating radios, including fixed relaying schemes such as amplify-and-forward and decode-and-forward, selection relaying schemes that adapt based upon channel measurements between the cooperating terminals, and incremental relaying schemes that adapt based upon limited feedback from the destination terminal. We develop performance characterizations in terms of outage events and associated outage probabilities, which measure robustness of the transmissions to fading, focusing on the high signal-to-noise ratio (SNR) regime. Except for fixed decode-and-forward, all of our cooperative diversity protocols are efficient in the sense that they achieve full diversity (i.e., second-order diversity in the case of two terminals), and, moreover, are close to optimum (within 1.5 dB) in certain regimes. Thus, using distributed antennas, we can provide the powerful benefits of space diversity without need for physical arrays, though at a loss of spectral efficiency due to half-duplex operation and possibly at the cost of additional receive hardware. Applicable to any wireless setting, including cellular or ad hoc networks-wherever space constraints preclude the use of physical arrays-the performance characterizations reveal that large power or energy savings result from the use of these protocols.",
"A relay channel consists of an input x_ l , a relay output y_ 1 , a channel output y , and a relay sender x_ 2 (whose transmission is allowed to depend on the past symbols y_ 1 . The dependence of the received symbols upon the inputs is given by p(y,y_ 1 |x_ 1 ,x_ 2 ) . The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1)If y is a degraded form of y_ 1 , then C : = : !_ p(x_ 1 ,x_ 2 ) , I(X_ 1 ,X_ 2 ;Y), I(X_ 1 ; Y_ 1 |X_ 2 ) . 2)If y_ 1 is a degraded form of y , then C : = : !_ p(x_ 1 ) x_ 2 I(X_ 1 ;Y|x_ 2 ) . 3)If p(y,y_ 1 |x_ 1 ,x_ 2 ) is an arbitrary relay channel with feedback from (y,y_ 1 ) to both x_ 1 x_ 2 , then C : = : p(x_ 1 ,x_ 2 ) , I(X_ 1 ,X_ 2 ;Y),I ,(X_ 1 ;Y,Y_ 1 |X_ 2 ) . 4)For a general relay channel, C : : p(x_ 1 ,x_ 2 ) , I ,(X_ 1 , X_ 2 ;Y),I(X_ 1 ;Y,Y_ 1 |X_ 2 ) . Superposition block Markov encoding is used to show achievability of C , and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established.",
"Summary The problem of transmitting information in a specified direction over a communication channel with three terminals is considered. Examples are given of the various ways of sending information. Basic inequalities for average mutual information rates are obtained. A coding theorem and weak converse are proved and a necessary and sufficient condition for a positive capacity is derived. Upper and lower bounds on the capacity are obtained, which coincide for channels with symmetric structure."
]
} |
1406.5949 | 2952894106 | In wireless networks relay nodes can be used to assist the users' transmissions to reach their destination. Work on relay cooperation, from a physical layer perspective, has up to now yielded well-known results. This paper takes a different stance focusing on network-level cooperation. Extending previous results for a single relay, we investigate here the benefits from the deployment of a second one. We assume that the two relays do not generate packets of their own and the system employs random access to the medium; we further consider slotted time and that the users have saturated queues. We obtain analytical expressions for the arrival and service rates of the queues of the two relays and the stability conditions. We investigate a model of the system, in which the users are divided into clusters, each being served by one relay, and show its advantages in terms of aggregate and throughput per user. We quantify the above, analytically for the case of the collision channel and through simulations for the case of Multi-Packet Reception (MPR), and we provide insight on when the deployment of a second relay in the system can yield significant advantages. | Recently, the study of the relay channel has gained significant interest in the wireless communications community. In @cite_35, for the classic relay channel, a protocol is presented that selects reception and transmission time slots adaptively, based on the quality of the involved links. Considering full-duplex and half-duplex relaying, @cite_23 shows that if the numbers of antennas at source and destination are equal to or larger than the number of antennas at the relay, half-duplex relaying can achieve in some cases higher throughput than ideal full-duplex relaying. With beamforming and taking inter-relay interference into account, @cite_12 proposes two buffer-aided relay selection schemes. Interference cancellation is employed in @cite_24 to allow opportunistic relay selection maximising the average capacity of the network. For a practical system, OFDMA-based cellular resource allocation schemes are proposed in @cite_31 for multiple relay stations (RS) with adaptive RS activation. | {
"cite_N": [
"@cite_35",
"@cite_24",
"@cite_23",
"@cite_31",
"@cite_12"
],
"mid": [
"2013931105",
"2046388897",
"1985276827",
"2161036183",
"1968199584"
],
"abstract": [
"We propose a buffer-aided relaying protocol for a three node relay network comprised of a source, a half-duplex relay with buffer, and a destination. We assume a direct source-destination link is available and all links undergo fading. The proposed protocol enables the half-duplex relay to choose its reception and transmission time slots adaptively and based on the quality of the involved links. We derive the achievable ergodic rate of the considered three-node network for the proposed protocol. Our results show that this achievable ergodic rate exceeds existing unachievable ergodic capacity upper bounds for the three-node half-duplex relay channel with the relay always alternating between reception and transmission in successive time slots.",
"In this paper we consider a simple cooperative network consisting of a source, a destination and a cluster of decode-and-forward relays characterized by the half-duplex constraint. At each time-slot the source and (possibly) one of the relays transmit a packet to another relay and the destination, respectively. When the source and a relay transmit simultaneously, inter-relay interference is introduced at the receiving relay. In this work, with the aid of buffers at the relays, we mitigate the detrimental effect of inter-relay interference through either interference cancellation or mitigation. More specifically, we propose the min-power opportunistic relaying protocol that minimizes the total energy expenditure per time slot under an inter-relay interference cancellation scheme. The min-power relay-pair selection scheme, apart from minimizing the energy expenditure, also provides better throughput and lower outage probability than existing works in the literature. The performance of the proposed scheme is demonstrated via illustrative examples and simulations in terms of outage probability and average throughput.",
"We consider a three-node relay network comprised of a source, a relay, and a destination, and compare the throughputs of ideal full-duplex (FD) and half-duplex (HD) relaying. For FD relaying, we assume that the relay uses half of its antennas for reception and the other half for transmission. In contrast, for HD relaying, the relay uses all of its antennas for either reception or transmission and a buffer for data storage. Our results show that if the numbers of antennas at source and destination are equal to or larger than the number of antennas at the relay, HD relaying may achieve a similar, and in some cases even higher, throughput compared to ideal FD relaying.",
"Radio resource allocation for downlink transmissions in a cellular system based on orthogonal frequency-division multiple access (OFDMA) has been the subject of many research studies over the past few years. Nowadays, increasing attention has turned to cellular systems with relays, but only a few algorithms have been designed for OFDMA-based relay systems. In this paper, resource-allocation schemes are proposed for the cases of one relay station (RS) and multiple RSs in the cell. Due to the specific design of these algorithms, which operate in an RS-aided base station centralized manner, the amount of required channel-state information and algorithm complexity are minimized, making them suited for practical use. The idea of adaptive RS activation is introduced, where the frame structure is adapted depending on the active RS. Different numbers of relays are considered, and the corresponding algorithms are accordingly adapted. Simulation results show that our algorithms achieve a very good throughput outage tradeoff.",
"In this paper, we study virtual full-duplex (FD) buffer-aided relaying to recover the multiplexing loss of half-duplex (HD) relaying in a network with multiple buffer-aided relays, each of which has multiple antennas, through opportunistic relay selection and beamforming. The main idea of virtual FD buffer-aided relaying is that a source and a relay simultaneously transmit their own information to another relay and a destination, respectively. In this network, inter-relay interference (IRI) is a crucial problem which has to be resolved like self-interference in the FD relaying. In contrast to previous work that neglected the IRI, we propose two buffer-aided relay selection and beam-forming schemes taking the IRI into consideration. Numerical results show that our proposed relay selection scheme with zero-forcing beamforming (ZFB)-based IRI cancellation approaches the average end-to-end capacity of IRI-free upper bound as the numbers of relays and antennas increase."
]
} |
1406.6163 | 2951369600 | We developed a Functional object-oriented Parallel framework (FooPar) for high-level high-performance computing in Scala. Central to this framework are Distributed Memory Parallel Data structures (DPDs), i.e., collections of data distributed in a shared nothing system together with parallel operations on these data. In this paper, we first present FooPar's architecture and the idea of DPDs and group communications. Then, we show how DPDs can be implemented elegantly and efficiently in Scala based on the Traversable Builder pattern, unifying Functional and Object-Oriented Programming. We prove the correctness and safety of one communication algorithm and show how specification testing (via ScalaCheck) can be used to bridge the gap between proof and implementation. Furthermore, we show that the group communication operations of FooPar outperform those of the MPJ Express open source MPI-bindings for Java, both asymptotically and empirically. FooPar has already been shown to be capable of achieving close-to-optimal performance for dense matrix-matrix multiplication via JNI. In this article, we present results on a parallel implementation of the Floyd-Warshall algorithm in FooPar, achieving more than 94% efficiency compared to the serial version on a cluster using 100 cores for matrices of dimension 38000 x 38000. | While FooPar complements the parallel collections @cite_18 introduced in Scala 2.8, it is not meant as an extension. This is due to multiple reasons. First, the parallel collections use workload-splitting strategies leading to communication bottlenecks in distributed memory settings. Second, they employ an implicit master-slave paradigm unsuitable for massively distributed HPC. Third, the SPMD paradigm requires launching multiple copies of the process as opposed to branching internally into threads. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2104959526"
],
"abstract": [
"Most applications manipulate structured data. Modern languages and platforms provide collection frameworks with basic data structures like lists, hashtables and trees. These data structures have a range of predefined operations which include mapping, filtering or finding elements. Such bulk operations traverse the collection and process the elements sequentially. Their implementation relies on iterators, which are not applicable to parallel operations due to their sequential nature. We present an approach to parallelizing collection operations in a generic way, used to factor out common parallel operations in collection libraries. Our framework is easy to use and straightforward to extend to new collections. We show how to implement concrete parallel collections such as parallel arrays and parallel hash maps, proposing an efficient solution to parallel hash map construction. Finally, we give benchmarks showing the performance of parallel collection operations."
]
} |
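The bulk-operation pattern described in the cited parallel-collections abstract (split a collection into chunks, apply the operation per chunk in parallel, rebuild the result in order) can be sketched in a few lines of Python. `parallel_map`, the chunking strategy, and the worker count are illustrative choices for this sketch, not part of either framework's API.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(f, xs, workers=4):
    # Split xs into contiguous chunks, map f over each chunk on a
    # worker thread, then flatten the partial results in order.
    # Executor.map preserves input order, so the result matches a
    # sequential map over xs.
    chunk = max(1, len(xs) // workers)
    chunks = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(lambda c: [f(x) for x in c], chunks)
    return [y for part in parts for y in part]
```

The fixed chunking above is exactly the kind of implicit workload-splitting strategy the related-work text argues becomes a bottleneck in distributed-memory settings.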
1406.6163 | 2951369600 | We developed a Functional object-oriented Parallel framework (FooPar) for high-level high-performance computing in Scala. Central to this framework are Distributed Memory Parallel Data structures (DPDs), i.e., collections of data distributed in a shared nothing system together with parallel operations on these data. In this paper, we first present FooPar's architecture and the idea of DPDs and group communications. Then, we show how DPDs can be implemented elegantly and efficiently in Scala based on the Traversable Builder pattern, unifying Functional and Object-Oriented Programming. We prove the correctness and safety of one communication algorithm and show how specification testing (via ScalaCheck) can be used to bridge the gap between proof and implementation. Furthermore, we show that the group communication operations of FooPar outperform those of the MPJ Express open source MPI-bindings for Java, both asymptotically and empirically. FooPar has already been shown to be capable of achieving close-to-optimal performance for dense matrix-matrix multiplication via JNI. In this article, we present results on a parallel implementation of the Floyd-Warshall algorithm in FooPar, achieving more than 94% efficiency compared to the serial version on a cluster using 100 cores for matrices of dimension 38000 x 38000. | FooPar differs from other functional programming frameworks for parallel computations in some key aspects. While frameworks like Eden @cite_0 , Spark @cite_29 , and Scala's own parallel collections @cite_18 try to maximize the level of abstraction, this is mostly done through strategies for data-partitioning and distribution which in turn introduce network and computation bottlenecks. Furthermore, these tools lend themselves poorly to parallel runtime analysis, hindering asymptotic guarantees that might otherwise be achieved. 
To unaware users, "automagic" parallel programming can easily lead to decreased performance due to added overhead and small workloads. With this in mind, FooPar aims at the sweet spot between high-performance computing and highly abstract, maintainable and analyzable programming. This is achieved by focusing on user-defined workload distribution and deemphasizing fault tolerance. In this way, the performance pitfalls of both can be avoided, and FooPar can provide HPC parallelism with the conciseness, efficiency and generality expected from mature Scala libraries, while nicely complementing the existing parallel collections of Scala's standard API for shared memory use. | {
"cite_N": [
"@cite_0",
"@cite_29",
"@cite_18"
],
"mid": [
"2127617483",
"2189465200",
"2104959526"
],
"abstract": [
"Eden extends the non-strict functional language Haskell with constructs to control parallel evaluation of processes. Although processes are defined explicitly, communication and synchronisation issues are handled in a way transparent to the programmer. In order to offer effective support for parallel evaluation, Eden's coordination constructs override the inherently sequential demand-driven (lazy) evaluation strategy of its computation language Haskell. Eden is a general-purpose parallel functional language suitable for developing sophisticated skeletons – which simplify parallel programming immensely – as well as for exploiting more irregular parallelism that cannot easily be captured by a predefined skeleton. The paper gives a comprehensive description of Eden, its semantics, its skeleton-based programming methodology – which is applied in three case studies – its implementation and performance. Furthermore it points at many additional results that have been achieved in the context of the Eden project.",
"MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost. Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time.",
"Most applications manipulate structured data. Modern languages and platforms provide collection frameworks with basic data structures like lists, hashtables and trees. These data structures have a range of predefined operations which include mapping, filtering or finding elements. Such bulk operations traverse the collection and process the elements sequentially. Their implementation relies on iterators, which are not applicable to parallel operations due to their sequential nature. We present an approach to parallelizing collection operations in a generic way, used to factor out common parallel operations in collection libraries. Our framework is easy to use and straightforward to extend to new collections. We show how to implement concrete parallel collections such as parallel arrays and parallel hash maps, proposing an efficient solution to parallel hash map construction. Finally, we give benchmarks showing the performance of parallel collection operations."
]
} |
1406.6163 | 2951369600 | We developed a Functional object-oriented Parallel framework (FooPar) for high-level high-performance computing in Scala. Central to this framework are Distributed Memory Parallel Data structures (DPDs), i.e., collections of data distributed in a shared nothing system together with parallel operations on these data. In this paper, we first present FooPar's architecture and the idea of DPDs and group communications. Then, we show how DPDs can be implemented elegantly and efficiently in Scala based on the Traversable Builder pattern, unifying Functional and Object-Oriented Programming. We prove the correctness and safety of one communication algorithm and show how specification testing (via ScalaCheck) can be used to bridge the gap between proof and implementation. Furthermore, we show that the group communication operations of FooPar outperform those of the MPJ Express open source MPI-bindings for Java, both asymptotically and empirically. FooPar has already been shown to be capable of achieving close-to-optimal performance for dense matrix-matrix multiplication via JNI. In this article, we present results on a parallel implementation of the Floyd-Warshall algorithm in FooPar, achieving more than 94% efficiency compared to the serial version on a cluster using 100 cores for matrices of dimension 38000 x 38000. | While Scala's parallel collections are limited to shared memory systems, FooPar works in both shared-nothing and shared-memory architectures. Taking some inspiration from MPI @cite_7 , FooPar implements most of the essential operations found in MPI at a more convenient and abstract level, as well as expanding upon them. As an example, FooPar supports reductions with arbitrary types and variable sizes; e.g., reduction by list or string concatenation is entirely possible and convenient in FooPar (however inherently unscalable). 
In addition, the performance impact from the use of concatenation or other size-increasing operations is directly visible through the provided asymptotic runtime analysis for operations on the Distributed Memory Parallel Data Structures (cf. ). | {
"cite_N": [
"@cite_7"
],
"mid": [
"1825216778"
],
"abstract": [
"A large number of MPI implementations are currently available, each of which emphasize different aspects of high-performance computing or are intended to solve a specific research problem. The result is a myriad of incompatible MPI implementations, all of which require separate installation, and the combination of which present significant logistical challenges for end users. Building upon prior research, and influenced by experience gained from the code bases of the LAM MPI, LA-MPI, and FT-MPI projects, Open MPI is an all-new, production-quality MPI-2 implementation that is fundamentally centered around component concepts. Open MPI provides a unique combination of novel features previously unavailable in an open-source, production-quality implementation of MPI. Its component architecture provides both a stable platform for third-party research as well as enabling the run-time composition of independent software add-ons. This paper presents a high-level overview the goals, design, and implementation of Open MPI."
]
} |
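The MPI-style group reductions discussed in this row can be modeled sequentially to show how an arbitrary associative operator plugs into a binomial-tree schedule. This is an illustrative simulation of reduce semantics, not FooPar's or Open MPI's actual implementation; it also makes the noted unscalability of reduction by concatenation visible, since the partial results grow with every round.

```python
def tree_reduce(values, op):
    # Simulate a binomial-tree reduction over p "ranks": in round d,
    # rank i (a multiple of 2*d) combines its value with the one held
    # by rank i + d, so only ceil(log2(p)) communication rounds occur.
    vals = list(values)
    p = len(vals)
    d = 1
    while d < p:
        for i in range(0, p, 2 * d):
            if i + d < p:
                vals[i] = op(vals[i], vals[i + d])
        d *= 2
    return vals[0]
```

With integer addition this mirrors an MPI_Reduce with MPI_SUM; with string concatenation the value held at each surviving rank doubles in size per round, which is exactly why such reductions are convenient but inherently unscalable.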
1406.6163 | 2951369600 | We developed a Functional object-oriented Parallel framework (FooPar) for high-level high-performance computing in Scala. Central to this framework are Distributed Memory Parallel Data structures (DPDs), i.e., collections of data distributed in a shared nothing system together with parallel operations on these data. In this paper, we first present FooPar's architecture and the idea of DPDs and group communications. Then, we show how DPDs can be implemented elegantly and efficiently in Scala based on the Traversable Builder pattern, unifying Functional and Object-Oriented Programming. We prove the correctness and safety of one communication algorithm and show how specification testing (via ScalaCheck) can be used to bridge the gap between proof and implementation. Furthermore, we show that the group communication operations of FooPar outperform those of the MPJ Express open source MPI-bindings for Java, both asymptotically and empirically. FooPar has already been shown to be capable of achieving close-to-optimal performance for dense matrix-matrix multiplication via JNI. In this article, we present results on a parallel implementation of the Floyd-Warshall algorithm in FooPar, achieving more than 94% efficiency compared to the serial version on a cluster using 100 cores for matrices of dimension 38000 x 38000. | FooPar shares goals with the partitioned global address space (PGAS) programming model in the sense that the reference semantics of shared memory systems is combined with the SPMD style of programming. Prominent examples of PGAS languages are Unified Parallel C and Co-Array Fortran, among others @cite_23 . Focusing on performance and programmability for next-generation architectures, novel languages like X10 and Chapel provide richer execution frameworks and also allow asynchronous creation of tasks @cite_19 . 
All these languages either resemble and extend existing languages or are designed from scratch; their features are usually accessed via syntactic sugar. FooPar, in contrast, is more oriented towards abstraction by employing distributed data structures and combining this with the mathematical abstraction inherently integrated in functional languages like Scala. This approach is somewhat similar to that of STAPL @cite_25 ; however, the combination with functional programming has the potential to be more productive and produce more analyzable code. | {
"cite_N": [
"@cite_19",
"@cite_25",
"@cite_23"
],
"mid": [
"1978768502",
"1963494026",
""
],
"abstract": [
"We present a summary of the current state of DARPA's HPCS language project. We describe the challenges facing any new language for scalable parallel computing, including the strong competition presented by MPI and the existing Partitioned Global Address Space (PGAS) Languages. We identify some of the major features of the proposed languages, using MPI and the PGAS languages for comparison, and describe the opportunities for higher productivity along with the implementation challenges. Finally, we present the conclusions of a recent workshop in which a concrete plan for the next few years was proposed.",
"The Standard Template Adaptive Parallel Library (stapl) is a high-productivity parallel programming framework that extends C++ and stl with unified support for shared and distributed memory parallelism. stapl provides distributed data structures (pContainers) and parallel algorithms (pAlgorithms) and a generic methodology for extending them to provide customized functionality. The stapl runtime system provides the abstraction for communication and program execution. In this paper, we describe the major components of stapl and present performance results for both algorithms and data structures showing scalability up to tens of thousands of processors.",
""
]
} |
1406.5977 | 1599885642 | Applications in cyber-physical systems are increasingly coupled with online instruments to perform long running, continuous data processing. Such "always on" dataflow applications are dynamic, where they need to change the applications logic and performance at runtime, in response to external operational needs. Floe is a continuous dataflow framework that is designed to be adaptive for dynamic applications on Cloud infrastructure. It offers advanced dataflow patterns like BSP and MapReduce for flexible and holistic composition of streams and files, and supports dynamic recomposition at runtime with minimal impact on the execution. Adaptive resource allocation strategies allow our framework to effectively use elastic Cloud resources to meet varying data rates. We illustrate the design patterns of Floe by running an integration pipeline and a tweet clustering application from the Smart Power Grids domain on a private Eucalyptus Cloud. The responsiveness of our resource adaptation is validated through simulations for periodic, bursty and random workloads. | Scientific workflow systems such as Kepler COMAD @cite_6 and Pegasus @cite_12 have explored different facets of distributed and data intensive processing of control and data flow applications. Most of these systems have been developed for one-shot batch processing of workflows using files or objects, while some, such as Kepler COMAD, also support operations over collections. There has also been some work on incorporating stream processing into these workflow systems, such as Kepler @cite_2 and Confluence @cite_14 ; however, these systems lack support for rich programming abstractions for stream-based execution, especially support for dynamic key mapping (MapReduce) and BSP models that are supported by Floe. | {
"cite_N": [
"@cite_14",
"@cite_12",
"@cite_6",
"@cite_2"
],
"mid": [
"2019936179",
"2110372301",
"1983833794",
"2165942049"
],
"abstract": [
"Data streams have become pervasive and data production rates are increasing exponentially, driven by advances in technology, for example the proliferation of sensors, smart phones, and their applications. This fact effectuates an unprecedented opportunity to build real-time monitoring and analytics applications, which when used collaboratively and interactively, will provide insights to every aspect of our environment, both in the business and scientific domains. In our previous work, we have identified the need for workflow management systems which are capable of orchestrating the processing of multiple heterogeneous data streams, while enabling their users to interact collaboratively with the workflows in real time. In this paper, we describe CONFLuEnCE (CONtinuous workFLow ExeCution Engine), which is an implementation of our continuous workflow model. CONFLuEnCE is built on top of Kepler, an existing workflow management system, by fusing stream semantics and stream processing methods as another computational domain. Furthermore, we explicate our experiences in designing and implementing real-life business and scientific continuous workflow monitoring applications, which attest to the ease of use and applicability of our system.",
"1 Motivation Grid computing has made great progress in the last few years. The basic mechanisms for accessing remote resources have been developed as part of the Globus Toolkit and are now widely deployed and used. Among such mechanisms are: § Information services, which allow for the discovery and monitoring of resources. The information provided can be used to find the available resources and select the resources which are the most appropriate for the task. § Security services, which allow users and resources to mutually authenticate and allows the resources to authorize users based on local policies. § Resource management, which allows for the scheduling of jobs on particular resources. § Data management services, which enable users and applications to manage large, distributed and replicated data sets. Some of the available services deal with locating particular data sets, others with efficiently moving large amounts of data across wide area networks. With the use of the above mechanisms, one can manually find out about the resources and schedule the desired computations and data movements. However, this process is time consuming and can potentially be complex. As the result it is becoming increasingly necessary to develop higher level services which can automate the process and provide an adequate level of performance and reliability.",
"Many scientific disciplines are now data and information driven, and new scientific knowledge is often gained by scientists putting together data analysis and knowledge discovery “pipelines”. A related trend is that more and more scientific communities realize the benefits of sharing their data and computational services, and are thus contributing to a distributed data and computational community infrastructure (a.k.a. “the Grid”). However, this infrastructure is only a means to an end and scientists ideally should be bothered little with its existence. The goal is for scientists to focus on development and use of what we call scientific workflows. These are networks of analytical steps that may involve, e.g., database access and querying steps, data analysis and mining steps, and many other steps including computationally intensive jobs on high performance cluster computers. In this paper we describe characteristics of and requirements for scientific workflows as identified in a number of our application projects. We then elaborate on Kepler, a particular scientific workflow system, currently under development across a number of scientific data management projects. We describe some key features of Kepler and its underlying Ptolemyii system, planned extensions, and areas of future research. Kepler is a communitydriven, open source project, and we always welcome related projects and new contributors to join.",
"Scientific workflows are commonplace in eScience applications. Yet, the lack of integrated support for data models, including streaming data, structured collections and files, is limiting the ability of workflows to support emerging applications in energy informatics that are stream oriented. This is compounded by the absence of Cloud data services that support reliable and performant streams. In this paper, we propose and present a scientific workflow framework that supports streams as first-class data, and is optimized for performant and reliable execution across desktop and Cloud platforms. The workflow framework features and its empirical evaluation on a private Eucalyptus cloud are presented."
]
} |
1406.5977 | 1599885642 | Applications in cyber-physical systems are increasingly coupled with online instruments to perform long running, continuous data processing. Such "always on" dataflow applications are dynamic, where they need to change the applications logic and performance at runtime, in response to external operational needs. Floe is a continuous dataflow framework that is designed to be adaptive for dynamic applications on Cloud infrastructure. It offers advanced dataflow patterns like BSP and MapReduce for flexible and holistic composition of streams and files, and supports dynamic recomposition at runtime with minimal impact on the execution. Adaptive resource allocation strategies allow our framework to effectively use elastic Cloud resources to meet varying data rates. We illustrate the design patterns of Floe by running an integration pipeline and a tweet clustering application from the Smart Power Grids domain on a private Eucalyptus Cloud. The responsiveness of our resource adaptation is validated through simulations for periodic, bursty and random workloads. | Systems such as Granules @cite_0 , on the other hand, focus on particular abstractions such as MapReduce but fail to provide a generic stream processing model that can integrate various dataflow patterns with advanced patterns such as MapReduce. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2150603860"
],
"abstract": [
"Cloud computing has gained significant traction in recent years. The Map-Reduce framework is currently the most dominant programming model in cloud computing settings. In this paper, we describe Granules, a lightweight, streaming-based runtime for cloud computing which incorporates support for the Map-Reduce framework. Granules provides rich lifecycle support for developing scientific applications with support for iterative, periodic and data driven semantics for individual computations and pipelines. We describe our support for variants of the Map-Reduce framework. The paper presents a survey of related work in this area. Finally, this paper describes our performance evaluation of various aspects of the system, including (where possible) comparisons with other comparable systems."
]
} |
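The distinction drawn here, a fixed MapReduce abstraction versus a generic stream processing model, can be illustrated with a toy continuous MapReduce in Python: mapped key-value pairs are folded into per-key state as each event arrives, instead of in one batch round. The function name and signature are illustrative only, not Granules' or Floe's API.

```python
from collections import defaultdict

def stream_map_reduce(events, mapper, reducer, init):
    # Fold mapped (key, value) pairs into per-key state as each event
    # arrives, yielding a snapshot of the running aggregates -- the
    # continuous analogue of a single batch MapReduce round.
    state = defaultdict(lambda: init)
    for event in events:
        for key, value in mapper(event):
            state[key] = reducer(state[key], value)
        yield dict(state)
```

For example, a streaming word count maps each arriving line to (word, 1) pairs and reduces with addition, so every event updates the counts incrementally.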
1406.5977 | 1599885642 | Applications in cyber-physical systems are increasingly coupled with online instruments to perform long running, continuous data processing. Such "always on" dataflow applications are dynamic, where they need to change the applications logic and performance at runtime, in response to external operational needs. Floe is a continuous dataflow framework that is designed to be adaptive for dynamic applications on Cloud infrastructure. It offers advanced dataflow patterns like BSP and MapReduce for flexible and holistic composition of streams and files, and supports dynamic recomposition at runtime with minimal impact on the execution. Adaptive resource allocation strategies allow our framework to effectively use elastic Cloud resources to meet varying data rates. We illustrate the design patterns of Floe by running an integration pipeline and a tweet clustering application from the Smart Power Grids domain on a private Eucalyptus Cloud. The responsiveness of our resource adaptation is validated through simulations for periodic, bursty and random workloads. | S4 @cite_19 and the IBM Stream Processing Core (SPC) @cite_3 provide a distributed stream processing environment very similar to Floe's. They also provide a generic programming model in which the user can define processing elements, similar to Floe's pellets, that can be composed to form a continuous dataflow application. However, unlike Floe, these systems lack elastic scaling based on the dynamic data rate. | {
"cite_N": [
"@cite_19",
"@cite_3"
],
"mid": [
"2119745055",
"2162957845"
],
"abstract": [
"S4 is a general-purpose, distributed, scalable, partially fault-tolerant, pluggable platform that allows programmers to easily develop applications for processing continuous unbounded streams of data. Keyed data events are routed with affinity to Processing Elements (PEs), which consume the events and do one or both of the following: (1) emit one or more events which may be consumed by other PEs, (2) publish results. The architecture resembles the Actors model, providing semantics of encapsulation and location transparency, thus allowing applications to be massively concurrent while exposing a simple programming interface to application developers. In this paper, we outline the S4 architecture in detail, describe various applications, including real-life deployments. Our design is primarily driven by large scale applications for data mining and machine learning in a production environment. We show that the S4 design is surprisingly flexible and lends itself to run in large clusters built with commodity hardware.",
"The Stream Processing Core (SPC) is distributed stream processing middleware designed to support applications that extract information from a large number of digital data streams. In this paper, we describe the SPC programming model which, to the best of our knowledge, is the first to support stream-mining applications using a subscription-like model for specifying stream connections as well as to provide support for non-relational operators. This enables stream-mining applications to tap into, analyze and track an ever-changing array of data streams which may contain information relevant to the streaming-queries placed on it. We describe the design, implementation, and experimental evaluation of the SPC distributed middleware, which deploys applications on to the running system in an incremental fashion, making stream connections as required. Using micro-benchmarks and a representative large-scale synthetic stream-mining application, we evaluate the performance of the control and data paths of the SPC middleware."
]
} |
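The key affinity described in the S4 abstract, where every event with a given key is routed to the same processing-element instance, can be sketched with a stable hash. The `ProcessingElement` class below is a hypothetical stand-in for illustration, not S4's, SPC's, or Floe's actual API.

```python
import zlib

class ProcessingElement:
    # Hypothetical stand-in for an S4-style PE: it consumes keyed
    # events and keeps purely local state, so no cross-PE sharing
    # is ever needed.
    def __init__(self):
        self.seen = []

    def process(self, key, value):
        self.seen.append((key, value))

def route(pes, key, value):
    # Key affinity: a stable hash of the key selects the PE instance,
    # so all events for one key land on the same local state.
    idx = zlib.crc32(key.encode("utf-8")) % len(pes)
    pes[idx].process(key, value)
    return idx
```

Because the hash is deterministic, repartitioning only happens when the number of PE instances changes, which is precisely where the elastic-scaling gap noted above becomes visible.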
1406.5977 | 1599885642 | Applications in cyber-physical systems are increasingly coupled with online instruments to perform long running, continuous data processing. Such "always on" dataflow applications are dynamic, where they need to change the applications logic and performance at runtime, in response to external operational needs. Floe is a continuous dataflow framework that is designed to be adaptive for dynamic applications on Cloud infrastructure. It offers advanced dataflow patterns like BSP and MapReduce for flexible and holistic composition of streams and files, and supports dynamic recomposition at runtime with minimal impact on the execution. Adaptive resource allocation strategies allow our framework to effectively use elastic Cloud resources to meet varying data rates. We illustrate the design patterns of Floe by running an integration pipeline and a tweet clustering application from the Smart Power Grids domain on a private Eucalyptus Cloud. The responsiveness of our resource adaptation is validated through simulations for periodic, bursty and random workloads. | StreamCloud @cite_4 is another stream processing system that focuses on scalability with respect to the stream data rates. StreamCloud achieves this by partitioning the incoming data stream with semantic awareness of the operator in the downstream instance, achieving scalability through intra-operator parallelism and dynamic resource allocation. Esc @cite_5 is a novel elastic stream processing platform that offers a streaming model using Cloud resources on demand to adapt to changing computational demands. Esc allows the end user to specify adaptation strategies in terms of data partitioning as well as splitting individual tasks into smaller parallel tasks based on the workload. 
The optimizations in Floe do not assume any pellet semantics; they are performed either implicitly by the framework, by exploiting the data-parallel nature of the operations, or explicitly by the application composer, by using the data-based choice or round-robin split patterns for load balancing. | {
"cite_N": [
"@cite_5",
"@cite_4"
],
"mid": [
"2164602789",
"2143357067"
],
"abstract": [
"Today, most tools for processing big data are batch-oriented. However, many scenarios require continuous, online processing of data streams and events. We present ESC, a new stream computing engine. It is designed for computations with real-time demands, such as online data mining. It offers a simple programming model in which programs are specified by directed acyclic graphs (DAGs). The DAG defines the data flow of a program, vertices represent operations applied to the data. The data which are streaming through the graph are expressed as key value pairs. ESC allows programmers to focus on the problem at hand and deals with distribution and fault tolerance. Furthermore, it is able to adapt to changing computational demands. In the cloud, ESC can dynamically attach and release machines to adjust the computational capacities to the current needs. This is crucial for stream computing since the amount of data fed into the system is not under the platform's control. We substantiate the concepts we propose in this paper with an evaluation based on a high-frequency trading scenario.",
"Data streaming has become an important paradigm for the real-time processing of continuous data flows in domains such as finance, telecommunications, networking, Some applications in these domains require to process massive data flows that current technology is unable to manage, that is, streams that, even for a single query operator, require the capacity of potentially many machines. Research efforts on data streaming have mainly focused on scaling in the number of queries or query operators, but overlooked the scalability issue with respect to the stream volume. In this paper, we present StreamCloud a large scale data streaming system for processing large data stream volumes. We focus on how to parallelize continuous queries to obtain a highly scalable data streaming infrastructure. StreamCloud goes beyond the state of the art by using a novel parallelization technique that splits queries into subqueries that are allocated to independent sets of nodes in a way that minimizes the distribution overhead. StreamCloud is implemented as a middleware and is highly independent of the underlying data streaming engine. We explore and evaluate different strategies to parallelize data streaming and tackle with the main bottlenecks and overheads to achieve scalability. The paper presents the system design, implementation and a thorough evaluation of the scalability of the fully implemented system."
]
} |
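A threshold policy is one simple way to realize the rate-driven elasticity that StreamCloud and Esc aim at. The sketch below is a toy stand-in for such adaptive allocation; the utilization thresholds and per-VM capacity are illustrative parameters, not values from either paper.

```python
def autoscale(rates, capacity_per_vm, low=0.5, high=0.9, start_vms=1):
    # Greedy threshold policy: acquire a VM when utilization exceeds
    # `high`, release one when it drops below `low`. Returns the VM
    # count chosen after observing each data rate.
    vms, history = start_vms, []
    for rate in rates:
        util = rate / (vms * capacity_per_vm)
        if util > high:
            vms += 1
        elif util < low and vms > 1:
            vms -= 1
        history.append(vms)
    return history
```

The gap between `low` and `high` gives the policy hysteresis, so a bursty rate hovering near one threshold does not cause VMs to be acquired and released on every observation.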
1406.5977 | 1599885642 | Applications in cyber-physical systems are increasingly coupled with online instruments to perform long running, continuous data processing. Such "always on" dataflow applications are dynamic, where they need to change the applications logic and performance at runtime, in response to external operational needs. Floe is a continuous dataflow framework that is designed to be adaptive for dynamic applications on Cloud infrastructure. It offers advanced dataflow patterns like BSP and MapReduce for flexible and holistic composition of streams and files, and supports dynamic recomposition at runtime with minimal impact on the execution. Adaptive resource allocation strategies allow our framework to effectively use elastic Cloud resources to meet varying data rates. We illustrate the design patterns of Floe by running an integration pipeline and a tweet clustering application from the Smart Power Grids domain on a private Eucalyptus Cloud. The responsiveness of our resource adaptation is validated through simulations for periodic, bursty and random workloads. | Application dynamism has been explored using concepts such as frames, templates and dynamic embedding @cite_15 . However, these are in the context of scientific workflows, where the choice for the dynamic task is finalized before the execution of that task begins. This is in contrast to the requirement in continuous dataflows, where an executing task needs to be seamlessly updated at runtime, as is supported by Floe. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1575598478"
],
"abstract": [
"While most scientific workflows systems are based on dataflow, some amount of control-flow modeling is often necessary for engineering fault-tolerant, robust, and adaptive workflows. However, control-flow modeling within dataflow often results in workflow specifications that are hard to comprehend, reuse, and maintain. We describe new modeling constructs to address these issues that provide a structured approach for modeling control-flow within scientific workflows, and discuss their implementation within the Kepler scientific workflow system."
]
} |
1406.5457 | 1950786671 | Program behavior may depend on parameters, which are either configured before compilation time, or provided at run-time, e.g., by sensors or other input devices. Parametric program analysis explores how different parameter settings may affect the program behavior. In order to infer invariants depending on parameters, we introduce parametric strategy iteration. This algorithm determines the precise least solution of systems of integer equations depending on surplus parameters. Conceptually, our algorithm performs ordinary strategy iteration on the given integer system for all possible parameter settings in parallel. This is made possible by means of region trees to represent the occurring piecewise affine functions. We indicate that each required operation on these trees is polynomial-time if only constantly many parameters are involved. Parametric strategy iteration for systems of integer equations allows to construct parametric integer interval analysis as well as parametric analysis of differences of integer variables. It thus provides a general technique to realize precise parametric program analysis if numerical properties of integer variables are of concern. | Relational program analyses, e.g., by means of polyhedra, have been around for a long time @cite_14 @cite_5 ; these also allow inferring linear relationships between parameters and program variables. The resulting invariants, however, are convex and thus do not allow differentiating between different linear dependencies in different regions. In order to obtain invariants as precise as ours, one would have to combine polyhedral domains with some form of trace partitioning @cite_9 . These kinds of analysis, though, must rely on widening and narrowing to enforce termination, whereas our algorithms avoid widening and narrowing completely and directly compute least solutions, i.e., the best possible parametric invariants. | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_14"
],
"mid": [
"2062873592",
"1515906028",
"2132661148"
],
"abstract": [
"Since its inception as a student project in 2001, initially just for the handling (as the name implies) of convex polyhedra, the Parma Polyhedra Library has been continuously improved and extended by joining scrupulous research on the theoretical foundations of (possibly non-convex) numerical abstractions to a total adherence to the best available practices in software development. Even though it is still not fully mature and functionally complete, the Parma Polyhedra Library already offers a combination of functionality, reliability, usability and performance that is not matched by similar, freely available libraries. In this paper, we present the main features of the current version of the library, emphasizing those that distinguish it from other similar libraries and those that are important for applications in the field of analysis and verification of hardware and software systems.",
"When designing a tractable static analysis, one usually needs to approximate the trace semantics. This paper proposes a systematic way of regaining some knowledge about the traces by performing the abstraction over a partition of the set of traces instead of the set itself. This systematic refinement is not only theoretical but tractable: we give automatic procedures to build pertinent partitions of the traces and show the efficiency on an implementation integrated in the Astree static analyzer, a tool capable of dealing with industrial-size software.",
""
]
} |
1406.5457 | 1950786671 | Program behavior may depend on parameters, which are either configured before compilation time, or provided at run-time, e.g., by sensors or other input devices. Parametric program analysis explores how different parameter settings may affect the program behavior. In order to infer invariants depending on parameters, we introduce parametric strategy iteration. This algorithm determines the precise least solution of systems of integer equations depending on surplus parameters. Conceptually, our algorithm performs ordinary strategy iteration on the given integer system for all possible parameter settings in parallel. This is made possible by means of region trees to represent the occurring piecewise affine functions. We indicate that each required operation on these trees is polynomial-time if only constantly many parameters are involved. Parametric strategy iteration for systems of integer equations allows to construct parametric integer interval analysis as well as parametric analysis of differences of integer variables. It thus provides a general technique to realize precise parametric program analysis if numerical properties of integer variables are of concern. | Parametric analysis of a different kind has also been proposed by Reineke @cite_2 in the context of worst-case execution time (WCET). They rely on parametric linear programming as implemented by the PIP tool @cite_17 and infer the dependence of the WCET on architecture parameters such as the cache size by means of black-box sampling of the WCETs obtained for different parameter settings. | {
"cite_N": [
"@cite_17",
"@cite_2"
],
"mid": [
"2340604309",
"1964765524"
],
"abstract": [
"Semantic analysis of computer programs leads to solving parametric integer programming problems. This article is thus devoted to the construction of an algorithm of this kind.",
"Platforms are families of microarchitectures that implement the same instruction set architecture but that differ in architectural parameters, such as frequency, memory latencies, or memory sizes. The choice of these parameters influences execution time, implementation cost, and energy consumption."
]
} |
1406.5457 | 1950786671 | Program behavior may depend on parameters, which are either configured before compilation time, or provided at run-time, e.g., by sensors or other input devices. Parametric program analysis explores how different parameter settings may affect the program behavior. In order to infer invariants depending on parameters, we introduce parametric strategy iteration. This algorithm determines the precise least solution of systems of integer equations depending on surplus parameters. Conceptually, our algorithm performs ordinary strategy iteration on the given integer system for all possible parameter settings in parallel. This is made possible by means of region trees to represent the occurring piecewise affine functions. We indicate that each required operation on these trees is polynomial-time if only constantly many parameters are involved. Parametric strategy iteration for systems of integer equations allows to construct parametric integer interval analysis as well as parametric analysis of differences of integer variables. It thus provides a general technique to realize precise parametric program analysis if numerical properties of integer variables are of concern. | Our data structure of region trees is a refinement of the tree-like data-structure Quast provided by the PIP tool @cite_17 . Similar data-structures are also used by Monniaux @cite_18 to represent the resulting invariants, and by @cite_20 to differentiate between different phases of a loop iteration. In our implementation, we additionally enforce a total ordering on the constraints in the tree nodes and allow arbitrary values at the leaves. Total orderings on constraints have also been proposed for linear decision diagrams @cite_21 . Variants of LDDs later have been used to implement non-convex linear program invariants @cite_16 @cite_13 and in @cite_6 for representing linear arithmetic formulas when solving predicate abstraction queries. 
In our application, sharing of subtrees is not helpful, since each node @math represents a conjunction of the inequalities which is constituted by the path reaching @math from the root of the data-structure. Moreover, our application requires that the leaves of the data-structure are not just annotated with a Boolean value (as for LDDs), but with values from various sets, namely strategic choices, affine functions or even pairs thereof. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_6",
"@cite_16",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"2140678594",
"2122719199",
"",
"1572129784",
"",
"50815725",
"2340604309"
],
"abstract": [
"We propose a method for automatically generating abstract transformers for static analysis by abstract interpretation. The method focuses on linear constraints on programs operating on rational, real or floating-point variables and containing linear assignments and tests. In addition to loop-free code, the same method also applies for obtaining least fixed points as functions of the precondition, which permits the analysis of loops and recursive functions. Our algorithms are based on new quantifier elimination and symbolic manipulation techniques. Given the specification of an abstract domain, and a program block, our method automatically outputs an implementation of the corresponding abstract transformer. It is thus a form of program transformation. The motivation of our work is data-flow synchronous programming languages, used for building control-command embedded systems, but it also applies to imperative and functional programming.",
"Boolean manipulation and existential quantification of numeric variables from linear arithmetic (LA) formulas is at the core of many program analysis and software model checking techniques (e.g., predicate abstraction). We present a new data structure, Linear Decision Diagrams (LDDs), to represent formulas in LA and its fragments, which has certain properties that make it efficient for such tasks. LDDs can be seen as an extension of Difference Decision Diagrams (DDDs) to full LA. Beyond this extension, we make three key contributions. First, we extend sifting-based dynamic variable ordering (DVO) from BDDs to LDDs. Second, we develop, implement, and evaluate several algorithms for existential quantification. Third, we implement LDDs inside CUDD, a state-of-the-art BDD package, and evaluate them on a large benchmark consisting of 850 functions derived from the source code of 25 open source programs. Overall, our experiments indicate that LDDs are an effective data structure for program analysis tasks.",
"",
"Numeric abstract domains are widely used in program analyses. The simplest numeric domains over-approximate disjunction by an imprecise join, typically yielding path-insensitive analyses. This problem is addressed by domain refinements, such as finite powersets, which provide exact disjunction. However, developing correct and efficient disjunctive refinement is challenging. First, there must be an efficient way to represent and manipulate abstract values. The simple approach of using \"sets of base abstract values\" is often not scalable. Second, while a widening must strike the right balance between precision and the rate of convergence, it is notoriously hard to get correct. In this paper, we present an implementation of the Boxes abstract domain - a refinement of the well-known Box (or Intervals) domain with finite disjunctions. An element of Boxes is a finite union of boxes, i.e., expressible as a propositional formula over upper- and lower-bounds constraints. Our implementation is symbolic, and weds the strengths of Binary Decision Diagrams (BDDs) and Box. The complexity of the operations (meet, join, transfer functions, and widening) is polynomial in the size of the operands. Empirical evaluation indicates that the performance of Boxes is superior to other existing refinements of Box with comparable expressiveness.",
"",
"Verification using static analysis often hinges on precise numeric invariants. Numeric domains of infinite height can infer these invariants, but require widening/narrowing, which complicates the fixpoint computation and is often too imprecise. As a consequence, several strategies have been proposed to prevent a precision loss during widening or to narrow in a smarter way. Most of these strategies are difficult to retrofit into an existing analysis as they either require a pre-analysis, an on-the-fly modification of the CFG, or modifications to the fixpoint algorithm. We propose to encode widening and its various refinements from the literature as cofibered abstract domains that wrap standard numeric domains, thereby providing a modular way to add numeric analysis to any static analysis, that is, without modifying the fixpoint engine. Since these domains cannot make any assumptions about the structure of the program, our approach is suitable to the analysis of executables, where the (potentially irreducible) CFG is re-constructed on-the-fly. Moreover, our domain-based approach not only mirrors the precision of more intrusive approaches in the literature but also requires fewer iterations to find a fixpoint of loops than many heuristics that merely aim for precision.",
"Semantic analysis of computer programs leads to solving parametric integer programming problems. This article is thus devoted to the construction of an algorithm of this kind."
]
} |
1406.5670 | 2951755740 | 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks. | There has been a large body of insightful research on analyzing 3D CAD model collections. Most of the works @cite_27 @cite_6 @cite_15 use an assembly-based approach to build deformable part-based models. These methods are limited to a specific class of shapes with small variations, with surface correspondence being one of the key problems in such approaches. Since we are interested in shapes across a variety of objects with large variations, and part annotation is tedious and expensive, assembly-based modeling can be rather cumbersome. For surface reconstruction of corrupted scanning input, most related works @cite_26 @cite_31 are largely based on smooth interpolation or extrapolation.
These approaches can only tackle small missing holes or deficiencies. Template-based methods @cite_7 are able to deal with large space corruption but are mostly limited by the quality of available templates and often do not provide different semantic interpretations of reconstructions. | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_15",
"@cite_6",
"@cite_27",
"@cite_31"
],
"mid": [
"2002386777",
"2068337491",
"2092773680",
"2077678691",
"2161960196",
"2050893212"
],
"abstract": [
"When designing novel algorithms for geometric processing and analysis, researchers often assume that the input conforms to several requirements. On the other hand, polygon meshes obtained from acquisition of real-world objects typically exhibit several defects, and thus are not appropriate for a widespread exploitation. In this paper, an algorithm is presented that strives to convert a low-quality digitized polygon mesh to a single manifold and watertight triangle mesh without degenerate or intersecting elements. Differently from most existing approaches that globally resample the model to produce a fixed version, the algorithm presented here attempts to modify the input mesh only locally within the neighborhood of undesired configurations. After having converted the input to a single combinatorial manifold, the algorithm proceeds iteratively by removing growing neighborhoods of undesired elements and by patching the resulting surface gaps until all the “defects\" are removed. Though this heuristic approach is not guaranteed to converge, it was tested on more than 400 low-quality models and always succeeded. Furthermore, with respect to similar existing algorithms, it proved to be computationally efficient and produced more accurate results while using fewer triangles.",
"This paper presents a technique that allows quick conversion of acquired low-quality data from consumer-level scanning devices to high-quality 3D models with labeled semantic parts and meanwhile their assembly reasonably close to the underlying geometry. This is achieved by a novel structure recovery approach that is essentially local to global and bottom up, enabling the creation of new structures by assembling existing labeled parts with respect to the acquired data. We demonstrate that using only a small-scale shape repository, our part assembly approach is able to faithfully recover a variety of high-level structures from only a single-view scan of man-made objects acquired by the Kinect system, containing a highly noisy, incomplete 3D point cloud and a corresponding RGB image.",
"We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis.",
"In this paper, we investigate a data-driven synthesis approach to constructing 3D geometric surface models. We provide methods with which a user can search a large database of 3D meshes to find parts of interest, cut the desired parts out of the meshes with intelligent scissoring, and composite them together in different ways to form new objects. The main benefit of this approach is that it is both easy to learn and able to produce highly detailed geometric models -- the conceptual design for new models comes from the user, while the geometric details come from examples in the database. The focus of the paper is on the main research issues motivated by the proposed approach: (1) interactive segmentation of 3D surfaces, (2) shape-based search to find 3D models with parts matching a query, and (3) composition of parts to form new models. We provide new research contributions on all three topics and incorporate them into a prototype modeling system. Experience with our prototype system indicates that it allows untrained users to create interesting and detailed 3D models.",
"Assembly-based modeling is a promising approach to broadening the accessibility of 3D modeling. In assembly-based modeling, new models are assembled from shape components extracted from a database. A key challenge in assembly-based modeling is the identification of relevant components to be presented to the user. In this paper, we introduce a probabilistic reasoning approach to this problem. Given a repository of shapes, our approach learns a probabilistic graphical model that encodes semantic and geometric relationships among shape components. The probabilistic model is used to present components that are semantically and stylistically compatible with the 3D model that is being assembled. Our experiments indicate that the probabilistic model increases the relevance of presented components.",
"We present cone carving, a novel space carving technique supporting topologically correct surface reconstruction from an incomplete scanned point cloud. The technique utilizes the point samples not only for local surface position estimation but also to obtain global visibility information under the assumption that each acquired point is visible from a point lying outside the shape. This enables associating each point with a generalized cone, called the visibility cone, that carves a portion of the outside ambient space of the shape from the inside out. These cones collectively provide a means to better approximate the signed distances to the shape specifically near regions containing large holes in the scan, allowing one to infer the correct surface topology. Combining the new distance measure with conventional RBF, we define an implicit function whose zero level set defines the surface of the shape. We demonstrate the utility of cone carving in coping with significant missing data and raw scans from a commercial 3D scanner as well as synthetic input."
]
} |
1406.5670 | 2951755740 | 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks. | The great generative power of deep learning models has allowed researchers to build deep generative models for 2D shapes: most notably the DBN @cite_12 to generate handwritten digits and ShapeBM @cite_17 to generate horses, etc. These models are able to effectively capture intra-class variations. We also desire this generative ability for shape reconstruction, but we focus on more complex real-world object shapes in 3D. For 2.5D deep learning, @cite_34 and @cite_21 build discriminative convolutional neural nets to model images and depth maps. Although their algorithms are applied to depth maps, they use depth as an extra 2D channel instead of modeling full 3D.
Unlike @cite_34 , our model learns a shape distribution over a voxel grid. To the best of our knowledge, ours is the first work to build 3D deep learning models. To deal with the dimensionality of high resolution voxels, inspired by @cite_39 (the model there is precisely a convolutional DBM, where all the connections are undirected, while ours is a convolutional DBN), we apply the same convolution technique in our model. | {
"cite_N": [
"@cite_21",
"@cite_39",
"@cite_34",
"@cite_12",
"@cite_17"
],
"mid": [
"2952771913",
"1971014294",
"",
"2136922672",
"2075505763"
],
"abstract": [
"In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an average precision of 37.3%, which is a 56% relative improvement over existing methods. We then focus on the task of instance segmentation where we label pixels belonging to object instances found by our detector. For this task, we propose a decision forest approach that classifies pixels in the detection window as foreground or background using a family of unary and binary tests that query shape and geocentric pose features. Finally, we use the output from our object detectors in an existing superpixel classification framework for semantic scene segmentation and achieve a 24% relative improvement over current state-of-the-art for the object categories that we study. We believe advances such as those represented in this paper will facilitate the use of perception in fields like robotics.",
"There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks (DBNs); however, scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model that scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique that shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.",
"",
"We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.",
"A good model of object shape is essential in applications such as segmentation, detection, inpainting and graphics. For example, when performing segmentation, local constraints on the shapes can help where object boundaries are noisy or unclear, and global constraints can resolve ambiguities where background clutter looks similar to parts of the objects. In general, the stronger the model of shape, the more performance is improved. In this paper, we use a type of deep Boltzmann machine (Salakhutdinov and Hinton, International Conference on Artificial Intelligence and Statistics, 2009) that we call a Shape Boltzmann Machine (SBM) for the task of modeling foreground background (binary) and parts-based (categorical) shape images. We show that the SBM characterizes a strong model of shape, in that samples from the model look realistic and it can generalize to generate samples that differ from training examples. We find that the SBM learns distributions that are qualitatively and quantitatively better than existing models for this task."
]
} |
1406.5670 | 2951755740 | 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks. | Unlike static object recognition in a single image, the sensor in active object recognition @cite_28 can move to new viewpoints to gain more information about the object. Therefore, the Next-Best-View problem @cite_23 of doing view planning based on the current observation arises. Most previous works in active object recognition @cite_24 @cite_35 build their view planning strategy using 2D color information. However, this multi-view problem is intrinsically 3D in nature. @cite_30 @cite_38 implement the idea in real-world robots, but they assume that there is only one object associated with each class, reducing their problem to instance-level recognition with no intra-class variance.
Similar to @cite_24 , we use mutual information to decide the NBV. However, we consider this problem at the precise voxel level allowing us to infer how voxels in a 3D region would contribute to the reduction of recognition uncertainty. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_38",
"@cite_28",
"@cite_24",
"@cite_23"
],
"mid": [
"2093725709",
"2043948511",
"1761788141",
"1505999926",
"2144691386",
"2101024363"
],
"abstract": [
"One of the central problems in computer vision is the detection of semantically important objects and the estimation of their pose. Most of the work in object detection has been based on single image processing and its performance is limited by occlusions and ambiguity in appearance and geometry. This paper proposes an active approach to object detection by controlling the point of view of a mobile depth camera. When an initial static detection phase identifies an object of interest, several hypotheses are made about its class and orientation. The sensor then plans a sequence of viewpoints, which balances the amount of energy used to move with the chance of identifying the correct hypothesis. We formulate an active M-ary hypothesis testing problem, which includes sensor mobility, and solve it using a point-based approximate POMDP algorithm. The validity of our approach is verified through simulation and experiments with real scenes captured by a kinect sensor. The results suggest a significant improvement over static object detection.",
"In this paper we present an algorithm for multi-view object and pose recognition. In contrast to the existing work that focuses on modeling the object using the images only; we exploit the information on the image sequences and their relative 3D positions, because under many circumstances the movements between multi-views are accessible and can be controlled by the users. Thus we can calculate the next optimal place to take a picture based on previous behaviors, and perform the object pose recognition based on these obtained images. The proposed method uses HOG (Histograms of Oriented Gradient) and SVM (Support Vector Machine) as the basic object pose classifier. To learn the optimal action, this algorithm makes use of a boosting method to find the best sequence across the multi-views. Then it exploits the relation between the different view points using the Adaboost algorithm. The experiment shows that the learned sequence improves recognition performance in early steps compared to a randomly selected sequence, and the proposed algorithm can achieve a better recognition accuracy than the baseline method.",
"One of the central problems in computer vision is the detection of semantically important objects and the estimation of their pose. Most of the work in object detection has been based on single image processing and its performance is limited by occlusions and ambiguity in appearance and geometry. This paper proposes an active approach to object detection by controlling the point of view of a mobile depth camera. When an initial static detection phase identifies an object of interest, several hypotheses are made about its class and orientation. The sensor then plans a sequence of views, which balances the amount of energy used to move with the chance of identifying the correct hypothesis. We formulate an active hypothesis testing problem, which includes sensor mobility, and solve it using a point-based approximate POMDP algorithm. The validity of our approach is verified through simulation and real-world experiments with the PR2 robot. The results suggest that our approach outperforms the widely-used greedy view point selection and provides a significant improvement over static object detection.",
"This paper introduces an information-based methodology for view selection that actively exploits prior knowledge about the objects to be found in a scene. The methodology is used to implement an active recognition strategy which effectively puts prior constraints from the object database into the gaze control (planning) loop. Theoretical results are presented and discussed along with promising experimental data.",
"We introduce a formalism for optimal sensor parameter selection for iterative state estimation in static systems. Our optimality criterion is the reduction of uncertainty in the state estimation process, rather than an estimator-specific metric (e.g., minimum mean squared estimate error). The claim is that state estimation becomes more reliable if the uncertainty and ambiguity in the estimation process can be reduced. We use Shannon's information theory to select information-gathering actions that maximize mutual information, thus optimizing the information that the data conveys about the true state of the system. The technique explicitly takes into account the a priori probabilities governing the computation of the mutual information. Thus, a sequential decision process can be formed by treating the a priori probability at a certain time step in the decision process as the a posteriori probability of the previous time step. We demonstrate the benefits of our approach in an object recognition application using an active camera for sequential gaze control and viewpoint selection. We describe experiments with discrete and continuous density representations that suggest the effectiveness of the approach.",
"Recognizing and manipulating objects is an important task for mobile robots performing useful services in everyday environments. In this paper, we develop a system that enables a robot to grasp an object and to move it in front of its depth camera so as to build a 3D surface model of the object. We derive an information gain based variant of the next best view algorithm in order to determine how the manipulator should move the object in front of the camera. By considering occlusions caused by the robot manipulator, our technique also determines when and how the robot should re-grasp the object in order to build a complete model."
]
} |
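The next-best-view idea discussed in the entry above — choose the view whose observation carries the most mutual information with the object class — can be sketched for a discrete observation model. This is an illustrative toy sketch, not code from any cited system; the prior and likelihood tables below are assumed inputs.

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def next_best_view(prior, obs_model, views):
    """Pick the view v maximizing I(class; observation | v).

    prior     -- P(c), a list over object classes
    obs_model -- obs_model[v][c][o] = P(o | c, v)
    views     -- candidate view indices into obs_model
    """
    def mutual_info(v):
        # I(C; O | v) = H(C) - E_o[ H(C | o, v) ]
        expected_posterior_entropy = 0.0
        for o in range(len(obs_model[v][0])):
            p_o = sum(prior[c] * obs_model[v][c][o] for c in range(len(prior)))
            if p_o == 0:
                continue
            posterior = [prior[c] * obs_model[v][c][o] / p_o
                         for c in range(len(prior))]
            expected_posterior_entropy += p_o * entropy(posterior)
        return entropy(prior) - expected_posterior_entropy
    return max(views, key=mutual_info)
```

With a uniform two-class prior, a view whose likelihoods differ across classes wins over an uninformative one; a sequential policy would re-run the selection after Bayes-updating the prior with each new observation.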
1406.5383 | 1588027929 | We present a simple noise-robust margin-based active learning algorithm to find homogeneous (passing the origin) linear separators and analyze its error convergence when labels are corrupted by noise. We show that when the imposed noise satisfies the Tsybakov low noise condition (Mammen, Tsybakov, and others 1999; Tsybakov 2004) the algorithm is able to adapt to unknown level of noise and achieves optimal statistical rate up to poly-logarithmic factors. We also derive lower bounds for margin based active learning algorithms under Tsybakov noise conditions (TNC) for the membership query synthesis scenario (Angluin 1988). Our result implies lower bounds for the stream based selective sampling scenario (Cohn 1990) under TNC for some fairly simple data distributions. Quite surprisingly, we show that the sample complexity cannot be improved even if the underlying data distribution is as simple as the uniform distribution on the unit ball. Our proof involves the construction of a well separated hypothesis set on the d-dimensional unit ball along with carefully designed label distributions for the Tsybakov noise condition. Our analysis might provide insights for other forms of lower bounds as well. | A margin-based active learning algorithm for learning homogeneous linear separators was proposed in @cite_8 with its sample complexity analyzed under the Tsybakov low noise condition for the uniform distribution on the unit ball. The algorithm was later extended to log-concave data distributions @cite_1 . Recently, @cite_11 introduced a disagreement-based active learning algorithm that works for arbitrary underlying data distributions. For all of the above-mentioned algorithms, given data dimension @math and query budget @math , the excess risk @math is upper bounded by (in the @math notation we omit dependency on failure probability @math and polylogarithmic dependency on @math and @math )
@math , where @math is a parameter characterizing the noise level in TNC (cf. Eq. in Section ). These algorithms are not noise-adaptive; that is, the selection of key algorithm parameters depends on the noise level @math , which may not be available in practice. | {
"cite_N": [
"@cite_11",
"@cite_1",
"@cite_8"
],
"mid": [
"2964242659",
"2144324158",
"2128518360"
],
"abstract": [
"This work establishes distribution-free upper and lower bounds on the minimax label complexity of active learning with general hypothesis classes, under various noise models. The results reveal a number of surprising facts. In particular, under the noise model of Tsybakov (2004), the minimax label complexity of active learning with a VC class is always asymptotically smaller than that of passive learning, and is typically signi_cantly smaller than the best previously-published upper bounds in the active learning literature. In high-noise regimes, it turns out that all active learning problems of a given VC dimension have roughly the same minimax label complexity, which contrasts with well-known results for bounded noise. In low-noise regimes, we find that the label complexity is well-characterized by a simple combinatorial complexity measure we call the star number. Interestingly, we find that almost all of the complexity measures previously explored in the active learning literature have worst-case values exactly equal to the star number. We also propose new active learning strategies that nearly achieve these minimax label complexities.",
"We provide new results concerning label efficient, polynomial time, passive and active learning of linear separators. We prove that active learning provides an exponential improvement over PAC (passive) learning of homogeneous linear separators under nearly log-concave distributions. Building on this, we provide a computationally efficient PAC algorithm with optimal (up to a constant factor) sample complexity for such problems. This resolves an open question concerning the sample complexity of efficient PAC algorithms under the uniform distribution in the unit ball. Moreover, it provides the first bound for a polynomial-time PAC algorithm that is tight for an interesting infinite class of hypothesis functions under a general and natural class of data-distributions, providing significant progress towards a longstanding open question. We also provide new bounds for active and passive learning in the case that the data might not be linearly separable, both in the agnostic case and and under the Tsybakov low-noise condition. To derive our results, we provide new structural results for (nearly) log-concave distributions, which might be of independent interest as well.",
"We present a framework for margin based active learning of linear separators. We instantiate it for a few important cases, some of which have been previously considered in the literature.We analyze the effectiveness of our framework both in the realizable case and in a specific noisy setting related to the Tsybakov small noise condition."
]
} |
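The margin-based strategy described in the entry above — fit a homogeneous linear separator, then request labels only for points inside a band around it that shrinks between rounds — can be sketched in 2-D. This is a simplified illustration, not any cited algorithm: the averaging-based fit, the band-halving schedule, and the noiseless oracle are all assumptions made for brevity.

```python
import math
import random

def margin_based_active_learning(points, oracle, rounds=6, batch=40):
    """Sketch of margin-based active learning for a homogeneous linear
    separator in 2-D.  Each round: (1) fit a unit normal w from the
    labelled pool, (2) request labels only for points in the band
    |<w, x>| <= margin, (3) halve the margin.
    """
    pool = list(points)
    labelled = [(x, oracle(x)) for x in random.sample(pool, batch)]
    w = (1.0, 0.0)
    margin = 1.0
    for _ in range(rounds):
        # crude fit: the average of y * x points along the separator normal
        sx = sum(y * x[0] for x, y in labelled)
        sy = sum(y * x[1] for x, y in labelled)
        norm = math.hypot(sx, sy) or 1.0
        w = (sx / norm, sy / norm)
        # query labels only inside the current margin band
        band = [x for x in pool if abs(w[0] * x[0] + w[1] * x[1]) <= margin]
        if not band:
            break
        for x in random.sample(band, min(batch, len(band))):
            labelled.append((x, oracle(x)))
        margin /= 2  # shrink the band between rounds
    return w
```

Concentrating queries near the current decision boundary is what yields the label savings over passive learning; the noise-adaptive question studied in the paper is how to shrink the band without knowing the TNC parameter.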
1406.5383 | 1588027929 | We present a simple noise-robust margin-based active learning algorithm to find homogeneous (passing the origin) linear separators and analyze its error convergence when labels are corrupted by noise. We show that when the imposed noise satisfies the Tsybakov low noise condition (Mammen, Tsybakov, and others 1999; Tsybakov 2004) the algorithm is able to adapt to unknown level of noise and achieves optimal statistical rate up to poly-logarithmic factors. We also derive lower bounds for margin based active learning algorithms under Tsybakov noise conditions (TNC) for the membership query synthesis scenario (Angluin 1988). Our result implies lower bounds for the stream based selective sampling scenario (Cohn 1990) under TNC for some fairly simple data distributions. Quite surprisingly, we show that the sample complexity cannot be improved even if the underlying data distribution is as simple as the uniform distribution on the unit ball. Our proof involves the construction of a well separated hypothesis set on the d-dimensional unit ball along with carefully designed label distributions for the Tsybakov noise condition. Our analysis might provide insights for other forms of lower bounds as well. | In @cite_4 a noise-robust disagreement-based algorithm was proposed for agnostic active learning. The analysis was further improved in @cite_12 by replacing the disagreement coefficient with a provably smaller quantity. However, their error bounds are slightly worse under our settings, as we discuss in Section . Also, in both analyses the desired accuracy @math is fixed, while in our setting the number of active queries @math is fixed. Under the one-dimensional threshold learning setting, @cite_6 proposed a noise-adaptive active learning algorithm inspired by recent developments of adaptive algorithms for stochastic convex optimization @cite_22 .
For multiple dimensions, it was shown recently in @cite_0 that a noise-robust variant of margin-based active learning achieves near optimal noise tolerance. The authors analyzed the maximum amount of adversarial noise an algorithm can tolerate under the constraints of constant excess risk and polylogarithmic sample complexity, which is equivalent to an exponential rate of error convergence. In contrast, we study the rate at which the excess risk (relative to the Bayes optimal classifier) converges to zero with the number of samples, which is not restricted to be polylogarithmic. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_6",
"@cite_0",
"@cite_12"
],
"mid": [
"2062896907",
"1556545893",
"1595255495",
"",
"2097373287"
],
"abstract": [
"We study the rates of convergence in generalization error achievable by active learning under various types of label noise. Additionally, we study the general problem of model selection for active learning with a nested hierarchy of hypothesis classes and propose an algorithm whose error rate provably converges to the best achievable error among classifiers in the hierarchy at a rate adaptive to both the complexity of the optimal classifier and the noise conditions. In particular, we state sufficient conditions for these rates to be dramatically faster than those achievable by passive learning.",
"We discuss non-Euclidean deterministic and stochastic algorithms for optimization problems with strongly and uniformly convex objectives. We provide accuracy bounds for the performance of these algorithms and design methods which are adaptive with respect to the parameters of strong or uniform convexity of the objective: in the case when the total number of iterations @math is fixed, their accuracy coincides, up to a logarithmic in @math factor with the accuracy of optimal algorithms.",
"Interesting theoretical associations have been established by recent papers between the fields of active learning and stochastic convex optimization due to the common role of feedback in sequential querying mechanisms. In this paper, we continue this thread in two parts by exploiting these relations for the first time to yield novel algorithms in both fields, further motivating the study of their intersection. First, inspired by a recent optimization algorithm that was adaptive to unknown uniform convexity parameters, we present a new active learning algorithm for one-dimensional thresholds that can yield minimax rates by adapting to unknown noise parameters. Next, we show that one can perform @math -dimensional stochastic minimization of smooth uniformly convex functions when only granted oracle access to noisy gradient signs along any coordinate instead of real-valued gradients, by using a simple randomized coordinate descent procedure where each line search can be solved by @math -dimensional active learning, provably achieving the same error convergence rate as having the entire real-valued gradient. Combining these two parts yields an algorithm that solves stochastic convex optimization of uniformly convex and smooth functions using only noisy gradient signs by repeatedly performing active learning, achieves optimal rates and is adaptive to all unknown convexity and smoothness parameters.",
"",
"We study agnostic active learning, where the goal is to learn a classifier in a pre-specified hypothesis class interactively with as few label queries as possible, while making no assumptions on the true function generating the labels. The main algorithms for this problem are disagreement-based active learning , which has a high label requirement, and margin-based active learning , which only applies to fairly restricted settings. A major challenge is to find an algorithm which achieves better label complexity, is consistent in an agnostic setting, and applies to general classification problems. In this paper, we provide such an algorithm. Our solution is based on two novel contributions -- a reduction from consistent active learning to confidence-rated prediction with guaranteed error, and a novel confidence-rated predictor."
]
} |
1406.5383 | 1588027929 | We present a simple noise-robust margin-based active learning algorithm to find homogeneous (passing the origin) linear separators and analyze its error convergence when labels are corrupted by noise. We show that when the imposed noise satisfies the Tsybakov low noise condition (Mammen, Tsybakov, and others 1999; Tsybakov 2004) the algorithm is able to adapt to unknown level of noise and achieves optimal statistical rate up to poly-logarithmic factors. We also derive lower bounds for margin based active learning algorithms under Tsybakov noise conditions (TNC) for the membership query synthesis scenario (Angluin 1988). Our result implies lower bounds for the stream based selective sampling scenario (Cohn 1990) under TNC for some fairly simple data distributions. Quite surprisingly, we show that the sample complexity cannot be improved even if the underlying data distribution is as simple as the uniform distribution on the unit ball. Our proof involves the construction of a well separated hypothesis set on the d-dimensional unit ball along with carefully designed label distributions for the Tsybakov noise condition. Our analysis might provide insights for other forms of lower bounds as well. | In terms of negative results, it is well-known that the @math upper bound is tight up to polylogarithmic factors. In particular, Theorem 4.3 in @cite_9 shows that for any stream-based active learning algorithm, there exists a distribution @math satisfying TNC such that the excess risk @math is lower bounded by @math . The marginal data distribution @math is constructed in an adversarial manner and it is unclear whether the same lower bound applies when @math is some simple (e.g., uniform or Gaussian) distribution. @cite_1 proved lower bounds for stream-based active learning under each log-concave data distribution. However, their proof only applies to the separable case and shows an exponential error convergence. 
In contrast, we consider Tsybakov noise settings with parameter @math , for which polynomial error convergence is expected @cite_9 . | {
"cite_N": [
"@cite_9",
"@cite_1"
],
"mid": [
"2056138823",
"2144324158"
],
"abstract": [
"Active learning is a protocol for supervised machine learning, in which a learning algorithm sequentially requests the labels of selected data points from a large pool of unlabeled data. This contrasts with passive learning, where the labeled data are taken at random. The objective in active learning is to produce a highly-accurate classifier, ideally using fewer labels than the number of random labeled data sufficient for passive learning to achieve the same. This article describes recent advances in our understanding of the theoretical benefits of active learning, and implications for the design of effective active learning algorithms. Much of the article focuses on a particular technique, namely disagreement-based active learning, which by now has amassed a mature and coherent literature. It also briefly surveys several alternative approaches from the literature. The emphasis is on theorems regarding the performance of a few general algorithms, including rigorous proofs where appropriate. However, the presentation is intended to be pedagogical, focusing on results that illustrate fundamental ideas, rather than obtaining the strongest or most general known theorems. The intended audience includes researchers and advanced graduate students in machine learning and statistics, interested in gaining a deeper understanding of the recent and ongoing developments in the theory of active learning.",
"We provide new results concerning label efficient, polynomial time, passive and active learning of linear separators. We prove that active learning provides an exponential improvement over PAC (passive) learning of homogeneous linear separators under nearly log-concave distributions. Building on this, we provide a computationally efficient PAC algorithm with optimal (up to a constant factor) sample complexity for such problems. This resolves an open question concerning the sample complexity of efficient PAC algorithms under the uniform distribution in the unit ball. Moreover, it provides the first bound for a polynomial-time PAC algorithm that is tight for an interesting infinite class of hypothesis functions under a general and natural class of data-distributions, providing significant progress towards a longstanding open question. We also provide new bounds for active and passive learning in the case that the data might not be linearly separable, both in the agnostic case and and under the Tsybakov low-noise condition. To derive our results, we provide new structural results for (nearly) log-concave distributions, which might be of independent interest as well."
]
} |
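The disagreement-based (CAL-style) strategy referenced throughout the entries above has a particularly transparent form for 1-D thresholds, where the surviving hypotheses form an interval and a label is requested only when the stream point falls inside it. This is an illustrative sketch under a noiseless oracle, not the analyzed algorithm from any of the cited works.

```python
import random

def cal_thresholds(xs, oracle, budget=20):
    """Sketch of disagreement-based (CAL-style) active learning for 1-D
    thresholds h_t(x) = sign(x - t).  The surviving thresholds form an
    interval (lo, hi]; a label is requested only for stream points inside
    it, i.e. points on which surviving hypotheses still disagree.
    """
    lo, hi = min(xs), max(xs)
    queries = 0
    stream = list(xs)
    random.shuffle(stream)
    for x in stream:
        if queries >= budget:
            break
        if not (lo < x < hi):
            continue  # all surviving thresholds agree on x: no query needed
        queries += 1
        if oracle(x) == 1:
            hi = min(hi, x)  # label +1 => threshold lies at or below x
        else:
            lo = max(lo, x)  # label -1 => threshold lies above x
    return (lo + hi) / 2, queries
```

Because every query lands in the disagreement interval, each one shrinks the version space, which is the mechanism behind the exponential savings in the separable case; under Tsybakov noise the same skeleton needs repeated labels and confidence bounds before discarding hypotheses.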
1406.5383 | 1588027929 | We present a simple noise-robust margin-based active learning algorithm to find homogeneous (passing the origin) linear separators and analyze its error convergence when labels are corrupted by noise. We show that when the imposed noise satisfies the Tsybakov low noise condition (Mammen, Tsybakov, and others 1999; Tsybakov 2004) the algorithm is able to adapt to unknown level of noise and achieves optimal statistical rate up to poly-logarithmic factors. We also derive lower bounds for margin based active learning algorithms under Tsybakov noise conditions (TNC) for the membership query synthesis scenario (Angluin 1988). Our result implies lower bounds for the stream based selective sampling scenario (Cohn 1990) under TNC for some fairly simple data distributions. Quite surprisingly, we show that the sample complexity cannot be improved even if the underlying data distribution is as simple as the uniform distribution on the unit ball. Our proof involves the construction of a well separated hypothesis set on the d-dimensional unit ball along with carefully designed label distributions for the Tsybakov noise condition. Our analysis might provide insights for other forms of lower bounds as well. | @cite_2 analyzed the minimax rate of active learning under the membership query synthesis model (cf. Section ). Their analysis implies a lower bound for the stream-based setting when the data distribution is uniform or bounded from below (cf. Proposition and ). However, their analysis focuses on the nonparametric setting where the Bayes classifier @math is not assumed to have a parametric form such as linear. Consequently, there is a polynomial gap between their lower bound and the upper bound for linear classifiers.
"cite_N": [
"@cite_2"
],
"mid": [
"2106447856"
],
"abstract": [
"This paper analyzes the potential advantages and theoretical challenges of \"active learning\" algorithms. Active learning involves sequential sampling procedures that use information gleaned from previous samples in order to focus the sampling and accelerate the learning process relative to \"passive learning\" algorithms, which are based on nonadaptive (usually random) samples. There are a number of empirical and theoretical results suggesting that in certain situations active learning can be significantly more effective than passive learning. However, the fact that active learning algorithms are feedback systems makes their theoretical analysis very challenging. This paper aims to shed light on achievable limits in active learning. Using minimax analysis techniques, we study the achievable rates of classification error convergence for broad classes of distributions characterized by decision boundary regularity and noise conditions. The results clearly indicate the conditions under which one can expect significant gains through active learning. Furthermore, we show that the learning rates derived are tight for \"boundary fragment\" classes in d-dimensional feature spaces when the feature marginal density is bounded from above and below."
]
} |
1406.5691 | 2952696374 | We present a first step towards a framework for defining and manipulating normative documents or contracts described as Contract-Oriented (C-O) Diagrams. These diagrams provide a visual representation for such texts, giving the possibility to express a signatory's obligations, permissions and prohibitions, with or without timing constraints, as well as the penalties resulting from the non-fulfilment of a contract. This work presents a CNL for verbalising C-O Diagrams, a web-based tool allowing editing in this CNL, and another for visualising and manipulating the diagrams interactively. We then show how these proof-of-concept tools can be used by applying them to a small example. | may be seen as a generalisation of @cite_0 @cite_13 @cite_11 in terms of expressivity. On the other hand, has three different formal semantics: an encoding into the @math -calculus, a trace semantics, and a Kripke-semantics. In a previous work, introduced a CNL for in the framework @cite_6 . allows for the verification of conflicts (contradictory obligations, permissions and prohibitions) in normative texts using the CLAN tool @cite_10 . The biggest difference between and the current work, besides the underlying logical formalism, is that we treat agents and actions as linguistic categories, and not as simple strings. This enables better agreement in the CNL which lends itself to more natural verbalisations, as well as making it easier to translate the CNL into other natural languages. We also introduce the special treatment of two-item co-ordination, and have a more general handling of lists as required by our more expressive target language. | {
"cite_N": [
"@cite_13",
"@cite_6",
"@cite_0",
"@cite_10",
"@cite_11"
],
"mid": [
"2558511176",
"2141573378",
"2109740450",
"",
"2150436779"
],
"abstract": [
"This paper presents a new version of the CL contract speci- fication language. CL combines deontic logic with propositional dynamic logic but it applies the modalities exclusively over structured actions. CL features synchronous actions, conflict relation, and an action nega- tion operation. The CL version that we present here is more expressive and has a cleaner semantics than its predecessor. We give a direct seman- tics for CL in terms of normative structures. We show that CL respects several desired properties from legal contracts and is decidable. We relate this semantics with a trace semantics of CL which we used for run-time monitoring contracts.",
"In this paper we are concerned with the analysis of normative conflicts, or the detection of conflicting obligations, permissions and prohibitions in normative texts written in a Controlled Natural Language (CNL). For this we present AnaCon, a proof-of-concept system where normative texts written in CNL are automatically translated into the formal language CL using the Grammatical Framework (GF). Such CL expressions are then analysed for normative conflicts by the CLAN tool, which gives counter-examples in cases where conflicts are found. The framework also uses GF to give a CNL version of the counter-example, helping the user to identify the conflicts in the original text. We detail the application of AnaCon to two case studies and discuss the effectiveness of our approach.",
"In this paper we propose a formal language for writing electronic contracts, based on the deontic notions of obligation, permission, and prohibition. We take an ought-to-do approach, where deontic operators are applied to actions instead of state-of-affairs. We propose an extension of the µ-calculus in order to capture the intuitive meaning of the deontic notions and to express concurrent actions. We provide a translation of the contract language into the logic, the semantics of which faithfully captures the meaning of obligation, permission and prohibition. We also show how our language captures most of the intuitive desirable properties of electronic contracts, as well as how it avoids most of the classical paradoxes of deontic logic. We finally show its applicability on a contract example.",
"",
"We present a dynamic deontic logic for specifying and reasoning about complex contracts. The concepts that our contract logic CL captures are drawn from legal contracts, as we consider that these are more general and expressive than what is usually found in computer science (like in software contracts, web services specifications, or communication protocols). CL is intended to be used in specifying complex contracts found in computer science. This influences many of the design decisio ns behind CL. We adopt an ought-to-do approach to deontic logic and apply the deontic modalities exclusively over complex actions. On top, we add the modalities of dynamic logic so to be able to reason about what happens after an action is performed. CL can reason about regular synchronous actions capturing the notion of actions done at the same time. CL incorporates the notions of contrary-to-duty and contrary-to-p rohibition by attaching to the deontic modalities explicitly a reparation which is to be en forced in case of violations. Results of decidability and tree model property are given as well as specific properties for the modalities."
]
} |
1406.5691 | 2952696374 | We present a first step towards a framework for defining and manipulating normative documents or contracts described as Contract-Oriented (C-O) Diagrams. These diagrams provide a visual representation for such texts, giving the possibility to express a signatory's obligations, permissions and prohibitions, with or without timing constraints, as well as the penalties resulting from the non-fulfilment of a contract. This work presents a CNL for verbalising C-O Diagrams, a web-based tool allowing editing in this CNL, and another for visualising and manipulating the diagrams interactively. We then show how these proof-of-concept tools can be used by applying them to a small example. | Attempto Controlled English (ACE) @cite_9 is a controlled natural language for universal domain-independent use. It comes with a parser to discourse representation structures and a first-order reasoner RACE @cite_4 . The biggest distinction here is that our language is specifically tailored for the description of normative texts, whereas ACE is generic. ACE also attempts to perform full sentence analysis, which is not necessary in our case since we are strictly limited to the semantic expressivity of the formalism. | {
"cite_N": [
"@cite_9",
"@cite_4"
],
"mid": [
"2423178067",
"2197103664"
],
"abstract": [
"Attempto Controlled English (ACE) is a language specifically designed to write specifications. ACE is a controlled natural language, i.e. a subset of English with a domain specific vocabulary and a restricted grammar in the form of a small set of construction and interpretation principles. This means that all ACE sentences are correct English, but that not all English sentences are allowed in ACE. The restriction of full natural language to a controlled subset is essential for ACE to be suitable for specification purposes. The main goals of this restriction are: to support the writing of precise specifications, to reduce ambiguity and vagueness inherent in full natural language, to encourage domain specialists to deliberately choose a clear and unambiguous writing style so that readers of a specification understand it in the same way as the writer, to make specifications computer processable, to render specifications unambiguously translatable into formal specification languages, particularly into first-order logic, and to make specifications executable. In brief, ACE allows domain specialists to express specifications in familiar natural language and to combine this with the rigor of formal specification languages. ACE has been used to specify a simple automatic teller machine, Kemmerer''s library data base problem, Schubert''s steamroller, and a number of smaller problems. Recently, ACE has also been used as the input language of a theorem prover, and first attempts have been made to interface it to a program synthesiser. Clearly, ACE can be adapted and extended for other purposes requiring precise input, e.g. writing technical documentation or updating databases. The use of ACE presupposes only basic knowledge of English grammar. Note however, that because ACE is a controlled natural language not all standard grammatical notions are directly applicable or suitable for the description of the ACE grammar. 
Some grammatical notions have a restricted meaning in ACE, while other notions are especially coined for an effective description of the language. The divergences are kept to a minimum so that ACE can be easily learned extending basic grammatical knowledge.",
"RACE is a first-order reasoner for Attempto Controlled English (ACE) that can show the (in-) consistency of a set of ACE axioms, prove ACE theorems from ACE axioms and answer ACE queries from ACE axioms. In each case RACE gives a proof justification in ACE and full English. This paper is a system description of RACE sketching its structure, its implementation, its operation and its user interface. The power and the limitations of RACE are demonstrated and discussed by concrete examples."
]
} |
1406.5105 | 1979299419 | We investigate the dynamic behavior of the stationary random process defined by a central complex Wishart matrix @math as it varies along a certain dimension @math . We characterize the second-order joint cumulative distribution function (cdf) of the largest eigenvalue, and the second-order joint cdf of the smallest eigenvalue of this matrix. We show that both cdfs can be expressed in exact closed-form in terms of a finite number of well-known special functions in the context of communication theory. As a direct application, we investigate the dynamic behavior of the parallel channels associated with multiple-input multiple-output (MIMO) systems in the presence of Rayleigh fading. Studying the complex random matrix that defines the MIMO channel, we characterize the second-order joint cdf of the signal-to-noise ratio (SNR) for the best and worst channels. We use these results to study the rate of change of MIMO parallel channels, using different performance metrics. For a given value of the MIMO channel correlation coefficient, we observe how the SNR associated with the best parallel channel changes slower than the SNR of the worst channel. This different dynamic behavior is much more appreciable when the number of transmit ( @math ) and receive ( @math ) antennas is similar. However, as @math is increased while keeping @math fixed, we see how the best and worst channels tend to have a similar rate of change. | Since the seminal work by Wishart @cite_5 , random matrix theory has found application in very diverse fields like physics @cite_40 , neuroscience @cite_27 and many others @cite_33 . For instance, random matrix processes are useful in econometrics to study the stock volatility in portfolio management @cite_7 @cite_41 ; in immunology, random matrix theory has been used to design immunogens targeted for rapidly mutating viruses @cite_26 . | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_7",
"@cite_41",
"@cite_40",
"@cite_27",
"@cite_5"
],
"mid": [
"2108823152",
"1813708725",
"1967238670",
"2084478869",
"1963596092",
"1991630689",
"1973335304"
],
"abstract": [
"Cellular immune control of HIV is mediated, in part, by induction of single amino acid mutations that reduce viral fitness, but compensatory mutations limit this effect. Here, we sought to determine if higher order constraints on viral evolution exist, because some coordinately linked combinations of mutations may hurt viability. Immune targeting of multiple sites in such a multidimensionally conserved region might render the virus particularly vulnerable, because viable escape pathways would be greatly restricted. We analyzed available HIV sequences using a method from physics to reveal distinct groups of amino acids whose mutations are collectively coordinated (“HIV sectors”). From the standpoint of mutations at individual sites, one such group in Gag is as conserved as other collectively coevolving groups of sites in Gag. However, it exhibits higher order conservation indicating constraints on the viability of viral strains with multiple mutations. Mapping amino acids from this group onto protein structures shows that combined mutations likely destabilize multiprotein structural interactions critical for viral function. Persons who durably control HIV without medications preferentially target the sector in Gag predicted to be most vulnerable. By sequencing circulating viruses from these individuals, we find that individual mutations occur with similar frequency in this sector as in other targeted Gag sectors. However, multiple mutations within this sector are very rare, indicating previously unrecognized multidimensional constraints on HIV evolution. Targeting such regions with higher order evolutionary constraints provides a novel approach to immunogen design for a vaccine against HIV and other rapidly mutating viruses.",
"I INTRODUCTION II PROPERTIES OF RANDOM MATRIX THEORY III APPLICATIONS OF RANDOM MATRIX THEORY",
"The Wishart Autoregressive (WAR) process is a dynamic model for time series of multivariate stochastic volatility. The WAR naturally accommodates the positivity and symmetry of volatility matrices and provides closed-form non-linear forecasts. The estimation of the WAR is straighforward, as it relies on standard methods such as the Method of Moments and Maximum Likelihood. For illustration, the WAR is applied to a sequence of intraday realized volatility-covolatility matrices from the Toronto Stock Market (TSX).",
"We study the realized variance of sample minimum variance portfolios of arbitrarily high dimension. We consider the use of covariance matrix estimators based on shrinkage and weighted sampling. For such improved portfolio implementations, the otherwise intractable problem of characterizing the realized variance is tackled here by analyzing the asymptotic convergence of the risk measure. Rather than relying on less insightful classical asymptotics, we manage to deliver results in a practically more meaningful limiting regime, where the number of assets remains comparable in magnitude to the sample size. Under this framework, we provide accurate estimates of the portfolio realized risk in terms of the model parameters and the underlying investment scenario, i.e., the unknown asset return covariance structure. In-sample approximations in terms of only the available data observations are known to considerably underestimate the realized portfolio risk. If not corrected, these deviations might lead in practice to inaccurate and overly optimistic investment decisions. Therefore, along with the asymptotic analysis, we also provide a generalized consistent estimator of the out-of-sample portfolio variance that only depends on the set of observed returns. Based on this estimator, the model free parameters, i.e., the sample weighting coefficients and the shrinkage intensity defining the minimum variance portfolio implementation, can be optimized so as to minimize the realized variance while taken into account the effect of estimation risk. Our results are based on recent contributions in the field of random matrix theory. Numerical simulations based on both synthetic and real market data validate our theoretical findings under a non-asymptotic, finite-dimensional setting. Finally, our proposed portfolio estimator is shown to consistently outperform a widely applied benchmark implementation.",
"Abstract We review the development of random-matrix theory (RMT) during the last fifteen years. We emphasize both the theoretical aspects, and the application of the theory to a number of fields. These comprise chaotic and disordered systems, the localization problem, many-body quantum systems, the Calogero-Sutherland model, chiral symmetry breaking in QCD, and quantum gravity in two dimensions. The review is preceded by a brief historical survey of the developments of RMT and of localization theory since their inception. We emphasize the concepts common to the above-mentioned fields as well as the great diversity of RMT. In view of the universality of RMT, we suggest that the current development signals the emergence of a new “statistical mechanics”: Stochasticity and general symmetry requirements lead to universal laws not based on dynamical principles.",
"Center for Neurobiology and Behavior, Columbia University, College of Physicians and Surgeons, New York, New York 10032, USA (Received 18 July 2006; published 2 November 2006). The dynamics of neural networks is influenced strongly by the spectrum of eigenvalues of the matrix describing their synaptic connectivity. In large networks, elements of the synaptic connectivity matrix can be chosen randomly from appropriate distributions, making results from random matrix theory highly relevant. Unfortunately, classic results on the eigenvalue spectra of random matrices do not apply to synaptic connectivity matrices because of the constraint that individual neurons are either excitatory or inhibitory. Therefore, we compute eigenvalue spectra of large random matrices with excitatory and inhibitory columns drawn from distributions with different means and equal or different variances.",
""
]
} |
1406.5105 | 1979299419 | We investigate the dynamic behavior of the stationary random process defined by a central complex Wishart matrix @math as it varies along a certain dimension @math . We characterize the second-order joint cumulative distribution function (cdf) of the largest eigenvalue, and the second-order joint cdf of the smallest eigenvalue of this matrix. We show that both cdfs can be expressed in exact closed-form in terms of a finite number of well-known special functions in the context of communication theory. As a direct application, we investigate the dynamic behavior of the parallel channels associated with multiple-input multiple-output (MIMO) systems in the presence of Rayleigh fading. Studying the complex random matrix that defines the MIMO channel, we characterize the second-order joint cdf of the signal-to-noise ratio (SNR) for the best and worst channels. We use these results to study the rate of change of MIMO parallel channels, using different performance metrics. For a given value of the MIMO channel correlation coefficient, we observe how the SNR associated with the best parallel channel changes slower than the SNR of the worst channel. This different dynamic behavior is much more appreciable when the number of transmit ( @math ) and receive ( @math ) antennas is similar. However, as @math is increased while keeping @math fixed, we see how the best and worst channels tend to have a similar rate of change. | The characterization of the eigenvalues of the matrix @math has been used to study the fundamental performance limits of MIMO systems @cite_8 @cite_36 ; specifically, the ordered eigenvalues of @math characterize the parallel eigenchannels used to achieve multiplexing gain, and, in particular the largest eigenvalue of @math determines the diversity gain of the system. | {
"cite_N": [
"@cite_36",
"@cite_8"
],
"mid": [
"2130509920",
"2133475491"
],
"abstract": [
"We investigate the use of multiple transmitting and or receiving antennas for single user communications over the additive Gaussian channel with and without fading. We derive formulas for the capacities and error exponents of such channels, and describe computational procedures to evaluate such formulas. We show that the potential gains of such multi-antenna systems over single-antenna systems is rather large under independence assumptions for the fades and noises at different receiving antennas.",
"This paper addresses digital communication in a Rayleigh fading environment when the channel characteristic is unknown at the transmitter but is known (tracked) at the receiver. Inventing a codec architecture that can realize a significant portion of the great capacity promised by information theory is essential to a standout long-term position in highly competitive arenas like fixed and indoor wireless. Use (n T , n R ) to express the number of antenna elements at the transmitter and receiver. An (n, n) analysis shows that despite the n received waves interfering randomly, capacity grows linearly with n and is enormous. With n = 8 at 1 outage and 21-dB average SNR at each receiving element, 42 b s Hz is achieved. The capacity is more than 40 times that of a (1, 1) system at the same total radiated transmitter power and bandwidth. Moreover, in some applications, n could be much larger than 8. In striving for significant fractions of such huge capacities, the question arises: Can one construct an (n, n) system whose capacity scales linearly with n, using as building blocks n separately coded one-dimensional (1-D) subsystems of equal capacity? With the aim of leveraging the already highly developed 1-D codec technology, this paper reports just such an invention. In this new architecture, signals are layered in space and time as suggested by a tight capacity bound."
]
} |
1406.5105 | 1979299419 | We investigate the dynamic behavior of the stationary random process defined by a central complex Wishart matrix @math as it varies along a certain dimension @math . We characterize the second-order joint cumulative distribution function (cdf) of the largest eigenvalue, and the second-order joint cdf of the smallest eigenvalue of this matrix. We show that both cdfs can be expressed in exact closed-form in terms of a finite number of well-known special functions in the context of communication theory. As a direct application, we investigate the dynamic behavior of the parallel channels associated with multiple-input multiple-output (MIMO) systems in the presence of Rayleigh fading. Studying the complex random matrix that defines the MIMO channel, we characterize the second-order joint cdf of the signal-to-noise ratio (SNR) for the best and worst channels. We use these results to study the rate of change of MIMO parallel channels, using different performance metrics. For a given value of the MIMO channel correlation coefficient, we observe how the SNR associated with the best parallel channel changes slower than the SNR of the worst channel. This different dynamic behavior is much more appreciable when the number of transmit ( @math ) and receive ( @math ) antennas is similar. However, as @math is increased while keeping @math fixed, we see how the best and worst channels tend to have a similar rate of change. | When the entries of @math are distributed as complex Gaussian random variables, then @math is said to follow a complex Wishart (CW) distribution @cite_5 . The eigenvalue statistics of CW matrices have been studied in depth in the literature, both for central @cite_24 @cite_35 @cite_2 @cite_21 and non-central @cite_1 @cite_0 @cite_32 @cite_23 @cite_18 @cite_9 @cite_29 Wishart distributions. 
These results can be seen as a first-order characterization of a CW random process, and can be used to derive useful performance metrics such as the outage probability or the channel capacity. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_29",
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_5"
],
"mid": [
"1974755392",
"2131570846",
"2964162019",
"1995012902",
"2132158614",
"2168792117",
"1991621115",
"1999057764",
"2144100131",
"2132291816",
"",
"1973335304"
],
"abstract": [
"Given a random matrix, what condition number should be expected? This paper presents a proof that for real or complex @math matrices with elements from a standard normal distribution, the expected value of the log of the 2-norm condition number is asymptotic to @math as @math . In fact, it is roughly @math for real matrices and @math for complex matrices as @math . The paper discusses how the distributions of the condition numbers behave for large n for real or complex and square or rectangular matrices. The exact distributions of the condition numbers of @math matrices are also given.Intimately related to this problem is the distribution of the eigenvalues of Wishart matrices. This paper studies in depth the largest and smallest eigenvalues, giving exact distributions in some cases. It also describes the behavior of all the eigenvalues, giving an exact formula for the expected characteristic polynomial.",
"Random matrices play a crucial role in the design and analysis of multiple-input multiple-output (MIMO) systems. In particular, performance of MIMO systems depends on the statistical properties of a subclass of random matrices known as Wishart when the propagation environment is characterized by Rayleigh or Rician fading. This paper focuses on the stochastic analysis of this class of matrices and proposes a general methodology to evaluate some multiple nested integrals of interest. With this methodology we obtain a closed-form expression for the joint probability density function of k consecutive ordered eigenvalues and, as a special case, the PDF of the lscrth ordered eigenvalue of Wishart matrices. The distribution of the largest eigenvalue can be used to analyze the performance of MIMO maximal ratio combining systems. The PDF of the smallest eigenvalue can be used for MIMO antenna selection techniques. Finally, the PDF the kth largest eigenvalue finds applications in the performance analysis of MIMO singular value decomposition systems.",
"Let W be a correlated complex non-central Wishart matrix defined through W=X^HX, where X is an nxm(n>=m) complex Gaussian with non-zero mean @U and non-trivial covariance @S. We derive exact expressions for the cumulative distribution functions (c.d.f.s) of the extreme eigenvalues (i.e., maximum and minimum) of W for some particular cases. These results are quite simple, involving rapidly converging infinite series, and apply for the practically important case where @U has rank one. We also derive analogous results for a certain class of gamma-Wishart random matrices, for which @U^H@U follows a matrix-variate gamma distribution. The eigenvalue distributions in this paper have various applications to wireless communication systems, and arise in other fields such as econometrics, statistical physics, and multivariate statistics.",
"We derive efficient recursive formulas giving the exact distribution of the largest eigenvalue for finite dimensional real Wishart matrices and for the Gaussian Orthogonal Ensemble (GOE). In comparing the exact distribution with the limiting distribution of large random matrices, we also found that the Tracy–Widom law can be approximated by a properly scaled and shifted gamma distribution, with great accuracy for the values of common interest in statistical applications.",
"This paper extends Khatri (1964, 1969) distribution of the largest eigenvalue of central complex Wishart matrices to the noncentral case. It then applies the resulting new statistical results to obtain closed-form expressions for the outage probability of multiple-input-multiple-output (MIMO) systems employing maximal ratio combining (known also as \"beamforming\" systems) and operating over Rician-fading channels. When applicable these expressions are compared with special cases previously reported in the literature dealing with the performance of (1) MIMO systems over Rayleigh-fading channels and (2) single-input-multiple-output (SIMO) systems over Rician-fading channels. As a double check these analytical results are validated by Monte Carlo simulations and as an illustration of the mathematical formalism some numerical examples for particular cases of interest are plotted and discussed. These results show that, given a fixed number of total antenna elements and under the same scattering condition (1) SIMO systems are equivalent to multiple-input-single-output systems and (2) it is preferable to distribute the number of antenna elements evenly between the transmitter and the receiver for a minimum outage probability performance.",
"This paper analyzes MIMO systems with multichannel beamforming in Ricean fading. Our results apply to a wide class of multichannel systems which transmit on the eigenmodes of the MIMO channel. We first present new closed-form expressions for the marginal ordered eigenvalue distributions of complex noncentral Wishart matrices. These are used to characterize the statistics of the signal to noise ratio (SNR) on each eigenmode. Based on this, we present exact symbol error rate (SER) expressions. We also derive closed-form expressions for the diversity order, array gain, and outage probability. We show that the global SER performance is dominated by the subchannel corresponding to the minimum channel singular value. We also show that, at low outage levels, the outage probability varies inversely with the Ricean A*-factor for cases where transmission is only on the most dominant subchannel (i.e. a singlechannel beamforming system). Numerical results are presented to validate the theoretical analysis.",
"To evaluate the unitary integrals, such as the well-known Harish-Chandra-Itzykson-Zuber integral, character expansions were developed by Balantekin, where the matrix integrand is a group member; i.e., a square matrix with a nonzero determinant. Recently, this method has been exploited to derive the joint eigenvalue distributions of the Wishart matrices; i.e., HH* where H is the complex Gaussian random channel matrix of a multiple-input multiple-output (MIMO) system. The joint eigenvalue distributions are used to calculate the moment generating function of the mutual information (ergodic capacity) of a MIMO channel. In this paper, we show that the previous integration framework presented in the literature is not correct, and results in incorrect joint eigenvalue distributions for the Ricean and full-correlated Rayleigh MIMO channels. We develop a new framework to apply the character expansions for integrations over the unitary group, involving general rectangular complex matrices in the integrand. We derive the correct distribution functions and use them to obtain the capacity of the Ricean and correlated Rayleigh MIMO systems in a unified and straightforward approach. The integration technique proposed in this paper is general enough to be used for other unitary integrals in engineering, mathematics, and physics.",
"",
"This paper characterizes the eigenvalue distributions of full-rank Hermitian matrices generated from a set of independent (non)zero-mean proper complex Gaussian random vectors with a scaled-identity covariance matrix. More specifically, the joint and marginal cumulative distribution function (CDF) of any subset of unordered eigenvalues of the so-called complex (non)central Wishart matrices, as well as new simple and tractable expressions for their joint probability density function (PDF), are derived in terms of a finite sum of determinants. As corollaries to these new results, explicit expressions for the statistics of the smallest and largest eigenvalues, of (non)central Wishart matrices, can be easily obtained. Moreover, capitalizing on the foregoing distributions, it becomes possible to evaluate exactly the mean, variance, and other higher order statistics such as the skewness and kurtosis of the random channel capacity, in the case of uncorrelated multiple-input multiple-output (MIMO) Ricean and Rayleigh fading channels. Doing so bridges the gap between Telatar's initial approach for evaluating the average MIMO channel capacity (Telatar, 1999), and the subsequently widely adopted moment generating function (MGF) approach, thereby setting the basis for a PDF-based framework for characterizing the capacity statistics of MIMO Ricean and Rayleigh fading channels.",
"In this paper, we present a general formulation that unifies the probabilistic characterization of Hermitian random matrices with a specific structure. Based on a general expression for the joint pdf of the ordered eigenvalues, we obtain i) the joint cdf; ii) the marginal cdfs; and iii) the marginal pdfs of the ordered eigenvalues, where ii) and iii) follow as simple particularizations of i). Our formulation is shown to include the distribution of some common MIMO channel models such as the uncorrelated, semicorrelated, and double-correlated Rayleigh MIMO fading channel and the uncorrelated Rician MIMO fading channel, although it is not restricted only to these. Hence, the proposed formulation and derived results provide a solid framework for the simultaneous analytical performance analysis of MIMO systems under different channel models. As an illustrative application, we obtain the exact outage probability of a spatial multiplexing MIMO system transmitting through the strongest channel eigenmodes.",
"",
""
]
} |
1406.5105 | 1979299419 | We investigate the dynamic behavior of the stationary random process defined by a central complex Wishart matrix @math as it varies along a certain dimension @math . We characterize the second-order joint cumulative distribution function (cdf) of the largest eigenvalue, and the second-order joint cdf of the smallest eigenvalue of this matrix. We show that both cdfs can be expressed in exact closed-form in terms of a finite number of well-known special functions in the context of communication theory. As a direct application, we investigate the dynamic behavior of the parallel channels associated with multiple-input multiple-output (MIMO) systems in the presence of Rayleigh fading. Studying the complex random matrix that defines the MIMO channel, we characterize the second-order joint cdf of the signal-to-noise ratio (SNR) for the best and worst channels. We use these results to study the rate of change of MIMO parallel channels, using different performance metrics. For a given value of the MIMO channel correlation coefficient, we observe how the SNR associated with the best parallel channel changes slower than the SNR of the worst channel. This different dynamic behavior is much more appreciable when the number of transmit ( @math ) and receive ( @math ) antennas is similar. However, as @math is increased while keeping @math fixed, we see how the best and worst channels tend to have a similar rate of change. | If we consider two samples of a stationary CW random process @math , namely @math and @math , the dynamics of @math are captured by the joint distribution of @math and @math . More precisely, the dynamics of the MIMO parallel channels (or ) can be studied separately by studying the joint distribution of the eigenvalues of @math and @math . 
Along these lines, the statistical analysis of CW matrices was tackled in @cite_12 @cite_37 , deriving the @math -dimensional joint pdf of the @math eigenvalues of a CW matrix and a perturbed version of it. Rather than the , we consider the of a particular eigenvalue as our metric to capture the dynamic behavior of a CW random process, since this distribution allows for the separate statistical characterization of all @math eigenvalues. Therefore, we will focus our attention on this set of second-order (or bivariate) distributions. | {
"cite_N": [
"@cite_37",
"@cite_12"
],
"mid": [
"2098213935",
"2099298993"
],
"abstract": [
"In this letter, the joint probability density function (PDF) for the eigenvalues of a complex Wishart matrix and a perturbed version of it are derived. The latter version can be used to model channel estimation errors and variations over time or frequency. As an example, the joint PDF is used to calculate the transition probabilities between modulation states in an adaptive MIMO system. This leads to a Markov model for the system. We then use the model to investigate the modulation state entering rates (MSER), the average stay duration (ASD), and the effects of feedback delay on the accuracy of modulation state selection in mobile radio systems. Other applications of this PDF are also discussed.",
"Let A(t) be a complex Wishart process defined in terms of the MxN complex Gaussian matrix X(t) by A(t)=X(t)X(t)^H. The covariance matrix of the columns of X(t) is @S. If X(t), the underlying Gaussian process, is a correlated process over time, then we have dependence between samples of the Wishart process. In this paper, we study the joint statistics of the Wishart process at two points in time, t_1, t_2, where t_1"
]
} |
1406.5105 | 1979299419 | We investigate the dynamic behavior of the stationary random process defined by a central complex Wishart matrix @math as it varies along a certain dimension @math . We characterize the second-order joint cumulative distribution function (cdf) of the largest eigenvalue, and the second-order joint cdf of the smallest eigenvalue of this matrix. We show that both cdfs can be expressed in exact closed-form in terms of a finite number of well-known special functions in the context of communication theory. As a direct application, we investigate the dynamic behavior of the parallel channels associated with multiple-input multiple-output (MIMO) systems in the presence of Rayleigh fading. Studying the complex random matrix that defines the MIMO channel, we characterize the second-order joint cdf of the signal-to-noise ratio (SNR) for the best and worst channels. We use these results to study the rate of change of MIMO parallel channels, using different performance metrics. For a given value of the MIMO channel correlation coefficient, we observe how the SNR associated with the best parallel channel changes slower than the SNR of the worst channel. This different dynamic behavior is much more appreciable when the number of transmit ( @math ) and receive ( @math ) antennas is similar. However, as @math is increased while keeping @math fixed, we see how the best and worst channels tend to have a similar rate of change. | This problem was addressed in @cite_10 when studying the mutual information distribution in orthogonal frequency division multiplexing (OFDM) systems operating under frequency-selective MIMO channels; specifically, a closed-form expression for the joint second-order pdf was given for arbitrarily-selected eigenvalues of the equivalent frequency-domain Wishart matrix. 
However, in order to obtain the joint bivariate cdf or the correlation coefficient for a particular eigenvalue, a two-fold numerical integration with infinite limits was required. In @cite_22 , an expression for this bivariate cdf was derived for the extreme eigenvalues (i.e. the largest and the smallest) in terms of the determinant of a matrix whose entries are expressed as infinite series of products of incomplete gamma functions; hence, its evaluation is highly impractical as the number of antennas is increased. | {
"cite_N": [
"@cite_10",
"@cite_22"
],
"mid": [
"2153637161",
"2135963243"
],
"abstract": [
"This communication considers the distribution of the mutual information of frequency-selective spatially uncorrelated Rayleigh fading multiple-input-multiple-output (MIMO) channels. Results are presented for orthogonal frequency-division multiplexing (OFDM)-based spatial multiplexing. New exact closed-form expressions are derived for the variance of the mutual information. In contrast to previous results, our new expressions apply for systems with both arbitrary numbers of antennas and arbitrary-length channels. Simplified expressions are also presented for high and low signal-to-noise ratio (SNR) regimes. The analytical variance results are used to provide accurate analytical approximations for the distribution of the mutual information, and the outage capacity.",
"In this paper, we consider an adaptive modulation system with multiple-input-multiple-output (MIMO) antennas in conjunction with orthogonal frequency-division multiplexing (OFDM) operating over frequency-selective Rayleigh fading environments. In particular, we consider a type of beamforming with a maximum ratio transmission maximum ratio combining (MRT-MRC) transceiver structure. For this system, we derive a central limit theorem for various block-based performance metrics. This motivates an accurate Gaussian approximation to the system data rate and the number of outages per OFDM block. In addition to the data rate and outage distributions, we also consider the subcarrier signal-to-noise ratio (SNR) as a process in the frequency domain and compute level crossing rates (LCRs) and average fade bandwidths (AFBs). Hence, we provide fundamental but novel results for the MIMO OFDM channel. The accuracy of these results is verified by Monte Carlo simulations, and applications to performance analysis and system design are discussed."
]
} |
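The closed-form bivariate cdf of the extreme eigenvalues discussed above is impractical to evaluate as the number of antennas grows; in practice such joint statistics are easily estimated by Monte Carlo. A minimal sketch (illustrative only, not from the cited papers; the function name and parameters are our own) for an i.i.d. Rayleigh MIMO channel:

```python
import numpy as np

def extreme_eig_samples(n_rx=4, n_tx=4, trials=5000, seed=0):
    """Monte Carlo samples of the smallest and largest eigenvalues of the
    Wishart matrix H H^H for an i.i.d. complex Gaussian channel H."""
    rng = np.random.default_rng(seed)
    # CN(0, 1) entries: real and imaginary parts each ~ N(0, 1/2)
    h = (rng.standard_normal((trials, n_rx, n_tx))
         + 1j * rng.standard_normal((trials, n_rx, n_tx))) / np.sqrt(2)
    w = h @ np.conj(h).transpose(0, 2, 1)   # batch of Wishart matrices
    eig = np.linalg.eigvalsh(w)             # real, ascending (Hermitian input)
    return eig[:, 0], eig[:, -1]            # (smallest, largest) per trial

lam_min, lam_max = extreme_eig_samples()
# empirical correlation coefficient of the extreme eigenvalues
rho = np.corrcoef(lam_min, lam_max)[0, 1]
```

The joint cdf at a point `(a, b)` is then just the empirical frequency `np.mean((lam_min <= a) & (lam_max <= b))`, with no infinite series or numerical integration involved.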
1406.5268 | 2294510069 | We study the statistics of Dirichlet eigenvalues of the random Schrödinger operator @math , with @math the discrete Laplacian on @math and @math uniformly bounded independent random variables, on sets of the form @math for @math bounded, open and with a smooth boundary. If @math holds for some bounded and continuous @math , we show that, as @math , the @math -th eigenvalue converges to the @math -th Dirichlet eigenvalue of the homogenized operator @math , where @math is the continuum Dirichlet Laplacian on @math . Assuming further that @math for some positive and continuous @math , we establish a multivariate central limit theorem for simple eigenvalues centered by their expectation. The limiting covariance for a given pair of simple eigenvalues is expressed as an integral of @math against the product of squares of the corresponding eigenfunctions of @math . | As alluded to earlier, a result closely related to ours has been derived by Bal @cite_17 . There the operator of the form @math in @math with Dirichlet boundary condition is studied, where @math is a random centered stationary field. Note that this can naturally be regarded as a spatially scaled version of our model. (Bal in fact studied the more general situation where @math is replaced by a pseudo differential operator.) In dimensions @math and under the assumptions that either @math is bounded and has an integrable correlation function, or @math and a mixing condition holds ([H2] on page 683 of @cite_17 ), it is proved in Section 5.2 that the @math -th smallest eigenvalue @math of @math has Gaussian fluctuations around @math with @math , provided this eigenvalue is simple. This is slightly different from our result, which shows a CLT around the expectation. In the case @math , we know that @math by combining the result of Bal with ours, but we do not know how to prove this directly. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2962804622"
],
"abstract": [
"We consider the perturbation of elliptic pseudodifferential operators @math with more than square integrable Green's functions by random, rapidly varying, sufficiently mixing, potentials of the form @math . We analyze the source and spectral problems associated with such operators and show that the rescaled difference between the perturbed and unperturbed solutions may be written asymptotically as @math as explicit Gaussian processes. Such results may be seen as central limit corrections to homogenization (law of large numbers). Similar results are derived for more general elliptic equations with random coefficients in one dimension of space. The results are based on the availability of a rapidly converging integral formulation for the perturbed solutions and on the use of classical central limit results for random processes with appropriate mixing conditions."
]
} |
1406.5268 | 2294510069 | We study the statistics of Dirichlet eigenvalues of the random Schrödinger operator @math , with @math the discrete Laplacian on @math and @math uniformly bounded independent random variables, on sets of the form @math for @math bounded, open and with a smooth boundary. If @math holds for some bounded and continuous @math , we show that, as @math , the @math -th eigenvalue converges to the @math -th Dirichlet eigenvalue of the homogenized operator @math , where @math is the continuum Dirichlet Laplacian on @math . Assuming further that @math for some positive and continuous @math , we establish a multivariate central limit theorem for simple eigenvalues centered by their expectation. The limiting covariance for a given pair of simple eigenvalues is expressed as an integral of @math against the product of squares of the corresponding eigenfunctions of @math . | The argument in @cite_17 is based on a perturbation expansion of the resolvent operator and an explicit representation of the leading-order correction to the eigenfunctions; cf. Remark . In order to control the remainder terms, one then needs that the Green function of the homogenized operator is square integrable, and this requires the restriction to @math . The method employed in the present article is different in that it avoids having to deal with local perturbations altogether. Incidentally, as was recently shown by Gu and Mourrat @cite_33 , for the random elliptic operators (see Subsection below for a formulation) the limit laws of the local and global fluctuations to eigenfunctions are in fact not even the same. | {
"cite_N": [
"@cite_33",
"@cite_17"
],
"mid": [
"1565615755",
"2962804622"
],
"abstract": [
"We investigate the global fluctuations of solutions to elliptic equations with random coefficients in the discrete setting. In dimension @math and for i.i.d. coefficients, we show that after a suitable scaling, these fluctuations converge to a Gaussian field that locally resembles a (generalized) Gaussian free field. The paper begins with a heuristic derivation of the result, which can be read independently and was obtained jointly with Scott Armstrong.",
"We consider the perturbation of elliptic pseudodifferential operators @math with more than square integrable Green's functions by random, rapidly varying, sufficiently mixing, potentials of the form @math . We analyze the source and spectral problems associated with such operators and show that the rescaled difference between the perturbed and unperturbed solutions may be written asymptotically as @math as explicit Gaussian processes. Such results may be seen as central limit corrections to homogenization (law of large numbers). Similar results are derived for more general elliptic equations with random coefficients in one dimension of space. The results are based on the availability of a rapidly converging integral formulation for the perturbed solutions and on the use of classical central limit results for random processes with appropriate mixing conditions."
]
} |
1406.5268 | 2294510069 | We study the statistics of Dirichlet eigenvalues of the random Schrödinger operator @math , with @math the discrete Laplacian on @math and @math uniformly bounded independent random variables, on sets of the form @math for @math bounded, open and with a smooth boundary. If @math holds for some bounded and continuous @math , we show that, as @math , the @math -th eigenvalue converges to the @math -th Dirichlet eigenvalue of the homogenized operator @math , where @math is the continuum Dirichlet Laplacian on @math . Assuming further that @math for some positive and continuous @math , we establish a multivariate central limit theorem for simple eigenvalues centered by their expectation. The limiting covariance for a given pair of simple eigenvalues is expressed as an integral of @math against the product of squares of the corresponding eigenfunctions of @math . | To the best of our knowledge, the fluctuations of @math for independent and identically distributed conductances have not been studied yet. Notwithstanding, the analysis of a related effective conductance problem (Nolen @cite_8 , Rossignol @cite_18 , Biskup, Salvi and Wolff @cite_16 ) indicates that @math should be asymptotically normal with mean zero and variance that is a biquadratic expression in @math integrated over @math , where @math denotes a @math -th eigenfunction of the operator @math . A significant additional technical challenge of this problem is the need to employ the corrector method (this is what gives rise to the "homogenized" coefficients @math above). | {
"cite_N": [
"@cite_18",
"@cite_16",
"@cite_8"
],
"mid": [
"2247236687",
"2007694160",
"2092871579"
],
"abstract": [
"We investigate the (generalized) Walsh decomposition of point-to-point effective resistances on countable random electric networks with i.i.d. resistances. We show that it is concentrated on low levels, and thus point-to-point effective resistances are uniformly stable to noise. For graphs that satisfy some homogeneity property, we show in addition that it is concentrated on sets of small diameter. As a consequence, we compute the right order of the variance and prove a central limit theorem for the effective resistance through the discrete torus of side length n in Z^d, when n goes to infinity.",
"Given a resistor network on Z^d with nearest-neighbor conductances, the effective conductance in a finite set with a given boundary condition is the minimum of the Dirichlet energy over functions with the prescribed boundary values. For shift-ergodic conductances, linear (Dirichlet) boundary conditions and square boxes, the effective conductance scaled by the volume of the box converges to a deterministic limit as the box-size tends to infinity. Here we prove that, for i.i.d. conductances with a small ellipticity contrast, also a (non-degenerate) central limit theorem holds. The proof is based on the corrector method and the Martingale Central Limit Theorem; a key integrability condition is furnished by the Meyers estimate. More general domains, boundary conditions and ellipticity contrasts will be addressed in a subsequent paper.",
"We consider solutions of an elliptic partial differential equation in R^d with a stationary, random conductivity coefficient that is also periodic with period L. Boundary conditions on a square domain of width L are arranged so that the solution has a macroscopic unit gradient. We then consider the average flux that results from this imposed boundary condition. It is known that in the limit L → ∞, this quantity converges to a deterministic constant, almost surely. Our main result is that the law of this random variable is very close to that of a normal random variable, if the domain size L is large. We quantify this approximation by an error estimate in total variation. The error estimate relies on a second order Poincaré inequality developed recently by Chatterjee."
]
} |
1406.5268 | 2294510069 | We study the statistics of Dirichlet eigenvalues of the random Schrödinger operator @math , with @math the discrete Laplacian on @math and @math uniformly bounded independent random variables, on sets of the form @math for @math bounded, open and with a smooth boundary. If @math holds for some bounded and continuous @math , we show that, as @math , the @math -th eigenvalue converges to the @math -th Dirichlet eigenvalue of the homogenized operator @math , where @math is the continuum Dirichlet Laplacian on @math . Assuming further that @math for some positive and continuous @math , we establish a multivariate central limit theorem for simple eigenvalues centered by their expectation. The limiting covariance for a given pair of simple eigenvalues is expressed as an integral of @math against the product of squares of the corresponding eigenfunctions of @math . | Another way to look at Anderson localization is by analyzing the limiting spectral statistics for operators in an increasing sequence of finite volumes. In the localized regime, the statistics is expected to be given by a Poisson point process. This has so far been proved in the "bulk" (i.e., the interior) of the spectrum (Molchanov @cite_31 in @math and Minami @cite_2 for general @math ). At spectral edges there seem to be only partial results for bounded potentials at this time (Germinet and Klopp @cite_12 @cite_34 ), although a somewhat more complete theory has been developed for some unbounded potentials (Astrauskas @cite_20 @cite_3 , Biskup and König @cite_14 ). In the delocalization regime, the spectral statistics is expected to be that seen in random matrix ensembles. | {
"cite_N": [
"@cite_14",
"@cite_3",
"@cite_2",
"@cite_31",
"@cite_34",
"@cite_12",
"@cite_20"
],
"mid": [
"1895809589",
"2026417557",
"",
"2125370545",
"2019358407",
"2005820649",
""
],
"abstract": [
"We consider random Schrödinger operators of the form Δ + ξ, where Δ is the lattice Laplacian on Z^d and ξ is an i.i.d. random field, and study the extreme order statistics of the Dirichlet eigenvalues for this operator restricted to large but finite subsets of Z^d. We show that, for ξ with a doubly-exponential type of upper tail, the upper extreme order statistics of the eigenvalues falls into the Gumbel max-order class, and the corresponding eigenfunctions are exponentially localized in regions where ξ takes large, and properly arranged, values. The picture we prove is thus closely connected with the phenomenon of Anderson localization at the spectral edge. Notwithstanding, our approach is largely independent of existing methods for proofs of Anderson localization and it is based on studying individual eigenvalue-eigenfunction pairs and characterizing the regions where the leading eigenfunctions put most of their mass.",
"We consider the spectral problem for the random Schrodinger operator on the multidimensional lattice torus increasing to the whole of lattice, with an i.i.d. potential (Anderson Hamiltonian). We obtain the explicit almost sure asymptotic expansion formulas for the extreme eigenvalues and eigenfunctions in the intermediate rank case, provided the upper distributional tails of potential decay at infinity slower than the double exponential function. For the fractional-exponential tails (including Weibull’s and Gaussian distributions), extremal type limit theorems for eigenvalues are proved, and the strong influence of parameters of the model on a specification of normalizing constants is described. In the proof we use the finite-rank perturbation arguments based on the cluster expansion for resolvents.",
"",
"Let H_V = -d^2/dt^2 + q(t, ω) be a one-dimensional random Schrödinger operator in ℒ2(−V,V) with the classical boundary conditions. The random potential q(t, ω) has the form q(t, ω) = F(x_t), where x_t is a Brownian motion on the compact Riemannian manifold K and F: K → R^1 is a smooth Morse function, ( F = 0 ). Let N_V(Δ) = #{i : E_i(V) ∈ Δ}, where Δ ∈ (0, ∞) and E_i(V) are the eigenvalues of H_V. The main result (Theorem 1) of this paper is the following. If V → ∞, E_0 > 0, k ∈ Z_+ and a > 0 (a is a fixed constant) then @math where n(E_0) is a limit state density of H_V, V → ∞. This theorem means that there is no repulsion between energy levels of the operator H_V, V → ∞.",
"We consider the discrete Anderson model and prove enhanced Wegner and Minami estimates where the interval length is replaced by the IDS computed on the interval. We use these estimates to improve on the description of finite volume eigenvalues and eigenfunctions obtained in Germinet and Klopp (J Eur Math Soc, http://arxiv.org/abs/1011.1832, 2010). As a consequence of the improved description of eigenvalues and eigenfunctions, we revisit a number of results on the spectral statistics in the localized regime obtained in Germinet and Klopp (J Eur Math Soc, http://arxiv.org/abs/1011.1832, 2010) and Klopp (PTRF, http://fr.arxiv.org/abs/1012.0831, 2010) and extend their domain of validity, namely: the local spectral statistics for the unfolded eigenvalues; the local asymptotic ergodicity of the unfolded eigenvalues. In dimension 1, for the standard Anderson model, the improvement enables us to obtain the local spectral statistics at band edge, that is in the Lifshitz tail regime. In higher dimensions, this works for modified Anderson models.",
"We study various statistics related to the eigenvalues and eigenfunctions of random Hamiltonians in the localized regime. Consider a random Hamiltonian at an energy @math in the localized phase. Assume the density of states function is not too flat near @math . Restrict it to some large cube @math . Consider now @math , a small energy interval centered at @math that asymptotically contains infintely many eigenvalues when the volume of the cube @math grows to infinity. We prove that, with probability one in the large volume limit, the eigenvalues of the random Hamiltonian restricted to the cube inside the interval are given by independent identically distributed random variables, up to an error of size an arbitrary power of the volume of the cube. As a consequence, we derive * uniform Poisson behavior of the locally unfolded eigenvalues, * a.s. Poisson behavior of the joint distibutions of the unfolded energies and unfolded localization centers in a large range of scales. * the distribution of the unfolded level spacings, locally and globally, * the distribution of the unfolded localization centers, locally and globally.",
""
]
} |
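The homogenization statement in the abstract above (low Dirichlet eigenvalues of the discrete Schrödinger operator with i.i.d. bounded potential converging to those of the homogenized continuum operator) is easy to illustrate numerically. A toy 1D sketch of our own (not the paper's code): with a mean-zero potential, the homogenized operator is just -d^2/dx^2 on (0,1), whose Dirichlet eigenvalues are (kπ)^2.

```python
import numpy as np

def dirichlet_eigs(n=1000, noise_amp=1.0, k=3, seed=0):
    """First k Dirichlet eigenvalues of -Delta_h + xi on (0,1): -Delta_h is
    the discrete Laplacian on n interior grid points (spacing h = 1/(n+1)),
    and xi is i.i.d. Uniform[-noise_amp, noise_amp] -- bounded with mean
    zero, so the homogenized potential vanishes."""
    rng = np.random.default_rng(seed)
    h = 1.0 / (n + 1)
    # scaled tridiagonal Laplacian approximating -d^2/dx^2 with Dirichlet BCs
    lap = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    xi = rng.uniform(-noise_amp, noise_amp, size=n)
    return np.linalg.eigvalsh(lap + np.diag(xi))[:k]

eigs = dirichlet_eigs()
# homogenized limit: eigenvalues of -d^2/dx^2 on (0,1), i.e. (k*pi)^2
```

For n = 1000 the random shift of the k-th eigenvalue is of order n^{-1/2}, consistent with the CLT scaling described above, so `eigs` sits close to (π^2, 4π^2, 9π^2).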
1406.4216 | 2950596073 | Person re-identification is an important technique towards automatic search of a person's presence in a surveillance video. Two fundamental problems are critical for person re-identification, feature representation and metric learning. An effective feature representation should be robust to illumination and viewpoint changes, and a discriminant metric should be learned to match various person images. In this paper, we propose an effective feature representation called Local Maximal Occurrence (LOMO), and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA). The LOMO feature analyzes the horizontal occurrence of local features, and maximizes the occurrence to make a stable representation against viewpoint changes. Besides, to handle illumination variations, we apply the Retinex transform and a scale invariant texture operator. To learn a discriminant metric, we propose to learn a discriminant low dimensional subspace by cross-view quadratic discriminant analysis, and simultaneously, a QDA metric is learned on the derived subspace. We also present a practical computation method for XQDA, as well as its regularization. Experiments on four challenging person re-identification databases, VIPeR, QMUL GRID, CUHK Campus, and CUHK03, show that the proposed method improves the state-of-the-art rank-1 identification rates by 2.2%, 4.88%, 28.91%, and 31.55% on the four databases, respectively. | Many existing person re-identification approaches try to build a robust feature representation which is both distinctive and robust for describing a person's appearance under various conditions @cite_17 @cite_13 @cite_24 @cite_20 @cite_18 @cite_6 . Gray and Tao @cite_17 proposed to use AdaBoost to select good features out of a set of color and texture features. 
@cite_38 proposed the Symmetry-Driven Accumulation of Local Features (SDALF) method, where the symmetry and asymmetry property is considered to handle viewpoint variations. @cite_39 turned local descriptors into the Fisher Vector to produce a global representation of an image. @cite_6 utilized the Pictorial Structures where part-based color information and color displacement were considered for person re-identification. Recently, saliency information has been investigated for person re-identification @cite_35 @cite_8 @cite_42 , leading to a novel feature representation. In @cite_28 , a method called regionlets is proposed, which picks a maximum bin from three random regions for object detection under deformation. In contrast, we propose to maximize the occurrence of each local pattern among all horizontal sub-windows to tackle viewpoint changes. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_35",
"@cite_8",
"@cite_28",
"@cite_42",
"@cite_6",
"@cite_39",
"@cite_24",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"1979260620",
"2096306138",
"",
"",
"2110226160",
"",
"",
"",
"",
"2156911635",
"2107475454",
"1518138188"
],
"abstract": [
"In this paper, we present an appearance-based method for person re-identification. It consists in the extraction of features that model three complementary aspects of the human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. All this information is derived from different body parts, and weighted opportunely by exploiting symmetry and asymmetry perceptual principles. In this way, robustness against very low resolution, occlusions and pose, viewpoint and illumination changes is achieved. The approach applies to situations where the number of candidates varies continuously, considering single images or bunch of frames for each individual. It has been tested on several public benchmark datasets (ViPER, iLIDS, ETHZ), gaining new state-of-the-art performances.",
"In this work we develop appearance models for computing the similarity between image regions containing deformable objects of a given class in realtime. We introduce the concept of shape and appearance context. The main idea is to model the spatial distribution of the appearance relative to each of the object parts. Estimating the model entails computing occurrence matrices. We introduce a generalization of the integral image and integral histogram frameworks, and prove that it can be used to dramatically speed up occurrence computation. We demonstrate the ability of this framework to recognize an individual walking across a network of cameras. Finally, we show that the proposed approach outperforms several other methods.",
"",
"",
"Generic object detection is confronted by dealing with different degrees of variations in distinct object classes with tractable computations, which demands for descriptive and flexible object representations that are also efficient to evaluate for many locations. In view of this, we propose to model an object class by a cascaded boosting classifier which integrates various types of features from competing local regions, named as regionlets. A regionlet is a base feature extraction region defined proportionally to a detection window at an arbitrary resolution (i.e. size and aspect ratio). These regionlets are organized in small groups with stable relative positions to delineate fine grained spatial layouts inside objects. Their features are aggregated to a one-dimensional feature within one group so as to tolerate deformations. Then we evaluate the object bounding box proposal in selective search from segmentation cues, limiting the evaluation locations to thousands. Our approach significantly outperforms the state-of-the-art on popular multi-class detection benchmark datasets with a single method, without any contexts. It achieves the detection mean average precision of 41.7% on the PASCAL VOC 2007 dataset and 39.7% on the VOC 2010 for 20 object categories. It achieves 14.7% mean average precision on the ImageNet dataset for 200 object categories, outperforming the latest deformable part-based model (DPM) by 4.7%.",
"",
"",
"",
"",
"Recently, methods with learning procedure have been widely used to solve person re-identification (re-id) problem. However, most existing databases for re-id are small-scale, therefore, over-fitting is likely to occur. To further improve the performance, we propose a novel method by fusing multiple local features and exploring their structural information on different levels. The proposed method is called Structural Constraints Enhanced Feature Accumulation (SCEFA). Three local features (i.e., Hierarchical Weighted Histograms (HWH), Gabor Ternary Pattern HSV (GTP-HSV), Maximally Stable Color Regions (MSCR)) are used. Structural information of these features are deeply explored in three levels: pixel, blob, and part. The matching algorithms corresponding to the features are also discussed. Extensive experiments conducted on three datasets: VIPeR, ETHZ and our own challenging dataset MCSSH, show that our approach outperforms state-of-the-art methods significantly.",
"We present and evaluate a person re-identification scheme for multi-camera surveillance system. Our approach uses matching of signatures based on interest-point descriptors collected on short video sequences. One of the originalities of our method is to accumulate interest points on several sufficiently time-spaced images during person tracking within each camera, in order to capture appearance variability. A first experimental evaluation conducted on a publicly available set of low-resolution videos in a commercial mall shows very promising inter-camera person re-identification performances (a precision of 82% for a recall of 78%). It should also be noted that our matching method is very fast: 1/8 s for re-identification of one target person among 10 previously seen persons, and a logarithmic dependence with the number of stored person models, making re-identification among hundreds of persons computationally feasible in less than 1/5 second.",
"Viewpoint invariant pedestrian recognition is an important yet under-addressed problem in computer vision. This is likely due to the difficulty in matching two objects with unknown viewpoint and pose. This paper presents a method of performing viewpoint invariant pedestrian recognition using an efficiently and intelligently designed object representation, the ensemble of localized features (ELF). Instead of designing a specific feature by hand to solve the problem, we define a feature space using our intuition about the problem and let a machine learning algorithm find the best representation. We show how both an object class specific representation and a discriminative recognition model can be learned using the AdaBoost algorithm. This approach allows many different kinds of simple features to be combined into a single similarity function. The method is evaluated using a viewpoint invariant pedestrian recognition dataset and the results are shown to be superior to all previous benchmarks for both recognition and reacquisition of pedestrians."
]
} |
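The "maximize the occurrence of each local pattern among all horizontal sub-windows" idea in the LOMO description above can be sketched in a few lines. This is a toy illustration with our own function name; the real LOMO feature also applies Retinex preprocessing and builds HSV/SILTP histograms per overlapping patch, all omitted here:

```python
import numpy as np

def local_max_occurrence(patch_hists):
    """Pool per-patch pattern histograms: for each horizontal strip and each
    histogram bin, keep the maximal occurrence over all patches in the strip.

    patch_hists: (rows, cols, bins) array, one histogram per local patch on a
    grid over the person image. The result, of shape (rows * bins,), is
    invariant to horizontal rearrangements of patches within a strip -- the
    viewpoint robustness the max operation is meant to buy.
    """
    return patch_hists.max(axis=1).ravel()

# sanity check: shifting the patches horizontally leaves the descriptor fixed
rng = np.random.default_rng(0)
hists = rng.random((10, 8, 16))
shifted = np.roll(hists, 3, axis=1)
```

Because the max is taken over the whole strip, a pattern that slides left or right as the camera viewpoint changes still contributes the same value to the descriptor.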
1406.4216 | 2950596073 | Person re-identification is an important technique towards automatic search of a person's presence in a surveillance video. Two fundamental problems are critical for person re-identification, feature representation and metric learning. An effective feature representation should be robust to illumination and viewpoint changes, and a discriminant metric should be learned to match various person images. In this paper, we propose an effective feature representation called Local Maximal Occurrence (LOMO), and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA). The LOMO feature analyzes the horizontal occurrence of local features, and maximizes the occurrence to make a stable representation against viewpoint changes. Besides, to handle illumination variations, we apply the Retinex transform and a scale invariant texture operator. To learn a discriminant metric, we propose to learn a discriminant low dimensional subspace by cross-view quadratic discriminant analysis, and simultaneously, a QDA metric is learned on the derived subspace. We also present a practical computation method for XQDA, as well as its regularization. Experiments on four challenging person re-identification databases, VIPeR, QMUL GRID, CUHK Campus, and CUHK03, show that the proposed method improves the state-of-the-art rank-1 identification rates by 2.2%, 4.88%, 28.91%, and 31.55% on the four databases, respectively. | Besides robust features, metric learning has been widely applied for person re-identification @cite_15 @cite_2 @cite_34 @cite_3 @cite_44 @cite_33 @cite_37 @cite_25 . @cite_44 proposed the PRDC algorithm, which optimizes the relative distance comparison. @cite_37 proposed to relax the PSD constraint required in Mahalanobis metric learning, and obtained a simplified formulation that still showed promising performance. Li et al. 
@cite_25 proposed the learning of Locally-Adaptive Decision Functions (LADF) for person verification, which can be viewed as a joint model of a distance metric and a locally adapted thresholding rule. @cite_47 formulated the person re-identification problem as a ranking problem, and applied the RankSVM to learn a subspace. In @cite_49 , local experts were considered to learn a common feature space for person re-identification across views. | {
"cite_N": [
"@cite_37",
"@cite_47",
"@cite_33",
"@cite_3",
"@cite_44",
"@cite_49",
"@cite_2",
"@cite_15",
"@cite_34",
"@cite_25"
],
"mid": [
"",
"",
"2068042582",
"2156854584",
"1991452654",
"2047632871",
"2169495281",
"",
"",
""
],
"abstract": [
"",
"",
"In this paper, we raise important issues on scalability and the required degree of supervision of existing Mahalanobis metric learning methods. Often rather tedious optimization procedures are applied that become computationally intractable on a large scale. Further, if one considers the constantly growing amount of data it is often infeasible to specify fully supervised labels for all data points. Instead, it is easier to specify labels in form of equivalence constraints. We introduce a simple though effective strategy to learn a distance metric from equivalence constraints, based on a statistical inference perspective. In contrast to existing methods we do not rely on complex optimization problems requiring computationally expensive iterations. Hence, our method is orders of magnitudes faster than comparable methods. Results on a variety of challenging benchmarks with rather diverse nature demonstrate the power of our method. These include faces in unconstrained environments, matching before unseen object instances and person re-identification across spatially disjoint cameras. In the latter two benchmarks we clearly outperform the state-of-the-art.",
"This paper presents a new method for viewpoint invariant pedestrian recognition problem. We use a metric learning framework to obtain a robust metric for large margin nearest neighbor classification with rejection (i.e., classifier will return no matches if all neighbors are beyond a certain distance). The rejection condition necessitates the use of a uniform threshold for a maximum allowed distance for deeming a pair of images a match. In order to handle the rejection case, we propose a novel cost similar to the Large Margin Nearest Neighbor (LMNN) method and call our approach Large Margin Nearest Neighbor with Rejection (LMNN-R). Our method is able to achieve significant improvement over previously reported results on the standard Viewpoint Invariant Pedestrian Recognition (VIPeR [1]) dataset.",
"Matching people across non-overlapping camera views, known as person re-identification, is challenging due to the lack of spatial and temporal constraints and large visual appearance changes caused by variations in view angle, lighting, background clutter and occlusion. To address these challenges, most previous approaches aim to extract visual features that are both distinctive and stable under appearance changes. However, most visual features and their combinations under realistic conditions are neither stable nor distinctive thus should not be used indiscriminately. In this paper, we propose to formulate person re-identification as a distance learning problem, which aims to learn the optimal distance that can maximise matching accuracy regardless of the choice of representation. To that end, we introduce a novel Probabilistic Relative Distance Comparison (PRDC) model, which differs from most existing distance learning methods in that, rather than minimising intra-class variation whilst maximising inter-class variation, it aims to maximise the probability of a pair of true match having a smaller distance than that of a wrong match pair. This makes our model more tolerant to appearance changes and less susceptible to model over-fitting. Extensive experiments are carried out to demonstrate that 1) by formulating the person re-identification problem as a distance learning problem, notable improvement on matching accuracy can be obtained against conventional person re-identification techniques, which is particularly significant when the training sample size is small; and 2) our PRDC outperforms not only existing distance learning methods but also alternative learning methods based on boosting and learning to rank.",
"In this paper, we propose a new approach for matching images observed in different camera views with complex cross-view transforms and apply it to person re-identification. It jointly partitions the image spaces of two camera views into different configurations according to the similarity of cross-view transforms. The visual features of an image pair from different views are first locally aligned by being projected to a common feature space and then matched with softly assigned metrics which are locally optimized. The features optimal for recognizing identities are different from those for clustering cross-view transforms. They are jointly learned by utilizing sparsity-inducing norm and information theoretical regularization. This approach can be generalized to the settings where test images are from new camera views, not the same as those in the training set. Extensive experiments are conducted on public datasets and our own dataset. Comparisons with the state-of-the-art metric learning and person re-identification methods show the superior performance of our approach.",
"In this paper, we present an information-theoretic approach to learning a Mahalanobis distance function. We formulate the problem as that of minimizing the differential relative entropy between two multivariate Gaussians under constraints on the distance function. We express this problem as a particular Bregman optimization problem---that of minimizing the LogDet divergence subject to linear constraints. Our resulting algorithm has several advantages over existing methods. First, our method can handle a wide variety of constraints and can optionally incorporate a prior on the distance function. Second, it is fast and scalable. Unlike most existing methods, no eigenvalue computations or semi-definite programming are required. We also present an online version and derive regret bounds for the resulting algorithm. Finally, we evaluate our method on a recent error reporting system for software called Clarify, in the context of metric learning for nearest neighbor classification, as well as on standard data sets.",
"",
"",
""
]
} |
1406.4296 | 2168930216 | Learning object detectors requires massive amounts of labeled training samples from the specific data source of interest. This is impractical when dealing with many different sources (e.g., in camera networks), or constantly changing ones such as mobile cameras (e.g., in robotics or driving assistant systems). In this paper, we address the problem of self-learning detectors in an autonomous manner, i.e. (i) detectors continuously updating themselves to efficiently adapt to streaming data sources (contrary to transductive algorithms), (ii) without any labeled data strongly related to the target data stream (contrary to self-paced learning), and (iii) without manual intervention to set and update hyper-parameters. To that end, we propose an unsupervised, on-line, and self-tuning learning algorithm to optimize a multi-task learning convex objective. Our method uses confident but laconic oracles (high-precision but low-recall off-the-shelf generic detectors), and exploits the structure of the problem to jointly learn on-line an ensemble of instance-level trackers, from which we derive an adapted category-level object detector. Our approach is validated on real-world publicly available video object datasets. | Unsupervised domain adaptation approaches use annotated data from a fixed dataset (the source set) and only data from the target dataset. They often require one or more passes over a large pool of annotated samples in order to adapt the model to each new target dataset, which makes them impractical for continuous adaptation to streaming data. For instance, Taskar et al. @cite_16 leverage "unseen" features from instances classified with high confidence, learning which features are useful for classification, and Gong et al. @cite_21 "reshape" datasets to minimize their distribution mismatch. These methods assume that the target domain is stationary, with many unlabeled target samples readily available at each update (transductive setting @cite_6).
Consequently, these approaches are not suited to non-stationary streaming scenarios, such as when using on-board cameras (e.g., for autonomous driving or robotics). | {
"cite_N": [
"@cite_16",
"@cite_21",
"@cite_6"
],
"mid": [
"2134125014",
"2157989183",
"2107008379"
],
"abstract": [
"This paper addresses the problem of classification in situations where the data distribution is not homogeneous: Data instances might come from different locations or times, and therefore are sampled from related but different distributions. In particular, features may appear in some parts of the data that are rarely or never seen in others. In most situations with nonhomogeneous data, the training data is not representative of the distribution under which the classifier must operate. We propose a method, based on probabilistic graphical models, for utilizing unseen features during classification. Our method introduces, for each such unseen feature, a continuous hidden variable describing its influence on the class -- whether it tends to be associated with some label. We then use probabilistic inference over the test data to infer a distribution over the value of this hidden variable. Intuitively, we \"learn\" the role of this unseen feature from the test set, generalizing from those instances whose label we are fairly sure about. Our overall probabilistic model is learned from the training data. In particular, we also learn models for characterizing the role of unseen features; these models use \"meta-features\" of those features, such as words in the neighborhood of an unseen feature, to infer its role. We present results for this framework on the task of classifying news articles and web pages, showing significant improvements over models that do not use unseen features.",
"In visual recognition problems, the common data distribution mismatches between training and testing make domain adaptation essential. However, image data is difficult to manually divide into the discrete domains required by adaptation algorithms, and the standard practice of equating datasets with domains is a weak proxy for all the real conditions that alter the statistics in complex ways (lighting, pose, background, resolution, etc.) We propose an approach to automatically discover latent domains in image or video datasets. Our formulation imposes two key properties on domains: maximum distinctiveness and maximum learnability. By maximum distinctiveness, we require the underlying distributions of the identified domains to be different from each other to the maximum extent; by maximum learnability, we ensure that a strong discriminative model can be learned from the domain. We devise a nonparametric formulation and efficient optimization procedure that can successfully discover domains among both training and test data. We extensively evaluate our approach on object recognition and human activity recognition tasks.",
""
]
} |
1406.4296 | 2168930216 | Learning object detectors requires massive amounts of labeled training samples from the specific data source of interest. This is impractical when dealing with many different sources (e.g., in camera networks), or constantly changing ones such as mobile cameras (e.g., in robotics or driving assistant systems). In this paper, we address the problem of self-learning detectors in an autonomous manner, i.e. (i) detectors continuously updating themselves to efficiently adapt to streaming data sources (contrary to transductive algorithms), (ii) without any labeled data strongly related to the target data stream (contrary to self-paced learning), and (iii) without manual intervention to set and update hyper-parameters. To that end, we propose an unsupervised, on-line, and self-tuning learning algorithm to optimize a multi-task learning convex objective. Our method uses confident but laconic oracles (high-precision but low-recall off-the-shelf generic detectors), and exploits the structure of the problem to jointly learn on-line an ensemble of instance-level trackers, from which we derive an adapted category-level object detector. Our approach is validated on real-world publicly available video object datasets. | Most related to our work are methods exploiting the spatio-temporal structure of videos in order to collect target data samples. Prest et al. @cite_10 learn object detectors by relying on joint motion segmentation of a set of videos containing the object of interest moving differently from the background. Their goal is to learn generic detectors that adapt well from video to image data. Tang et al. @cite_11 are inspired by the self-paced learning approach of Kumar et al. @cite_7, "learn easy things first", which is designed for fine-tuning on target domains related to a labeled source one. The transductive approach of Tang et al. @cite_11 iteratively re-weights labeled source samples by using the tracks of the closest target samples.
Sharma et al. @cite_8 propose a similarly inspired multiple instance learning algorithm, also relying on self-paced learning and off-line iterative re-training of a generic detector. These approaches can only be applied in stationary transductive settings, and do not allow for efficient model adaptation along a particular video stream. | {
"cite_N": [
"@cite_8",
"@cite_10",
"@cite_7",
"@cite_11"
],
"mid": [
"",
"1973054923",
"2132984949",
"2133434696"
],
"abstract": [
"",
"Object detectors are typically trained on a large set of still images annotated by bounding-boxes. This paper introduces an approach for learning object detectors from real-world web videos known only to contain objects of a target class. We propose a fully automatic pipeline that localizes objects in a set of videos of the class and learns a detector for it. The approach extracts candidate spatio-temporal tubes based on motion segmentation and then selects one tube per video jointly over all videos. To compare to the state of the art, we test our detector on still images, i.e., Pascal VOC 2007. We observe that frames extracted from web videos can differ significantly in terms of quality to still images taken by a good camera. Thus, we formulate the learning from videos as a domain adaptation task. We show that training from a combination of weakly annotated videos and fully annotated still images using domain adaptation improves the performance of a detector trained from still images alone.",
"Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a meaningful order that facilitates learning. The order of the samples is determined by how easy they are. The main challenge is that often we are not provided with a readily computable measure of the easiness of samples. We address this issue by proposing a novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector. The number of samples selected is governed by a weight that is annealed until the entire training data has been considered. We empirically demonstrate that the self-paced learning algorithm outperforms the state of the art method for learning a latent structural SVM on four applications: object localization, noun phrase coreference, motif finding and handwritten digit recognition.",
"Typical object detectors trained on images perform poorly on video, as there is a clear distinction in domain between the two types of data. In this paper, we tackle the problem of adapting object detectors learned from images to work well on videos. We treat the problem as one of unsupervised domain adaptation, in which we are given labeled data from the source domain (image), but only unlabeled data from the target domain (video). Our approach, self-paced domain adaptation, seeks to iteratively adapt the detector by re-training the detector with automatically discovered target domain examples, starting with the easiest first. At each iteration, the algorithm adapts by considering an increased number of target domain examples, and a decreased number of source domain examples. To discover target domain examples from the vast amount of video data, we introduce a simple, robust approach that scores trajectory tracks instead of bounding boxes. We also show how rich and expressive features specific to the target domain can be incorporated under the same framework. We show promising results on the 2011 TRECVID Multimedia Event Detection [1] and LabelMe Video [2] datasets that illustrate the benefit of our approach to adapt object detectors to video."
]
} |
1406.4296 | 2168930216 | Learning object detectors requires massive amounts of labeled training samples from the specific data source of interest. This is impractical when dealing with many different sources (e.g., in camera networks), or constantly changing ones such as mobile cameras (e.g., in robotics or driving assistant systems). In this paper, we address the problem of self-learning detectors in an autonomous manner, i.e. (i) detectors continuously updating themselves to efficiently adapt to streaming data sources (contrary to transductive algorithms), (ii) without any labeled data strongly related to the target data stream (contrary to self-paced learning), and (iii) without manual intervention to set and update hyper-parameters. To that end, we propose an unsupervised, on-line, and self-tuning learning algorithm to optimize a multi-task learning convex objective. Our method uses confident but laconic oracles (high-precision but low-recall off-the-shelf generic detectors), and exploits the structure of the problem to jointly learn on-line an ensemble of instance-level trackers, from which we derive an adapted category-level object detector. Our approach is validated on real-world publicly available video object datasets. | Several tracking-by-detection methods are also related to our work, in particular the tracking-learning-detection approach of Kalal et al. @cite_18 and the multi-task learning approach of Zhang et al. @cite_9. However, they are designed to work on short sequences, and their goal is to learn only instance-specific models, whereas our aim is to learn and adapt category-level ones. Note that we also differ from the family of multi-class transfer learning methods aiming to share features across categories, such as the "learning to borrow" approach of Lim et al. @cite_0. These approaches are different in intent from our continuous category-level adaptation to a new data stream.
Furthermore, they are not straightforwardly applicable for object detection with one or few categories of interest (e.g., cars and pedestrians). | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_18"
],
"mid": [
"2145276819",
"2102674365",
""
],
"abstract": [
"Despite the recent trend of increasingly large datasets for object detection, there still exist many classes with few training examples. To overcome this lack of training data for certain classes, we propose a novel way of augmenting the training data for each class by borrowing and transforming examples from other classes. Our model learns which training instances from other classes to borrow and how to transform the borrowed examples so that they become more similar to instances from the target class. Our experimental results demonstrate that our new object detector, with borrowed and transformed examples, improves upon the current state-of-the-art detector on the challenging SUN09 object detection dataset.",
"In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing @math mixed norms @math and @math we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular @math tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intel 33(11):2259---2272, 2011) is a special case of our MTT formulation (denoted as the @math tracker) when @math Under the MTT framework, some of the tasks (particle representations) are often more closely related and more likely to share common relevant covariates than other tasks. Therefore, we extend the MTT framework to take into account pairwise structural correlations between particles (e.g. spatial smoothness of representation) and denote the novel framework as S-MTT. The problem of learning the regularized sparse representation in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed form updates. As such, S-MTT and MTT are computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT is much better than MTT, and both methods consistently outperform state-of-the-art trackers.",
""
]
} |
1406.4277 | 2049581867 | Constructions of optimal locally repairable codes (LRCs) in the case of (r + 1) l n and over small finite fields were stated as open problems for LRCs in [I. , “Optimal locally repairable codes and connections to matroid theory”, 2013 IEEE ISIT]. In this paper, these problems are studied by constructing almost optimal linear LRCs, which are proven to be optimal for certain parameters, including cases for which (r + 1) l n. More precisely, linear codes for given length, dimension, and all-symbol locality are constructed with almost optimal minimum distance. ‘Almost optimal’ refers to the fact that their minimum distance differs by at most one from the optimal value given by a known bound for LRCs. In addition to these linear LRCs, optimal LRCs which do not require a large field are constructed for certain classes of parameters. | As mentioned above, in the all-symbol locality case the information theoretic trade-off between locality and code distance for any (linear or nonlinear) code was derived in @cite_2 . Furthermore, constructions of optimal LRCs for the case when @math and over small finite fields when @math is large were stated as open problems for LRCs in @cite_9 . In @cite_9 it was proved that there exists an optimal LRC for parameters @math over a field @math if @math divides @math and @math with @math large enough. In @cite_7 and @cite_3 the existence of optimal LRCs was proved for several parameters @math . Good codes with the weaker assumption of information symbol locality are designed in @cite_5 . In @cite_4 it was shown that there exist parameters @math for linear LRCs for which the bound of Eq. is not achievable. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_2",
"@cite_5"
],
"mid": [
"1993830711",
"2155418912",
"2006551665",
"2595628957",
"1995147907",
"1996042140"
],
"abstract": [
"Consider a linear [n,k,d]q code C. We say that the ith coordinate of C has locality r , if the value at this coordinate can be recovered from accessing some other r coordinates of C. Data storage applications require codes with small redundancy, low locality for information coordinates, large distance, and low locality for parity coordinates. In this paper, we carry out an in-depth study of the relations between these parameters. We establish a tight bound for the redundancy n-k in terms of the message length, the distance, and the locality of information coordinates. We refer to codes attaining the bound as optimal. We prove some structure theorems about optimal codes, which are particularly strong for small distances. This gives a fairly complete picture of the tradeoffs between codewords length, worst case distance, and locality of information symbols. We then consider the locality of parity check symbols and erasure correction beyond worst case distance for optimal codes. Using our structure theorem, we obtain a tight bound for the locality of parity symbols possible in such codes for a broad class of parameter settings. We prove that there is a tradeoff between having good locality and the ability to correct erasures beyond the minimum distance.",
"Linear erasure codes with local repairability are desirable for distributed data storage systems. An [n,k,d] linear code having all-symbol (r,δ)-locality, denoted as (r,δ)_a, is considered optimal if it has the actual highest minimum distance of any code of the given parameters n,k,r and δ. A minimum distance bound is given in . The existing results on the existence and the construction of optimal (r, δ)_a linear codes are limited to only two small regions within this special case, namely, i) m=0 and ii) m≥ (v+δ-1)>(δ-1) and δ=2, where m=n mod(r+δ-1) and v=k mod r. This paper investigates the properties and existence conditions for optimal (r,δ)_a linear codes with general r and δ. First, a structure theorem is derived for general optimal (r,δ)_a codes which helps illuminate some of their structure properties. Next, the entire problem space with arbitrary n, k, r and δ is divided into eight different cases (regions) with regard to the specific relations of these parameters. For two cases, it is rigorously proved that no (r,δ)_a linear code can achieve the minimum distance bound in . For four other cases the optimal (r,δ)_a codes are shown to exist over a field of size q≥( n k-1 ), deterministic constructions are proposed. Our new constructive algorithms not only cover more cases, but for the same cases where previous algorithms exist, the new constructions require a smaller field, which translates to potentially lower computational complexity. Our findings substantially enriches the knowledge on optimal (r,δ)_a linear codes, leaving only two cases in which the construction of optimal codes are not yet known.",
"Petabyte-scale distributed storage systems are currently transitioning to erasure codes to achieve higher storage efficiency. Classical codes like Reed-Solomon are highly suboptimal for distributed environments due to their high overhead in single-failure events. Locally Repairable Codes (LRCs) form a new family of codes that are repair efficient. In particular, LRCs minimize the number of nodes participating in single node repairs during which they generate small network traffic. Two large-scale distributed storage systems have already implemented different types of LRCs: Windows Azure Storage and the Hadoop Distributed File System RAID used by Facebook. The fundamental bounds for LRCs, namely the best possible distance for a given code locality, were recently discovered, but few explicit constructions exist. In this work, we present an explicit and simple to implement construction of optimal LRCs, for code parameters previously established by existence results. For the analysis of the optimality of our code, we derive a new result on the matroid represented by the code's generator matrix.",
"",
"One main challenge in the design of distributed storage codes is the Exact Repair Problem: if a node storing encoded information fails, to maintain the same level of reliability, we need to exactly regenerate what was lost in a new node. A major open problem in this area has been the design of codes that i) admit exact and low cost repair of nodes and ii) have arbitrarily high data rates. In this paper, we are interested in the metric of repair locality, which corresponds to the the number of disk accesses required during a node repair. Under this metric we characterize an information theoretic trade-off that binds together locality, code distance, and storage cost per node. We introduce Locally repairable codes (LRCs) which are shown to achieve this tradeoff. The achievability proof uses a “locality aware” flow graph gadget which leads to a randomized code construction. We then present the first explicit construction of LRCs that can achieve arbitrarily high data-rates.",
"We design flexible schemes to explore the tradeoffs between storage space and access efficiency in reliable data storage systems. Aiming at this goal, two new classes of erasure-resilient codes are introduced -- Basic Pyramid Codes (BPC) and Generalized Pyramid Codes (GPC). Both schemes require slightly more storage space than conventional schemes, but significantly improve the critical performance of read during failures and unavailability. As a by-product, we establish a necessary matching condition to characterize the limit of failure recovery, that is, unless the matching condition is satisfied, a failure case is impossible to recover. In addition, we define a maximally recoverable (MR) property. For all ERC schemes holding the MR property, the matching condition becomes sufficient, that is, all failure cases satisfying the matching condition are indeed recoverable. We show that GPC is the first class of non-MDS schemes holding the MR property."
]
} |
1406.4444 | 2503965723 | Person re-identification (re-id), an emerging problem in visual surveillance, deals with maintaining entities of individuals whilst they traverse various locations surveilled by a camera network. From a visual perspective re-id is challenging due to significant changes in visual appearance of individuals in cameras with different pose, illumination and calibration. Globally the challenge arises from the need to maintain structurally consistent matches among all the individual entities across different camera views. We propose PRISM, a structured matching method to jointly account for these challenges. We view the global problem as a weighted graph matching problem and estimate edge weights by learning to predict them based on the co-occurrences of visual patterns in the training examples. These co-occurrence based scores in turn account for appearance changes by inferring likely and unlikely visual co-occurrences appearing in training instances. We implement PRISM on single shot and multi-shot scenarios. PRISM uniformly outperforms state-of-the-art in terms of matching rate while being computationally efficient. | Structured learning has also been used in the object tracking literature (@cite_60) for data association. The biggest difference between our method and these tracking methods is that in our case we generally do not have any temporal or location information, which leads to totally different goals: our method aims to find the correct matches among the entities at test time using structured matching based only on appearance information, while in tracking the algorithms aim to locally associate the same object, with small appearance variations, across two adjacent frames. | {
"cite_N": [
"@cite_60"
],
"mid": [
"2098941887"
],
"abstract": [
"Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (accurate estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we are able to avoid the need for an intermediate classification step. Our method uses a kernelized structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow for real-time application, we introduce a budgeting mechanism which prevents the unbounded growth in the number of support vectors which would otherwise occur during tracking. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased performance."
]
} |
1406.4582 | 2123979496 | Persistent homology computes topological invariants from point cloud data. Recent work has focused on developing statistical methods for data analysis in this framework. We show that, in certain models, parametric inference can be performed using statistics defined on the computed invariants. We develop this idea with a model from population genetics, the coalescent with recombination. We apply our model to an influenza dataset, identifying two scales of topological structure which have a distinct biological interpretation. | The application of persistent homology to genomic data was first introduced in @cite_14, where recombination rates in viral populations were estimated by computing @math -norms on barcode diagrams. The statistical properties of random simplicial complexes, including distributions over their Betti numbers, have been studied in @cite_7 @cite_0. The persistent homology of Gaussian random fields and other probabilistic structures has been studied in @cite_10. Functions defined on the persistence diagram were used to compute a fractal dimension for various polymer physics models in @cite_9. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_9",
"@cite_0",
"@cite_10"
],
"mid": [
"2107385756",
"2120672285",
"2963192955",
"1588476105",
"1680861744"
],
"abstract": [
"The tree structure is currently the accepted paradigm to represent evolutionary relationships between organisms, species or other taxa. However, horizontal, or reticulate, genomic exchanges are pervasive in nature and confound characterization of phylogenetic trees. Drawing from algebraic topology, we present a unique evolutionary framework that comprehensively captures both clonal and reticulate evolution. We show that whereas clonal evolution can be summarized as a tree, reticulate evolution exhibits nontrivial topology of dimension greater than zero. Our method effectively characterizes clonal evolution, reassortment, and recombination in RNA viruses. Beyond detecting reticulate evolution, we succinctly recapitulate the history of complex genetic exchanges involving more than two parental strains, such as the triple reassortment of H7N9 avian influenza and the formation of circulating HIV-1 recombinants. In addition, we identify recurrent, large-scale patterns of reticulate evolution, including frequent PB2-PB1-PA-NP cosegregation during avian influenza reassortment. Finally, we bound the rate of reticulate events (i.e., 20 reassortments per year in avian influenza). Our method provides an evolutionary perspective that not only captures reticulate events precluding phylogeny, but also indicates the evolutionary scales where phylogenetic inference could be accurate.",
"We study the expected topological properties of Cech and Vietoris–Rips complexes built on random points in ℝd . We find higher-dimensional analogues of known results for connectivity and component counts for random geometric graphs. However, higher homology H k is not monotone when k>0. In particular, for every k>0, we exhibit two thresholds, one where homology passes from vanishing to nonvanishing, and another where it passes back to vanishing. We give asymptotic formulas for the expectation of the Betti numbers in the sparser regimes, and bounds in the denser regimes.",
"We propose a measure of shape which is appropriate for the study of a complicated geometric structure, defined using the topology of neighborhoods of the structure. One aspect of this measure gives a new notion of fractal dimension. We demonstrate the utility and computability of this measure by applying it to branched polymers, Brownian trees, and self-avoiding random walks.",
"This expository article is based on a lecture from the Stanford Symposium on Algebraic Topology: Application and New Directions, held in honor of Gunnar Carlsson, Ralph Cohen, and Ib Madsen.",
"We discuss and review recent developments in the area of applied algebraic topology, such as persistent homology and barcodes. In particular, we discuss how these are related to understanding more about manifold learning from random point cloud data, the algebraic structure of simplicial complexes determined by random vertices and, in most detail, the algebraic topology of the excursion sets of random elds."
]
} |
1406.4692 | 2950800443 | Verification activities are necessary to ensure that the requirements are specified in a correct way. However, until now requirements verification research has focused on traditional up-front requirements. Agile or just-in-time requirements are by definition incomplete, not specific and might be ambiguous when initially specified, indicating a different notion of 'correctness'. We analyze how verification of agile requirements quality should be performed, based on literature of traditional and agile requirements. This leads to an agile quality framework, instantiated for the specific requirement types of feature requests in open source projects and user stories in agile projects. We have performed an initial qualitative validation of our framework for feature requests with eight practitioners from the Dutch agile community, receiving overall positive feedback. | Scacchi @cite_21 argues that requirements validation is a by-product, rather than an explicit goal, of how open source software (OSS) requirements are constituted, described, discussed, cross-referenced, and hyperlinked to other informal descriptions of a system and its implementations. From his study it appears that OSS requirements artifacts might be assessed in terms of virtues like 1) encouragement of community building; 2) freedom of expression and multiplicity of expression; 3) readability and ease of navigation; 4) and implicit versus explicit structures for organizing, storing and sharing OSS requirements. Virtue 3) and 4) above are covered in our framework, whereas virtue 1) and 2) should be achieved by a correct setup of the open source project (allow everyone to report feature requests and provide good means and an open atmosphere for discussing them). | {
"cite_N": [
"@cite_21"
],
"mid": [
"2147925798"
],
"abstract": [
"This study presents findings from an empirical study directed at understanding the roles, forms, and consequences arising in requirements for open source software (OSS) development efforts. Five open source software development communities are described, examined, and compared to help discover what differences may be observed. At least two dozen kinds of software informalisms are found to play a critical role in the elicitation, analysis, specification, validation, and management of requirements for developing OSS systems. Subsequently, understanding the roles these software informalisms take in a new formulation of the requirements development process for OSS is the focus of this study. This focus enables considering a reformulation of the requirements engineering process and its associated artifacts or (in)formalisms to better account for the requirements when developing OSS systems. Other findings identify how OSS requirements are decentralized across multiple informalisms, and point to the need for advances in how to specify the capabilities of existing OSS systems."
]
} |
1406.3225 | 90539580 | Multimodality can make (especially mobile) device interaction more efficient. Sensors and communication capabilities of modern smartphones and tablets lay the technical basis for its implementation. Still, mobile platforms do not make multimodal interaction support trivial. Building multimodal applications requires various APIs with different paradigms, high-level interpretation of contextual data, and a method for fusing individual inputs and outputs. To reduce this effort, we created a framework that simplifies and accelerates the creation of multimodal applications for prototyping and research. It provides an abstraction of information representations in different modalities, unifies access to implicit and explicit information, and wires together the logic behind context-sensitive modality switches. In the paper, we present the structure and features of our framework, and validate it by four implemented demonstrations of different complexity. | For rapid development of context-sensitive applications in the research context, several toolkits and frameworks have been presented, starting with the by @cite_1 . The toolkit by Fogarty and Hudson @cite_5 supports the development of applications that use sensor-based statistical models. Thereby, contextual information can be used to adapt certain settings automatically. With our framework, we focus on wiring of functionality, as often the impacting factors for a desired action are quite clear. There exist alternative approaches (e.g., support vector machines, Bayesian or neural networks), but machine learning often makes it hard to understand a certain action has been performed. Some toolkits address particular use cases, e.g. physical mobile interaction @cite_19 or proxemic interaction @cite_10 . Du and Wang @cite_16 developed a model and implementation framework to simulate context events with Symbian phones. 
The authors used JCop (Context-Oriented Programming) extensions to the Java language that support context-dependent execution for Android programming @cite_14 , which, however, complicates a quick integration into any existing Android project and working environment. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_19",
"@cite_5",
"@cite_16",
"@cite_10"
],
"mid": [
"2006015625",
"2163419627",
"656949909",
"2079397285",
"2090764929",
"2132168989"
],
"abstract": [
"The behavior of mobile applications is particularly affected by their execution context, such as location and state of the mobile device. Among other approaches, context-oriented programming can help to achieve context-dependent behavior without sacrificing modularity or adhering to a certain framework or library by enabling fine-grained adaptation of default behavior per control-flow. However, context information relevant for mobile applications is mostly defined by external events and sensor data rather than by code and control flow. To accommodate this, the JCop language provides a more declarative approach by pointcut-like adaptation rules. In this paper, we explain how we applied JCop to the development of Android applications for which we extended the language semantics for static contexts and modified the compiler. Additionally, we outline the successful implementation of a simple, proof-of-concept mobile application using our approach and report on promising early evaluation results.",
"Computing devices and applications are now used beyond the desktop, in diverse environments, and this trend toward ubiquitous computing is accelerating. One challenge that remains in this emerging research field is the ability to enhance the behavior of any application by informing it of the context of its use. By context, we refer to any information that characterizes a situation related to the interaction between humans, applications, and the surrounding environment. Context-aware applications promise richer and easier interaction, but the current state of research in this field is still far removed from that vision. This is due to 3 main problems: (a) the notion of context is still ill defined, (b) there is a lack of conceptual models and methods to help drive the design of context-aware applications, and (c) no tools are available to jump-start the development of context-aware applications. In this anchor article, we address these 3 problems in turn. We first define context, identify categories of contextual information, and characterize context-aware application behavior. Though the full impact of context-aware computing requires understanding very subtle and high-level notions of context, we are focusing our efforts on the pieces of context that can be inferred automatically from sensors in a physical environment. We then present a conceptual framework that separates the acquisition and representation of context from the delivery and reaction to context by a context-aware application. We have built a toolkit, the Context Toolkit, that instantiates this conceptual framework and supports the rapid development of a rich space of context-aware applications. We illustrate the usefulness of the conceptual framework by describing a number of context-aware applications that have been prototyped using the Context Toolkit. We also demonstrate how such a framework can support the investigation of important research challenges in the area of context-aware computing.",
"Mobile interactions with the real world, meaning a person using her mobile device as a mediator for the interaction with smart objects, is becoming more and more popular in industry and academia. Typical technologies supporting this kind of interactions are Radio Frequency Identification (RFID), Near Field Communication (NFC), visual marker recognition, or Bluetooth. Currently, there is only very little tool support for developing systems based on this kind of mobile interactions. In this paper we motivate such tool support, discuss its requirements and present the architecture and implementation of the Physical Mobile Interaction Framework (PMIF). This framework comprises several components that support different implementations of the interaction techniques touching, pointing, scanning and user-mediated object selection. In addition to that PMIF also abstracts from specific techniques and technologies and provides a generic framework for the uniform integration and simple use of the supported interaction techniques Furthermore we discuss seven prototypes that were implemented using PMIF which show the maturity of the framework.",
"Sensor based statistical models promise to support a variety of advances in human computer interaction, but building applications that use them is currently difficult and potential advances go unexplored. We present Subtle, a toolkit that removes some of the obstacles to developing and deploying applications using sensor based statistical models of human situations. Subtle provides an appropriate and extensible sensing library, continuous learning of personalized models, fully automated high level feature generation, and support for using learned models in deployed applications. By removing obstacles to developing and deploying sensor based statistical models, Subtle makes it easier to explore the design space surrounding sensor based statistical models of human situations. Subtle thus helps to move the focus of human computer interaction research onto applications and datasets, instead of the difficulties of developing and deploying sensor based statistical models.",
"This research aims at facilitating the development of context-aware application software for mobile devices by providing a programming model, an implementation framework and a development environment. The programming model provides a multi-layered software architecture for context-aware application programming. The model supports developers to define contexts, behaviors and context-behavior binding rules through specifications and automates generation of context-aware application code based on the specifications. The implementation framework is a backbone program that implements the programming model. It facilitates the development in reducing the effort on the common tasks of context-awareness and help developers focus on the application-specific components. The development environment provides a series of tools to support the development of context-aware applications. These tools simplify the development process and provide the developed applications with robustness and testability.",
"People naturally understand and use proxemic relationships (e.g., their distance and orientation towards others) in everyday situations. However, only few ubiquitous computing (ubicomp) systems interpret such proxemic relationships to mediate interaction (proxemic interaction). A technical problem is that developers find it challenging and tedious to access proxemic information from sensors. Our Proximity Toolkit solves this problem. It simplifies the exploration of interaction techniques by supplying fine-grained proxemic information between people, portable devices, large interactive surfaces, and other non-digital objects in a room-sized environment. The toolkit offers three key features. 1) It facilitates rapid prototyping of proxemic-aware systems by supplying developers with the orientation, distance, motion, identity, and location information between entities. 2) It includes various tools, such as a visual monitoring tool, that allows developers to visually observe, record and explore proxemic relationships in 3D space. (3) Its flexible architecture separates sensing hardware from the proxemic data model derived from these sensors, which means that a variety of sensing technologies can be substituted or combined to derive proxemic information. We illustrate the versatility of the toolkit with proxemic-aware systems built by students."
]
} |
1406.3225 | 90539580 | Multimodality can make (especially mobile) device interaction more efficient. Sensors and communication capabilities of modern smartphones and tablets lay the technical basis for its implementation. Still, mobile platforms do not make multimodal interaction support trivial. Building multimodal applications requires various APIs with different paradigms, high-level interpretation of contextual data, and a method for fusing individual inputs and outputs. To reduce this effort, we created a framework that simplifies and accelerates the creation of multimodal applications for prototyping and research. It provides an abstraction of information representations in different modalities, unifies access to implicit and explicit information, and wires together the logic behind context-sensitive modality switches. In the paper, we present the structure and features of our framework, and validate it by four implemented demonstrations of different complexity. | Preceding toolkits and frameworks for multimodality that have been presented in research are often focused and confined to specialized use cases, e.g., speech and gesture interaction with large screens @cite_8 , or multimodal interaction on a PC @cite_17 @cite_6 . The generalizability of such approaches is thus limited. The evaluation of multimodal behavior is conducted by e.g. finite automata @cite_21 or state machines @cite_9 . The latter approach enhances the Java Swing toolkit to facilitate novel input methods (e.g., multi-handed or pressure-sensitive input). @cite_15 showed a server-based task distribution and coordination approach, which however entails that the systems using it are not fully autonomous. @cite_29 present a multimodal web toolkit, however focusing on browser-based applications. These have limited capabilities compared to native applications, e.g., with relation to hardware and sensor access on mobile devices. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_29",
"@cite_6",
"@cite_15",
"@cite_17"
],
"mid": [
"2124144842",
"2121624795",
"2025678693",
"29843086",
"",
"2144007439",
"101429486"
],
"abstract": [
"This paper presents a framework for designing a natural multimodal human computer interaction (HCI) system. The core of the proposed framework is a principled method for combining information derived from audio and visual cues. To achieve natural interaction, both audio and visual modalities are fused along with feedback through a large screen display. Careful design along with due considerations of possible aspects of a systems interaction cycle and integration has resulted in a successful system. The performance of the proposed framework has been validated through the development of several prototype systems as well as commercial applications for the retail and entertainment industry. To assess the impact of these multimodal systems (MMS), informal studies have been conducted. It was found that the system performed according to its specifications in 95 of the cases and that users showed ad-hoc proficiency, indicating natural acceptance of such systems.",
"This article describes SwingStates, a Java toolkit designed to facilitate the development of graphical user interfaces and bring advanced interaction techniques to the Java platform. SwingStates is based on the use of finite-state machines specified directly in Java to describe the behavior of interactive systems. State machines can be used to redefine the behavior of existing Swing widgets or, in combination with a new canvas widget that features a rich graphical model, to create brand new widgets. SwingStates also supports arbitrary input devices to implement novel interaction techniques based, for example, on bi-manual or pressure-sensitive input. We have used SwingStates in several Master's-level classes over the past two years and have developed a benchmark approach to evaluate the toolkit in this context. The results demonstrate that SwingStates can be used by non-expert developers with little training to successfully implement advanced interaction techniques. Copyright © 2007 John Wiley & Sons, Ltd.",
"Despite the availability of multimodal devices, there are very few commercial multimodal applications available. One reason for this may be the lack of a framework to support development of multimodal applications in reasonable time and with limited resources. This paper describes a multimodal framework enabling rapid development of applications using a variety of modalities and methods for ambiguity resolution, featuring a novel approach to multimodal fusion. An example application is studied that was created using the framework.",
"This paper presents a set of tools to support multimodal adaptive Web applications. The contributions include a novel solution for generating multimodal interactive applications, which can be executed in any browser-enabled device; and run-time support for obtaining multimodal adaptations at various granularity levels, which can be specified through a language for adaptation rules. The architecture is able to exploit model-based user interface descriptions and adaptation rules in order to achieve adaptive behaviour that can be triggered by dynamic changes in the context of use. We also report on an example application and a user test concerning adaptation rules changing dynamically its multimodality.",
"",
"A growing class of smartphone applications are tasking applications that run continuously, process data from sensors to determine the user's context (such as location) and activity, and optionally trigger certain actions when the right conditions occur. Many such tasking applications also involve coordination between multiple users or devices. Example tasking applications include location-based reminders, changing the ring-mode of a phone automatically depending on location, notifying when friends are nearby, disabling WiFi in favor of cellular data when moving at more than a certain speed outdoors, automatically tracking and storing movement tracks when driving, and inferring the number of steps walked each day. Today, these applications are non-trivial to develop, although they are often trivial for end users to state. Additionally, simple implementations can consume excessive amounts of energy. This paper proposes Code in the Air (CITA), a system which simplifies the rapid development of tasking applications. It enables non-expert end users to easily express simple tasks on their phone, and more sophisticated developers to write code for complex tasks by writing purely server-side scripts. CITA provides a task execution framework to automatically distribute and coordinate tasks, energy-efficient modules to infer user activities and compose them, and a push communication service for mobile devices that overcomes some shortcomings in existing push services.",
"Designing and implementing applications that can handle multiple recognition-based interaction technologies such as speech and gesture inputs is a difficult task. IMBuilder and MEngine are the two components of a new toolkit for rapidly creating and testing multimodal interface designs. First, an interaction model is specified in the form of a collection of finite state machines, using a simple graphical tool (IMBuilder). Then, this interaction model can be tested in a multimodal framework (MEngine) that automatically performs input recognition (speech and gesture) and modality integration. Developers can build complete multimodal applications without concerning themselves with the recognition engine internals and modality integration. Furthermore, several interaction models can be rapidly tested in order to achieve the best use and combination of input modalities with minimal implementation effort."
]
} |
1406.3161 | 2021757921 | Adaptive streaming addresses the increasing and heterogeneous demand of multimedia content over the Internet by offering several encoded versions for each video sequence. Each version (or representation) is characterized by a resolution and a bit rate, and it is aimed at a specific set of users, like TV or mobile phone clients. While most existing works on adaptive streaming deal with effective playout-buffer control strategies on the client side, in this article we take a providers' perspective and propose solutions to improve user satisfaction by optimizing the set of available representations. We formulate an integer linear program that maximizes users' average satisfaction, taking into account network dynamics, type of video content, and user population characteristics. The solution of the optimization is a set of encoding parameters corresponding to the representations set that maximizes user satisfaction. We evaluate this solution by simulating multiple adaptive streaming sessions characterized by realistic network statistics, showing that the proposed solution outperforms commonly used vendor recommendations, in terms of user satisfaction but also in terms of fairness and outage probability. The simulation results show that video content information as well as network constraints and users' statistics play a crucial role in selecting proper encoding parameters to provide fairness among users and to reduce network resource usage. We finally propose a few theoretical guidelines that can be used, in realistic settings, to choose the encoding parameters based on the user characteristics, the network capacity and the type of video content. | During the last decade, adaptive streaming has been an active research area, with most efforts aimed at developing server-controlled streaming solutions. Recently, a client-driven approach, based on HTTP-adaptive streaming @cite_7 @cite_1 , has gained popularity and attention. 
In this new paradigm, the clients decide which segments to get and when to request them, and the server mainly responds to the clients' requests. Different implementations of this new architecture have been proposed in various commercial DASH players @cite_10 . | {
"cite_N": [
"@cite_10",
"@cite_1",
"@cite_7"
],
"mid": [
"2157326593",
"2127034683",
"1972191802"
],
"abstract": [
"Adaptive (video) streaming over HTTP is gradually being adopted by content and network service providers, as it offers significant advantages in terms of both user-perceived quality and resource utilization. In this paper, we first focus on the rate-adaptation mechanisms of adaptive streaming and experimentally evaluate two major commercial players (Smooth Streaming and Netflix) and one open-source player (Adobe's OSMF). We first examine how the previous three players react to persistent and short-term changes in the underlying network available bandwidth. Do they quickly converge to the maximum sustainable bitrate? We identify major differences between the three players and significant inefficiencies in each of them. We then propose a new adaptation algorithm, referred to as AdapTech Streaming, which aims to address the problems with the previous three players. In the second part of the paper, we consider the following two questions. First, what happens when two adaptive video players compete for available bandwidth in the bottleneck link? Can they share that resource in a stable and fair manner? And second, how does adaptive streaming perform with live content? Is the player able to sustain a short playback delay, keeping the viewing experience ''live''?",
"MPEG has recently finalized a new standard to enable dynamic and adaptive streaming of media over HTTP. This standard aims to address the interoperability needs between devices and servers of various vendors. There is broad industry support for this new standard, which offers the promise of transforming the media-streaming landscape.",
"In this paper, we provide some insight and background into the Dynamic Adaptive Streaming over HTTP (DASH) specifications as available from 3GPP and in draft version also from MPEG. Specifically, the 3GPP version provides a normative description of a Media Presentation, the formats of a Segment, and the delivery protocol. In addition, it adds an informative description on how a DASH Client may use the provided information to establish a streaming service for the user. The solution supports different service types (e.g., On-Demand, Live, Time-Shift Viewing), different features (e.g., adaptive bitrate switching, multiple language support, ad insertion, trick modes, DRM) and different deployment options. Design principles and examples are provided."
]
} |
1406.3655 | 235856118 | We study the problem of evaluating a discrete function by adaptively querying the values of its variables until the values read uniquely determine the value of the function. Reading the value of a variable is done at the expense of some cost, and the goal is to design a strategy (decision tree) for evaluating the function incurring as little cost as possible in the worst case or in expectation (according to a prior distribution on the possible variables assignments). Except for particular cases of the problem, in general, only the minimization of one of these two measures is addressed in the literature. However, there are instances of the problem for which the minimization of one measure leads to a strategy with a high cost with respect to the other measure (even exponentially bigger than the optimal). We provide a new construction which can guarantee a trade-off between the two criteria. More precisely, given a decision tree guaranteeing expected cost @math and a decision tree guaranteeing worst cost @math our method can guarantee for any chosen trade-off value @math to produce a decision tree whose worst cost is @math and whose expected cost is @math These bounds are improved for the relevant case of uniform testing costs. Motivated by applications, we also study a variant of the problem where the cost of reading a variable depends on the variable's value. We provide an @math approximation algorithm for the minimization of the worst cost measure, which is best possible under the assumption @math . | In a recent paper @cite_11 , the authors show that for any instance @math of the DFEP, with @math objects, it is possible to construct in polynomial time a decision tree @math such that @math is @math and @math is @math , where @math and @math are, respectively, the minimum expected testing cost and the minimum worst testing cost for instance @math . | {
"cite_N": [
"@cite_11"
],
"mid": [
"2128028322"
],
"abstract": [
"In several applications of automatic diagnosis and active learning a central problem is the evaluation of a discrete function by adaptively querying the values of its variables until the values read uniquely determine the value of the function. In general reading the value of a variable is done at the expense of some cost (computational or possibly a fee to pay the corresponding experiment). The goal is to design a strategy for evaluating the function incurring little cost (in the worst case or in expectation according to a prior distribution on the possible variables' assignments). Our algorithm builds a strategy (decision tree) which attains a logarithmic approximation simultaneously for the expected and worst cost spent. This is best possible since, under standard complexity assumption, no algorithm can guarantee o(log n) approximation."
]
} |
1406.3655 | 235856118 | We study the problem of evaluating a discrete function by adaptively querying the values of its variables until the values read uniquely determine the value of the function. Reading the value of a variable is done at the expense of some cost, and the goal is to design a strategy (decision tree) for evaluating the function incurring as little cost as possible in the worst case or in expectation (according to a prior distribution on the possible variables assignments). Except for particular cases of the problem, in general, only the minimization of one of these two measures is addressed in the literature. However, there are instances of the problem for which the minimization of one measure leads to a strategy with a high cost with respect to the other measure (even exponentially bigger than the optimal). We provide a new construction which can guarantee a trade-off between the two criteria. More precisely, given a decision tree guaranteeing expected cost @math and a decision tree guaranteeing worst cost @math our method can guarantee for any chosen trade-off value @math to produce a decision tree whose worst cost is @math and whose expected cost is @math These bounds are improved for the relevant case of uniform testing costs. Motivated by applications, we also study a variant of the problem where the cost of reading a variable depends on the variable's value. We provide an @math approximation algorithm for the minimization of the worst cost measure, which is best possible under the assumption @math . | Note that the questions we are studying here are different and possibly more fundamental than those studied in @cite_11 : is it possible, even allowing exponential construction time, to build a decision tree whose expected cost is very close to the best possible expected cost achievable and whose worst testing cost is very close to the best possible worst case achievable? 
How close can we get, or, better, what is the best trade-off we can simultaneously guarantee? | {
"cite_N": [
"@cite_11"
],
"mid": [
"2128028322"
],
"abstract": [
"In several applications of automatic diagnosis and active learning a central problem is the evaluation of a discrete function by adaptively querying the values of its variables until the values read uniquely determine the value of the function. In general reading the value of a variable is done at the expense of some cost (computational or possibly a fee to pay the corresponding experiment). The goal is to design a strategy for evaluating the function incurring little cost (in the worst case or in expectation according to a prior distribution on the possible variables' assignments). Our algorithm builds a strategy (decision tree) which attains a logarithmic approximation simultaneously for the expected and worst cost spent. This is best possible since, under standard complexity assumption, no algorithm can guarantee o(log n) approximation."
]
} |
1406.3655 | 235856118 | We study the problem of evaluating a discrete function by adaptively querying the values of its variables until the values read uniquely determine the value of the function. Reading the value of a variable is done at the expense of some cost, and the goal is to design a strategy (decision tree) for evaluating the function incurring as little cost as possible in the worst case or in expectation (according to a prior distribution on the possible variables assignments). Except for particular cases of the problem, in general, only the minimization of one of these two measures is addressed in the literature. However, there are instances of the problem for which the minimization of one measure leads to a strategy with a high cost with respect to the other measure (even exponentially bigger than the optimal). We provide a new construction which can guarantee a trade-off between the two criteria. More precisely, given a decision tree guaranteeing expected cost @math and a decision tree guaranteeing worst cost @math our method can guarantee for any chosen trade-off value @math to produce a decision tree whose worst cost is @math and whose expected cost is @math These bounds are improved for the relevant case of uniform testing costs. Motivated by applications, we also study a variant of the problem where the cost of reading a variable depends on the variable's value. We provide an @math approximation algorithm for the minimization of the worst cost measure, which is best possible under the assumption @math . | A number of algorithms with different time complexities were proposed to construct decision trees with minimum expected path length (expected testing cost in DFEP terminology) among the decision trees with depth (worst testing cost) at most @math , where @math is a given integer @cite_1 @cite_18 @cite_15 . | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_15"
],
"mid": [
"2084029055",
"2013021781",
""
],
"abstract": [
"An algorithm that constructs an optimal height-restricted binary tree for a set of n weights in @math time, where L is the maximum permitted height, is presented. This is an improvement over the fastest previously known algorithm, which requires @math time. The algorithm is a hybrid, combining a technique by Hu and Tan with a technique by Michael Garey.",
"An algorithm is given for constructing a binary tree of minimum weighted path length for n nonnegative weights under the constraint that no path length exceed a given bound L. The number of operations required is proportional to @math . Such problems, which impose an additional constraint on the usual Huffman tree, arise in many applications, including computer file searching and the construction of optimal prefix codes under certain practical conditions.",
""
]
} |
1406.3655 | 235856118 | We study the problem of evaluating a discrete function by adaptively querying the values of its variables until the values read uniquely determine the value of the function. Reading the value of a variable is done at the expense of some cost, and the goal is to design a strategy (decision tree) for evaluating the function incurring as little cost as possible in the worst case or in expectation (according to a prior distribution on the possible variables' assignments). Except for particular cases of the problem, in general, only the minimization of one of these two measures is addressed in the literature. However, there are instances of the problem for which the minimization of one measure leads to a strategy with a high cost with respect to the other measure (even exponentially bigger than the optimal). We provide a new construction which can guarantee a trade-off between the two criteria. More precisely, given a decision tree guaranteeing expected cost @math and a decision tree guaranteeing worst cost @math , our method can guarantee, for any chosen trade-off value @math , to produce a decision tree whose worst cost is @math and whose expected cost is @math . These bounds are improved for the relevant case of uniform testing costs. Motivated by applications, we also study a variant of the problem where the cost of reading a variable depends on the variable's value. We provide an @math -approximation algorithm for the minimization of the worst cost measure, which is best possible under the assumption @math . | When the goal is to minimize only one measure (worst or expected testing cost), there are several algorithms in the literature to solve the particular version of the @math in which each object belongs to a distinct class ( @cite_9 @cite_10 @cite_7 @cite_12 @cite_4 @cite_3 @cite_8 @cite_2 ).
Approximation algorithms for the general version of the problem, where the number of classes can be smaller than the number of objects, were presented by @cite_16 , @cite_5 and @cite_11 . For the minimization of the worst testing cost of DFEP, Moshkov has studied the problem in the general case of multiway tests and non-uniform costs and provided an @math -approximation in @cite_19 . Our algorithm in Section 3 generalizes Moshkov's algorithm to the value-dependent-test-cost variant of the DFEP. Moshkov @cite_19 also proved that no @math -approximation algorithm is possible under the standard complexity assumption @math . The minimization of the worst testing cost is also investigated in @cite_6 under the framework of covering and learning. Both @cite_16 and @cite_5 show @math approximations for the expected testing cost (where @math is the minimum probability among the objects in @math ) -- the former for binary tests, and the latter for multiway tests. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2950315042",
"2154838933",
"2951839510",
"",
"1522510201",
"2200983898",
"",
"2152124311",
"2951249809",
"2072001382",
"1843157236",
"1811083715",
"2128028322"
],
"abstract": [
"We analyze the expected cost of a greedy active learning algorithm. Our analysis extends previous work to a more general setting in which different queries have different costs. Moreover, queries may have more than two possible responses and the distribution over hypotheses may be non uniform. Specific applications include active learning with label costs, active learning for multiclass and partial label queries, and batch mode active learning. We also discuss an approximate version of interest when there are very many queries.",
"We consider the problem of constructing decision trees for entity identification from a given relational table. The input is a table containing information about a set of entities over a fixed set of attributes and a probability distribution over the set of entities that specifies the likelihood of the occurrence of each entity. The goal is to construct a decision tree that identifies each entity unambiguously by testing the attribute values such that the average number of tests is minimized. This classical problem finds such diverse applications as efficient fault detection, species identification in biology, and efficient diagnosis in the field of medicine. Prior work mainly deals with the special case where the input table is binary and the probability distribution over the set of entities is uniform. We study the general problem involving arbitrary input tables and arbitrary probability distributions over the set of entities. We consider a natural greedy algorithm and prove an approximation guarantee of O(r_K • log N), where N is the number of entities and K is the maximum number of distinct values of an attribute. The value r_K is a suitably defined Ramsey number, which is at most log K. We show that it is NP-hard to approximate the problem within a factor of Ω(log N), even for binary tables (i.e. K=2). Thus, for the case of binary tables, our approximation algorithm is optimal up to constant factors (since r_2 = 2). In addition, our analysis indicates a possible way of resolving a Ramsey-theoretic conjecture by Erdos.",
"We introduce a natural generalization of submodular set cover and exact active learning with a finite hypothesis class (query learning). We call this new problem interactive submodular set cover. Applications include advertising in social networks with hidden information. We give an approximation guarantee for a novel greedy algorithm and give a hardness of approximation result which matches up to constant factors. We also discuss negative results for simpler approaches and present encouraging early experimental results.",
"",
"In the general search problem we want to identify a specific element using a set of allowed tests. The general goal is to minimize the number of tests performed, although different measures are used to capture this goal. In this work we introduce a novel greedy approach that achieves the best known approximation ratios simultaneously for many different variations of this identification problem. In addition to this flexibility, our algorithm admits much shorter and simpler analyses than previous greedy strategies. As a second contribution, we investigate the potential of greedy algorithms for the more restricted problem of identifying elements of partially ordered sets by comparison with other elements. We prove that the latter problem is as hard to approximate as the general identification problem. As a positive result, we show that a natural greedy strategy achieves an approximation ratio of 2 for tree-like posets, improving upon the previously best known 14-approximation for this problem.",
"We study simultaneous learning and covering problems: submodular set cover problems that depend on the solution to an active (query) learning problem. The goal is to jointly minimize the cost of both learning and covering. We extend recent work in this setting to allow for a limited amount of adversarial noise. Certain noisy query learning problems are a special case of our problem. Crucial to our analysis is a lemma showing the logical OR of two submodular cover constraints can be reduced to a single submodular set cover constraint. Combined with known results, this new lemma allows for arbitrary monotone circuits of submodular cover constraints to be reduced to a single constraint. As an example practical application, we present a movie recommendation website that minimizes the total cost of learning what the user wants to watch and recommending a set of movies.",
"",
"We consider the problem of constructing optimal decision trees: given a collection of tests which can disambiguate between a set of m possible diseases, each test having a cost, and the a-priori likelihood of the patient having any particular disease, what is a good adaptive strategy to perform these tests to minimize the expected cost to identify the disease? We settle the approximability of this problem by giving a tight O(logm)-approximation algorithm. We also consider a more substantial generalization, the Adaptive TSP problem, which can be used to model switching costs between tests in the optimal decision tree problem. Given an underlying metric space, a random subset S of cities is drawn from a known distribution, but S is initially unknown to us--we get information about whether any city is in S only when we visit the city in question. What is a good adaptive way of visiting all the cities in the random subset S while minimizing the expected distance traveled? For this adaptive TSP problem, we give the first poly-logarithmic approximation, and show that this algorithm is best possible unless we can improve the approximation guarantees for the well-known group Steiner tree problem.",
"We tackle the fundamental problem of Bayesian active learning with noise, where we need to adaptively select from a number of expensive tests in order to identify an unknown hypothesis sampled from a known prior distribution. In the case of noise-free observations, a greedy algorithm called generalized binary search (GBS) is known to perform near-optimally. We show that if the observations are noisy, perhaps surprisingly, GBS can perform very poorly. We develop EC2, a novel, greedy active learning algorithm and prove that it is competitive with the optimal policy, thus obtaining the first competitiveness guarantees for Bayesian active learning with noisy observations. Our bounds rely on a recently discovered diminishing returns property called adaptive submodularity, generalizing the classical notion of submodular set functions to adaptive policies. Our results hold even if the tests have non-uniform cost and their noise is correlated. We also propose EffECXtive, a particularly fast approximation of EC2, and evaluate it on a Bayesian experimental design problem involving human subjects, intended to tease apart competing economic theories of how people make decisions under uncertainty.",
"In applications such as active learning and disease fault diagnosis, one often encounters the problem of identifying an unknown object through a minimal number of queries. This problem has been referred to as query learning or object entity identification. We consider three extensions of this fundamental problem that are motivated by practical considerations in real-world, time-critical identification tasks such as emergency response. First, we consider the problem where the objects are partitioned into groups, and the goal is to identify only the group to which the object belongs. Second, we address the situation where the queries are partitioned into groups, and an algorithm may suggest a group of queries to a human user, who then selects the actual query. Third, we consider the problem of object identification in the presence of persistent query noise, and relate it to group identification. To address these problems we show that a standard algorithm for object identification, known as generalized binary search, may be viewed as a generalization of Shannon-Fano coding. We then extend this result to the group-based settings, leading to new algorithms, whose performance is demonstrated through a logarithmic approximation bound, and through experiments on simulated data and a database used for toxic chemical identification.",
"We introduce and study a problem that we refer to as the optimal split tree problem. The problem generalizes a number of problems including two classical tree construction problems including the Huffman tree problem and the optimal alphabetic tree. We show that the general split tree problem is NP-complete and analyze a greedy algorithm for its solution. We show that a simple modification of the greedy algorithm guarantees O(log n) approximation ratio. We construct an example for which this algorithm achieves Ω(log n log log n) approximation ratio. We show that if all weights are equal and the optimal split tree is of depth O(log n), then the greedy algorithm guarantees O(log n log log n) approximation ratio. We also extend our approximation algorithm to the construction of a search tree for partially ordered sets.",
"We give a (ln n + 1)-approximation for the decision tree (DT) problem. An instance of DT is a set of m binary tests T = (T_1, ..., T_m) and a set of n items X = (X_1, ..., X_n). The goal is to output a binary tree where each internal node is a test, each leaf is an item and the total external path length of the tree is minimized. Total external path length is the sum of the depths of all the leaves in the tree. DT has a long history in computer science with applications ranging from medical diagnosis to experiment design. It also generalizes the problem of finding optimal average-case search strategies in partially ordered sets which includes several alphabetic tree problems. Our work decreases the previous upper bound on the approximation ratio by a constant factor. We provide a new analysis of the greedy algorithm that uses a simple accounting scheme to spread the cost of a tree among pairs of items split at a particular node. We conclude by showing that our upper bound also holds for the DT problem with weighted tests.",
"In several applications of automatic diagnosis and active learning a central problem is the evaluation of a discrete function by adaptively querying the values of its variables until the values read uniquely determine the value of the function. In general reading the value of a variable is done at the expense of some cost (computational or possibly a fee to pay the corresponding experiment). The goal is to design a strategy for evaluating the function incurring little cost (in the worst case or in expectation according to a prior distribution on the possible variables' assignments). Our algorithm builds a strategy (decision tree) which attains a logarithmic approximation simultaneously for the expected and worst cost spent. This is best possible since, under standard complexity assumption, no algorithm can guarantee o(log n) approximation."
]
} |
1406.3191 | 2295334878 | Space and time are two critical components of many real world systems. For this reason, analysis of anomalies in spatiotemporal data has been of great interest. In this work, the application of tensor decomposition and eigenspace techniques to spatiotemporal hotspot detection is investigated. An algorithm called SST-Hotspot is proposed which accounts for spatiotemporal variations in data and detects hotspots by matching eigenvector elements of the two cases and population tensors. The experimental results reveal the interesting application of tensor decomposition and eigenvector-based techniques in hotspot analysis. | Related spatiotemporal techniques can be divided into two main categories: scan statistics and clustering-based techniques. Clustering-based approaches @cite_16 @cite_12 @cite_6 @cite_8 @cite_7 are based on the idea that thresholds are first inferred from the population data, and the estimated thresholds are then applied to the clustering of data points in the cases data. Clustering-based approaches have their own limitations and strengths. Their prominent benefit is that they provide the exact shape of clusters, as opposed to scan-statistics-based methods, where clusters must take a regular shape and are therefore less realistic. On the other hand, handling complex data is not straightforward for clustering-based techniques. | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_6",
"@cite_16",
"@cite_12"
],
"mid": [
"",
"1971022913",
"1992419399",
"2115715042",
"2124543451"
],
"abstract": [
"",
"This paper presents a new density-based clustering algorithm, ST-DBSCAN, which is based on DBSCAN. We propose three marginal extensions to DBSCAN related with the identification of (i) core objects, (ii) noise objects, and (iii) adjacent clusters. In contrast to the existing density-based clustering algorithms, our algorithm has the ability of discovering clusters according to non-spatial, spatial and temporal values of the objects. In this paper, we also present a spatial-temporal data warehouse system designed for storing and clustering a wide range of spatial-temporal data. We show an implementation of our algorithm by using this data warehouse and present the data mining results.",
"Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters). The clustering problem has been addressed in many contexts and by researchers in many disciplines; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. However, clustering is a difficult problem combinatorially, and differences in assumptions and contexts in different communities have made the transfer of useful generic concepts and methodologies slow to occur. This paper presents an overview of pattern clustering methods from a statistical pattern recognition perspective, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners. We present a taxonomy of clustering techniques, and identify cross-cutting themes and recent advances. We also describe some important applications of clustering algorithms such as image segmentation, object recognition, and information retrieval.",
"CrimeStat is a spatial statistics program used in crime mapping. The program inputs incident or point locations and outputs statistics that can be displayed graphically in a geographic information systems (GIS) program. Among the routines are those for summary spatial description, hot spot analysis, interpolation, space–time analysis, and journey-to-crime modeling. Version 3.0 has a crime travel demand module for analyzing travel patterns over a metropolitan area. The program and documentation are distributed by the National Institute of Justice.",
"Security informatics is an emerging field of study focusing on the development and evaluation of advanced information technologies and systems for national and homeland security-related applications. Spatio-temporal hotspot analysis is an important component of security informatics since location and time are two critical aspects of most security-related events. The outputs of such analyses can provide useful information to guide the activities aimed at preventing, detecting, and responding to security problems. This paper reports a computational study carried out to evaluate the effectiveness of two prominent spatio-temporal hotspot analysis techniques, i.e., scan statistics and risk-adjusted clustering, in two selected security-related applications including infectious disease informatics and crime analysis. This paper also proposes a new technique based on support vector machines. Preliminary experiments have demonstrated positively that this new approach can be a viable analysis alternative in security informatics."
]
} |
1406.3692 | 2951720894 | Spear phishing is a complex targeted attack in which, an attacker harvests information about the victim prior to the attack. This information is then used to create sophisticated, genuine-looking attack vectors, drawing the victim to compromise confidential information. What makes spear phishing different, and more powerful than normal phishing, is this contextual information about the victim. Online social media services can be one such source for gathering vital information about an individual. In this paper, we characterize and examine a true positive dataset of spear phishing, spam, and normal phishing emails from Symantec's enterprise email scanning service. We then present a model to detect spear phishing emails sent to employees of 14 international organizations, by using social features extracted from LinkedIn. Our dataset consists of 4,742 targeted attack emails sent to 2,434 victims, and 9,353 non targeted attack emails sent to 5,912 non victims; and publicly available information from their LinkedIn profiles. We applied various machine learning algorithms to this labeled data, and achieved an overall maximum accuracy of 97.76 in identifying spear phishing emails. We used a combination of social features from LinkedIn profiles, and stylometric features extracted from email subjects, bodies, and attachments. However, we achieved a slightly better accuracy of 98.28 without the social features. Our analysis revealed that social features extracted from LinkedIn do not help in identifying spear phishing emails. To the best of our knowledge, this is one of the first attempts to make use of a combination of stylometric features extracted from emails, and social features extracted from an online social network to detect targeted spear phishing emails. | provided the first empirical evidence about which malicious strategies are successful at deceiving general users @cite_15 . 
conducted a series of studies and experiments on creating and evaluating techniques for teaching people not to fall for phish @cite_23 @cite_10 @cite_16 . Lee studied data from Symantec's enterprise email scanning service, and calculated the odds ratio of being attacked for these users, based on their area of work. The results of this work indicated that users with subjects , and were both positively correlated with targeted attacks at more than 95 | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_23"
],
"mid": [
"2131906261",
"2071869991",
"1983581110",
"2162532690"
],
"abstract": [
"To build systems shielding users from fraudulent (or phishing) websites, designers need to know which attack strategies work and why. This paper provides the first empirical evidence about which malicious strategies are successful at deceiving general users. We first analyzed a large set of captured phishing attacks and developed a set of hypotheses about why these strategies might work. We then assessed these hypotheses with a usability study in which 22 participants were shown 20 web sites and asked to determine which ones were fraudulent. We found that 23 of the participants did not look at browser-based cues such as the address bar, status bar and the security indicators, leading to incorrect choices 40 of the time. We also found that some visual deception attacks can fool even the most sophisticated users. These results illustrate that standard security indicators are not effective for a substantial fraction of users, and suggest that alternative approaches are needed.",
"Phishing attacks, in which criminals lure Internet users to Web sites that spoof legitimate Web sites, are occurring with increasing frequency and are causing considerable harm to victims. While a great deal of effort has been devoted to solving the phishing problem by prevention and detection of phishing emails and phishing Web sites, little research has been done in the area of training users to recognize those attacks. Our research focuses on educating users about phishing and helping them make better trust decisions. We identified a number of challenges for end-user security education in general and anti-phishing education in particular: users are not motivated to learn about security; for most users, security is a secondary task; it is difficult to teach people to identify security threats without also increasing their tendency to misjudge nonthreats as threats. Keeping these challenges in mind, we developed an email-based anti-phishing education system called “PhishGuru” and an online game called “Anti-Phishing Phil” that teaches users how to use cues in URLs to avoid falling for phishing attacks. We applied learning science instructional principles in the design of PhishGuru and Anti-Phishing Phil. In this article we present the results of PhishGuru and Anti-Phishing Phil user studies that demonstrate the effectiveness of these tools. Our results suggest that, while automated detection systems should be used as the first line of defense against phishing attacks, user education offers a complementary approach to help people better recognize fraudulent emails and websites.",
"Phishing attacks, in which criminals lure Internet users to websites that impersonate legitimate sites, are occurring with increasing frequency and are causing considerable harm to victims. In this paper we describe the design and evaluation of an embedded training email system that teaches people about phishing during their normal use of email. We conducted lab experiments contrasting the effectiveness of standard security notices about phishing with two embedded training designs we developed. We found that embedded training works better than the current practice of sending security notices. We also derived sound design principles for embedded training systems.",
"In this paper we present the results of a roleplay survey instrument administered to 1001 online survey respondents to study both the relationship between demographics and phishing susceptibility and the effectiveness of several anti-phishing educational materials. Our results suggest that women are more susceptible than men to phishing and participants between the ages of 18 and 25 are more susceptible to phishing than other age groups. We explain these demographic factors through a mediation analysis. Educational materials reduced users' tendency to enter information into phishing webpages by 40 percent; however, some of the educational materials we tested also slightly decreased participants' tendency to click on legitimate links."
]
} |
1406.3692 | 2951720894 | Spear phishing is a complex targeted attack in which, an attacker harvests information about the victim prior to the attack. This information is then used to create sophisticated, genuine-looking attack vectors, drawing the victim to compromise confidential information. What makes spear phishing different, and more powerful than normal phishing, is this contextual information about the victim. Online social media services can be one such source for gathering vital information about an individual. In this paper, we characterize and examine a true positive dataset of spear phishing, spam, and normal phishing emails from Symantec's enterprise email scanning service. We then present a model to detect spear phishing emails sent to employees of 14 international organizations, by using social features extracted from LinkedIn. Our dataset consists of 4,742 targeted attack emails sent to 2,434 victims, and 9,353 non targeted attack emails sent to 5,912 non victims; and publicly available information from their LinkedIn profiles. We applied various machine learning algorithms to this labeled data, and achieved an overall maximum accuracy of 97.76 in identifying spear phishing emails. We used a combination of social features from LinkedIn profiles, and stylometric features extracted from email subjects, bodies, and attachments. However, we achieved a slightly better accuracy of 98.28 without the social features. Our analysis revealed that social features extracted from LinkedIn do not help in identifying spear phishing emails. To the best of our knowledge, this is one of the first attempts to make use of a combination of stylometric features extracted from emails, and social features extracted from an online social network to detect targeted spear phishing emails. 
| To keep this work focused, we concentrate only on techniques proposed for detecting phishing emails; we do not cover all the techniques used for detecting phishing URLs or phishing websites in general. Abu- @cite_9 studied the performance of different classifiers used in text mining such as Logistic regression, classification and regression trees, Bayesian additive regression trees, Support Vector Machines, Random forests, and Neural networks. Their dataset consisted of a public collection of about 1,700 phishing mails, and 1,700 legitimate mails from private mailboxes. They focused on the richness of words to classify phishing emails based on 43 keywords. The features represent the frequency of "bag-of-words" that appear in phishing and legitimate emails. However, the ever-evolving techniques and language used in phishing emails might make it hard for this approach to be effective over a long period of time. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2002964284"
],
"abstract": [
"There are many applications available for phishing detection. However, unlike predicting spam, there are only few studies that compare machine learning techniques in predicting phishing. The present study compares the predictive accuracy of several machine learning methods including Logistic Regression (LR), Classification and Regression Trees (CART), Bayesian Additive Regression Trees (BART), Support Vector Machines (SVM), Random Forests (RF), and Neural Networks (NNet) for predicting phishing emails. A data set of 2889 phishing and legitimate emails is used in the comparative study. In addition, 43 features are used to train and test the classifiers."
]
} |