aid: string (9–15 chars)
mid: string (7–10 chars)
abstract: string (78–2.56k chars)
related_work: string (92–1.77k chars)
ref_abstract: dict
1504.06165
2282361021
Matrix factorization has found incredible success and widespread application as a collaborative filtering based approach to recommendations. Unfortunately, incorporating additional sources of evidence, especially ones that are incomplete and noisy, is quite difficult to achieve in such models, yet is often crucial for obtaining further gains in accuracy. For example, additional information about businesses from reviews, categories, and attributes should be leveraged for predicting user preferences, even though this information is often inaccurate and partially observed. Instead of creating customized methods specific to each type of evidence, in this paper we present a generic approach to factorization of relational data that collectively models all the relations in the database. By learning a set of embeddings that are shared across all the relations, the model is able to incorporate observed information from all the relations, while also predicting all the relations of interest. Our evaluation on multiple Amazon and Yelp datasets demonstrates effective utilization of additional information for held-out preference prediction; further, we obtain accurate models even for cold-start businesses and products for which we observe no ratings or reviews. We also illustrate the capability of the model in imputing missing information and jointly visualizing word, category, and attribute factors.
The idea of using low-dimensional vectors as latent factors has found widespread use in recommendation systems. The task of suggesting items to users is traditionally viewed as matrix completion, where the sparse rating matrix, with users as rows and items as columns, is to be completed with predicted ratings. Early work showed how Singular Value Decomposition (SVD) can decompose the rating matrix into low-rank feature matrices, reducing its dimensionality. This gave rise to the widely used matrix factorization techniques for predicting ratings @cite_9 , in which the user and item factors capture the similarities amongst them. Conventional matrix factorization techniques predict ratings directly as the dot product of the user and item factors, and optimize a regularized least-squares loss. Our model, however, adopts the probabilistic interpretation of matrix factorization @cite_18 and uses the sigmoid function with a log-likelihood loss, which generalizes PCA to binary matrices @cite_14 .
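The binary-matrix formulation above can be made concrete with a small sketch: predict each observed cell through a sigmoid of the dot product of the user and item factors, and ascend the Bernoulli log-likelihood by gradient steps. This is an illustrative toy on random data (dimensions, step size, and regularization weight are hypothetical), not the paper's collective model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary user-item matrix (1 = observed interaction), plus a mask
# marking which entries are observed during training.
R = (rng.random((6, 5)) > 0.5).astype(float)
mask = rng.random((6, 5)) > 0.3

k, lr, lam = 3, 0.5, 0.01               # latent dim, step size, L2 weight
U = 0.1 * rng.standard_normal((6, k))   # user factors
V = 0.1 * rng.standard_normal((5, k))   # item factors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(200):
    P = sigmoid(U @ V.T)      # predicted probabilities for every cell
    G = mask * (R - P)        # gradient of the log-likelihood wrt U @ V.T
    U += lr * (G @ V - lam * U)
    V += lr * (G.T @ U - lam * V)

# Mean log-likelihood on observed entries approaches 0 as the fit improves.
P = sigmoid(U @ V.T)
ll = (mask * (R * np.log(P) + (1 - R) * np.log(1 - P))).sum() / mask.sum()
```

The same gradient structure extends to the relational setting by sharing the embedding matrices across several such observation matrices.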
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_18" ], "mid": [ "2054141820", "2135001774", "2137245235" ], "abstract": [ "As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.", "Principal component analysis (PCA) is a commonly applied technique for dimensionality reduction. PCA implicitly minimizes a squared loss function, which may be inappropriate for data that is not real-valued, such as binary-valued data. This paper draws on ideas from the Exponential family, Generalized linear models, and Bregman distances, to give a generalization of PCA to loss functions that we argue are better suited to other data types. We describe algorithms for minimizing the loss functions, and give examples on simulated data.", "Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, performs well on the large, sparse, and very imbalanced Netflix dataset. We further extend the PMF model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings. When the predictions of multiple PMF models are linearly combined with the predictions of Restricted Boltzmann Machines models, we achieve an error rate of 0.8861, that is nearly 7% better than the score of Netflix's own system." ] }
1504.06165
2282361021
Our formulation of relational data, and the collective factorization model, can be easily extended. For example, the current formulation assumes that at most a single relation exists between any specific pair of entities (since @math is independent of the relation @math in the corresponding equation). Although this assumption holds for many applications, we can extend the model to multiple relations between the same pair of entities by introducing latent factors for the relations, similar to CP-decomposition (PARAFAC) and the recently proposed RESCAL @cite_7 . Subsequent work obtains highly compressed representations of large triple stores by using RESCAL to represent them as Probabilistic Databases (PDBs), and presents methods to efficiently answer complex queries on PDBs by breaking them into sub-queries.
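The RESCAL factorization referenced above is bilinear: each relation r owns a mixing matrix W_r applied to entity embeddings that are shared across all relations. A minimal sketch with random parameters (entity count, relation count, and embedding size are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

n_entities, n_relations, k = 4, 2, 3

E = rng.standard_normal((n_entities, k))      # shared entity embeddings
W = rng.standard_normal((n_relations, k, k))  # one mixing matrix per relation

def score(s, r, o):
    """RESCAL-style bilinear score for the triple (s, r, o)."""
    return E[s] @ W[r] @ E[o]

# Scoring every (subject, object) pair under relation 0 is one matrix product,
# which is what makes collective learning over all triples efficient.
S0 = E @ W[0] @ E.T
```

In training, E and W would be fit to observed triples; here they are random, since only the scoring structure is being illustrated.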
{ "cite_N": [ "@cite_7" ], "mid": [ "205829674" ], "abstract": [ "Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute." ] }
1504.06201
2950848570
Most of the current boundary detection systems rely exclusively on low-level features, such as color and texture. However, perception studies suggest that humans employ object-level reasoning when judging if a particular pixel is a boundary. Inspired by this observation, in this work we show how to predict boundaries by exploiting object-level features from a pretrained object-classification network. Our method can be viewed as a "High-for-Low" approach where high-level object features inform the low-level boundary detection process. Our model achieves state-of-the-art performance on an established boundary detection benchmark and it is efficient to run. Additionally, we show that due to the semantic nature of our boundaries we can use them to aid a number of high-level vision tasks. We demonstrate that using our boundaries we improve the performance of state-of-the-art methods on the problems of semantic boundary labeling, semantic segmentation and object proposal generation. We can view this process as a "Low-for-High" scheme, where low-level boundaries aid high-level vision tasks. Thus, our contributions include a boundary detection system that is accurate, efficient, generalizes well to multiple datasets, and is also shown to improve existing state-of-the-art high-level vision methods on three distinct tasks.
Spectral methods formulate the contour detection problem as an eigenvalue problem, whose solution is then used to reason about boundaries. The most successful approaches in this genre are the MCG detector @cite_5 , the gPb detector @cite_1 , the PMI detector @cite_31 , and Normalized Cuts @cite_21 .
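The eigenvalue formulation these detectors share can be seen on a toy affinity matrix: the second-smallest eigenvector of the normalized Laplacian is the relaxed solution of the normalized-cut objective (D - W) y = lambda * D y, and thresholding it yields the partition. A sketch (the affinity values are made up purely for illustration):

```python
import numpy as np

# Toy affinity matrix for 6 "pixels": two groups {0,1,2} and {3,4,5} with
# strong intra-group and weak inter-group similarity.
W = np.full((6, 6), 0.01)
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)

d = W.sum(axis=1)
D_isqrt = np.diag(1.0 / np.sqrt(d))

# Symmetric normalized Laplacian; its eigenvectors solve the relaxed
# normalized-cut generalized eigenproblem after rescaling by D^{-1/2}.
L_sym = np.eye(6) - D_isqrt @ W @ D_isqrt
vals, vecs = np.linalg.eigh(L_sym)        # eigenvalues in ascending order
fiedler = D_isqrt @ vecs[:, 1]            # second-smallest eigenvector

labels = (fiedler > 0).astype(int)        # threshold at zero to split
```

Real detectors build W from pixel-level cues (color, texture, intervening contours) and use several eigenvectors, but the linear-algebraic core is the same.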
{ "cite_N": [ "@cite_5", "@cite_31", "@cite_21", "@cite_1" ], "mid": [ "1991367009", "105270443", "2121947440", "2110158442" ], "abstract": [ "We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.", "Detecting boundaries between semantically meaningful objects in visual scenes is an important component of many vision algorithms. In this paper, we propose a novel method for detecting such boundaries based on a simple underlying principle: pixels belonging to the same object exhibit higher statistical dependencies than pixels belonging to different objects. We show how to derive an affinity measure based on this principle using pointwise mutual information, and we show that this measure is indeed a good predictor of whether or not two pixels reside on the same object. Using this affinity with spectral clustering, we can find object boundaries in the image – achieving state-of-the-art results on the BSDS500 dataset. Our method produces pixel-level accurate boundaries while requiring minimal feature engineering.", "We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. 
We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.", "This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications." ] }
1504.06201
2950848570
Some of the notable discriminative boundary detection methods include sketch tokens (ST) @cite_17 , structured edges (SE) @cite_9 , and sparse code gradients (SCG) @cite_0 . While SCG uses supervised SVM learning @cite_15 , the other two methods rely on a random forest classifier and model boundary detection as a classification task.
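The classification view taken by ST and SE can be sketched in a few lines: features extracted from a small patch around each pixel feed a random-forest classifier that labels the pixel as boundary or non-boundary. This toy uses a synthetic step-edge image and scikit-learn's generic forest, not the structured forests or learned token classes of the actual methods:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Synthetic 16x16 image: dark left half, bright right half, so the true
# boundary is the vertical step between columns 7 and 8.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
img += 0.05 * rng.standard_normal(img.shape)

# Per-pixel features: the raw 3x3 patch around each interior pixel.
# Label = 1 for pixels lying on the intensity step, 0 otherwise.
X, y = [], []
for r in range(1, 15):
    for c in range(1, 15):
        X.append(img[r - 1:r + 2, c - 1:c + 2].ravel())
        y.append(1 if c in (7, 8) else 0)
X, y = np.array(X), np.array(y)

clf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)
acc = clf.score(X, y)   # training accuracy on this easy toy problem
```

The trees learn conjunctions such as "left neighbor dark and right neighbor bright", which is the kind of local contrast pattern the real detectors capture with richer channel features.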
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_15", "@cite_17" ], "mid": [ "2165140157", "1976047850", "2139212933", "2151049637" ], "abstract": [ "Finding contours in natural images is a fundamental problem that serves as the basis of many tasks such as image segmentation and object recognition. At the core of contour detection technologies are a set of hand-designed gradient features, used by most approaches including the state-of-the-art Global Pb (gPb) operator. In this work, we show that contour detection accuracy can be significantly improved by computing Sparse Code Gradients (SCG), which measure contrast using patch representations automatically learned through sparse coding. We use K-SVD for dictionary learning and Orthogonal Matching Pursuit for computing sparse codes on oriented local neighborhoods, and apply multi-scale pooling and power transforms before classifying them with linear SVMs. By extracting rich representations from pixels and avoiding collapsing them prematurely, Sparse Code Gradients effectively learn how to measure local contrasts and find contours. We improve the F-measure metric on the BSDS500 benchmark to 0.74 (up from 0.71 of gPb contours). Moreover, our learning approach can easily adapt to novel sensor data such as Kinect-style RGB-D cameras: Sparse Code Gradients on depth maps and surface normals lead to promising contour detection using depth and depth+color, as verified on the NYU Depth Dataset.", "Edge detection is a critical component of many vision systems, including object detectors and image segmentation algorithms. Patches of edges exhibit well-known forms of local structure, such as straight lines or T-junctions. In this paper we take advantage of the structure present in local image patches to learn both an accurate and computationally efficient edge detector. We formulate the problem of predicting local edge masks in a structured learning framework applied to random decision forests. 
Our novel approach to learning decision trees robustly maps the structured labels to a discrete space on which standard information gain measures may be evaluated. The result is an approach that obtains realtime performance that is orders of magnitude faster than many competing state-of-the-art approaches, while also achieving state-of-the-art edge detection results on the BSDS500 Segmentation dataset and NYU Depth dataset. Finally, we show the potential of our approach as a general purpose edge detector by showing our learned edge models generalize well across datasets.", "The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.", "We propose a novel approach to both learning and detecting local contour-based representations for mid-level features. 
Our features, called sketch tokens, are learned using supervised mid-level information in the form of hand drawn contours in images. Patches of human generated contours are clustered to form sketch token classes and a random forest classifier is used for efficient detection in novel images. We demonstrate our approach on both top-down and bottom-up tasks. We show state-of-the-art results on the top-down task of contour detection while being over 200x faster than competing methods. We also achieve large improvements in detection accuracy for the bottom-up tasks of pedestrian and object detection as measured on INRIA and PASCAL, respectively. These gains are due to the complementary information provided by sketch tokens to low-level features such as gradient histograms." ] }
1504.06201
2950848570
Recently there have been attempts to apply deep learning to the task of boundary detection. SCT @cite_10 is a sparse coding approach that reconstructs an image using a learned dictionary and then detects boundaries. Both the @math fields @cite_14 and DeepNet @cite_27 approaches use Convolutional Neural Networks (CNNs) to predict edges. @math fields relies on dictionary learning and a nearest-neighbor algorithm within a CNN framework, while DeepNet uses a traditional CNN architecture to predict contours.
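At their core, these CNN-based detectors map a window of the image to a per-pixel boundary probability through convolutional filters followed by a squashing nonlinearity. A one-layer sketch with a hand-fixed Sobel filter standing in for a learned filter bank:

```python
import numpy as np

def conv2d_valid(img, k):
    """Naive 'valid' 2D cross-correlation (no padding, stride 1)."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

# A vertical-edge filter (Sobel); a CNN would learn many such filters.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                          # step edge between columns 3 and 4

response = conv2d_valid(img, sobel_x)     # strong only near the step
prob = 1.0 / (1.0 + np.exp(-response))    # per-pixel boundary probability
```

Real systems stack many learned layers and pool information across scales, but the per-pixel filter-then-squash mapping is the shared building block.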
{ "cite_N": [ "@cite_27", "@cite_14", "@cite_10" ], "mid": [ "2172014587", "", "97134437" ], "abstract": [ "This paper investigates visual boundary detection, i.e. prediction of the presence of a boundary at a given image location. We develop a novel neurally-inspired deep architecture for the task. Notable aspects of our work are (i) the use of features [Ranzato and Hinton, 2010] which depend on the squared response of a filter to the input image, and (ii) the integration of image information from multiple scales and semantic levels via multiple streams of interlinked, layered, and non-linear processing. Our results on the Berkeley Segmentation Data Set 500 (BSDS500) show comparable or better performance to the top-performing methods [, 2011, Ren and Bo, 2012, , 2013, Dollár and Zitnick, 2013] with effective inference times. We also propose novel quantitative assessment techniques for improved method understanding and comparison. We carefully dissect the performance of our architecture, feature-types used and training methods, providing clear signals for model understanding and development.", "", "We frame the task of predicting a semantic labeling as a sparse reconstruction procedure that applies a target-specific learned transfer function to a generic deep sparse code representation of an image. This strategy partitions training into two distinct stages. First, in an unsupervised manner, we learn a set of dictionaries optimized for sparse coding of image patches. These generic dictionaries minimize error with respect to representing image appearance and are independent of any particular target task. We train a multilayer representation via recursive sparse dictionary learning on pooled codes output by earlier layers. Second, we encode all training images with the generic dictionaries and learn a transfer function that optimizes reconstruction of patches extracted from annotated ground-truth given the sparse codes of their corresponding image patches. At test time, we encode a novel image using the generic dictionaries and then reconstruct using the transfer function. The output reconstruction is a semantic labeling of the test image." ] }
1504.06201
2950848570
The approach most similar to ours is DeepEdge @cite_30 , which uses a multi-scale bifurcated network to perform contour detection using object-level features. However, we show that our method achieves better results even without DeepEdge's complicated multi-scale and bifurcated architecture. Additionally, unlike DeepEdge, our system runs in near real time.
{ "cite_N": [ "@cite_30" ], "mid": [ "1930528368" ], "abstract": [ "Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then use them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection." ] }
1504.06316
1596430668
Alice and Bob want to run a protocol over a noisy channel, where a certain number of bits are flipped adversarially. Several results take a protocol requiring @math bits of noise-free communication and make it robust over such a channel. In a recent breakthrough result, Haeupler described an algorithm that sends a number of bits that is conjectured to be near optimal in such a model. However, his algorithm critically requires @math knowledge of the number of bits that will be flipped by the adversary. We describe an algorithm requiring no such knowledge. If an adversary flips @math bits, our algorithm sends @math bits in expectation and succeeds with high probability in @math . It does so without any @math knowledge of @math . Assuming a conjectured lower bound by Haeupler, our result is optimal up to logarithmic factors. Our algorithm critically relies on the assumption of a private channel. We show that privacy is necessary when the amount of noise is unknown.
For @math bits to be transmitted from Alice to Bob, Shannon @cite_6 proposes an error-correcting code of size @math that yields correct communication over a noisy channel with probability @math . At first glance, this may appear to solve our problem. But consider an interactive protocol with communication complexity @math , where Alice sends one bit, then Bob sends back one bit, and so forth, with the value of each bit depending on the previous bits received. Two problems arise. First, using block codewords is not efficient: to achieve a small error probability, "dummy" bits may be added to each bit prior to encoding, but this results in a superlinear blowup in overhead. Second, due to the interactivity, an error that occurs in the past can ruin all computation that comes after it. Thus, error-correcting codes fall short when dealing with interactive protocols.
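The "dummy bits" blowup can be seen with the simplest per-bit protection, a repetition code decoded by majority vote: the per-bit failure probability falls exponentially in the repetition count r, so union-bounding over n interactive rounds forces r to grow logarithmically in n, a superlinear total. A quick simulation (the flip probability and repetition count here are illustrative choices):

```python
import random

random.seed(0)

def encode(bit, r):
    """Repetition code: send r copies of the bit."""
    return [bit] * r

def transmit(codeword, flip_p):
    """Binary symmetric channel: flip each bit independently with prob flip_p."""
    return [b ^ (random.random() < flip_p) for b in codeword]

def decode(received):
    """Majority vote over the received copies."""
    return int(sum(received) > len(received) / 2)

# With r = 9 copies and a 20% flip rate, decoding fails only when 5 or more
# copies flip -- about 2% of the time, and exponentially rarer as r grows.
trials = 2000
errors = sum(decode(transmit(encode(1, 9), 0.2)) != 1 for _ in range(trials))
err_rate = errors / trials
```

Block codes avoid this per-bit overhead for one-shot messages, but as the text notes, they cannot protect a conversation whose later bits depend on earlier ones.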
{ "cite_N": [ "@cite_6" ], "mid": [ "2041404167" ], "abstract": [ "Scientific knowledge grows at a phenomenal pace--but few books have had as lasting an impact or played as important a role in our modern world as The Mathematical Theory of Communication, published originally as a paper on communication theory more than fifty years ago. Republished in book form shortly thereafter, it has since gone through four hardcover and sixteen paperback printings. It is a revolutionary work, astounding in its foresight and contemporaneity. The University of Illinois Press is pleased and honored to issue this commemorative reprinting of a classic." ] }
1504.06316
1596430668
The seminal work of Schulman @cite_3 @cite_12 overcame these obstacles by describing a deterministic method for simulating interactive protocols on noisy channels with only a constant-factor increase in the total communication complexity. This work spurred vigorous interest in the area (see @cite_16 for an excellent survey).
{ "cite_N": [ "@cite_16", "@cite_12", "@cite_3" ], "mid": [ "1968936186", "2117696850", "2020278159" ], "abstract": [ "We highlight some recent progress and challenges in the area of interactive coding and information complexity.", "Communication is critical to distributed computing, parallel computing, or any situation in which automata interact-hence its significance as a resource in computation. In view of the likelihood of errors occurring in a lengthy interaction, it is desirable to incorporate this possibility in the model of communication. The author relates the noisy channel and the standard (noiseless channel) complexities of a communication problem by establishing a 'two-way' or interactive analogue of Shannon's coding theorem: every noiseless channel protocol can be simulated by a private-coin noisy channel protocol whose time bound is proportional to the original (noiseless) time bound and inversely proportional to the capacity of the channel, while the protocol errs with vanishing probability. The method involves simulating the original protocol while implementing a hierarchical system of progress checks which ensure that errors of any magnitude in the simulation are, with high probability, rapidly eliminated.", "" ] }
1504.06316
1596430668
Schulman's scheme tolerates an adversarial noise rate of @math . It critically depends on the notion of a tree code for which an exponential-time construction was originally provided. This exponential construction time motivated work on more efficient constructions @cite_14 @cite_4 @cite_8 . There were also efforts to create alternative codes @cite_26 @cite_24 . Recently, elegant computationally-efficient schemes that tolerate a constant adversarial noise rate have been demonstrated @cite_27 @cite_17 . Additionally, a large number of powerful results have improved the tolerable adversarial noise rate @cite_15 @cite_10 @cite_19 @cite_13 @cite_0 .
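A tree code, the object underlying these constructions, labels every edge of the infinite binary tree; the codeword of a bit string is the sequence of labels along its path, so the i-th symbol depends only on the first i bits and can be transmitted online as the protocol unfolds. A toy randomized construction (the alphabet size and message bits are arbitrary choices, and the decoding side is not shown):

```python
import random

random.seed(0)

ALPHABET = 16
labels = {}   # lazily sampled edge labels of the binary tree

def label(prefix):
    """Random label on the edge leading to this node, sampled lazily."""
    if prefix not in labels:
        labels[prefix] = random.randrange(ALPHABET)
    return labels[prefix]

def encode(bits):
    """Codeword = edge labels along the path determined by the bit string."""
    return [label(tuple(bits[:i + 1])) for i in range(len(bits))]

def distance(a, b):
    """Hamming distance between two equal-length codewords."""
    return sum(x != y for x, y in zip(a, b))

u = encode([0, 1, 1, 0, 1, 0, 1, 1])
v = encode([0, 1, 0, 0, 1, 0, 1, 1])   # diverges from u at round 3
d = distance(u, v)
```

Because the two paths share only their first two edges, the symbols after the divergence point come from independent random labels, so they disagree on most positions in expectation; a *good* tree code guarantees such disagreement on a constant fraction of every post-divergence suffix, which is the property the constructions cited above work to achieve explicitly.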
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_4", "@cite_26", "@cite_8", "@cite_24", "@cite_19", "@cite_27", "@cite_0", "@cite_15", "@cite_10", "@cite_17" ], "mid": [ "2216201412", "2022350090", "2031579086", "2026670837", "2949752753", "2090690502", "2040355376", "2072979474", "", "", "2003497308", "" ], "abstract": [ "Error correction and message authentication are well studied in the literature, and various efficient solutions have been suggested and analyzed. This is however not the case for data streams in which the message is very long, possibly infinite, and not known in advance to the sender. Trivial solutions for error-correcting and authenticating data streams either suffer from a long delay at the receiver’s end or cannot perform well when the communication channel is noisy.", "We present a deterministic operator on tree codes -- we call tree code product -- that allows one to deterministically combine two tree codes into a larger tree code. Moreover, if the original tree codes are efficiently encodable and decodable, then so is their product. This allows us to give the first deterministic subexponential-time construction of explicit tree codes: we are able to construct a tree code T of size n in time 2ne,. Moreover, T is also encodable and decodable in time 2ne,. We then apply our new construction to obtain a deterministic constant-rate error-correcting scheme for interactive computation over a noisy channel with random errors. If the length of the interactive computation is n, the amount of computation required is deterministically bounded by n1+o(1), and the probability of failure is n-ω(1).", "An improvement of the randomized construction for tree codes is presented. 
This construction, in contrast to the original one, almost surely gives a good code and uses a smaller alphabet.", "We revisit the problem of reliable interactive communication over a noisy channel, and obtain the first fully explicit (randomized) efficient constant-rate emulation procedure for reliable interactive communication. Our protocol works for any discrete memoryless noisy channel with constant capacity, and fails with exponentially small probability in the total length of the protocol. Following a work by Schulman [Schulman 1993], our simulation uses a tree-code, yet as opposed to the non-constructive absolute tree-code used by Schulman, we introduce a relaxation in the notion of goodness for a tree code and define a potent tree code. This relaxation allows us to construct an explicit emulation procedure for any two-party protocol. Our results also extend to the case of interactive multiparty communication. We show that a randomly generated tree code (with suitable constant alphabet size) is an efficiently decodable potent tree code with overwhelming probability. Furthermore we are able to partially derandomize this result by means of epsilon-biased distributions using only O(N) random bits, where N is the depth of the tree.", "We propose a new conjecture on some exponential sums. These particular sums have not apparently been considered in the literature. Subject to the conjecture we obtain the first effective construction of asymptotically good tree codes. The available numerical evidence is consistent with the conjecture and is sufficient to certify codes for significant-length communications.", "Systems with automatic feedback control may consist of several remote devices, connected only by unreliable communication channels. It is necessary in these conditions to have a method for accurate, real-time state estimation in the presence of channel noise. 
This problem is addressed, for the case of polynomial-growth-rate state spaces, through a new type of error-correcting code that is online and computationally efficient. This solution establishes a constructive analog, for some applications in estimation and control, of the Shannon coding theorem.", "We consider the task of interactive communication in the presence of adversarial errors and present tight bounds on the tolerable error-rates in a number of different settings. Most significantly, we explore adaptive interactive communication where the communicating parties decide who should speak next based on the history of the interaction. In particular, this decision can depend on estimates of the amount of errors that have occurred so far. Braverman and Rao [STOC'11] show that non-adaptively one can code for any constant error rate below 1/4 but not more. They asked whether this bound could be improved using adaptivity. We answer this open question in the affirmative (with a slightly different collection of resources): Our adaptive coding scheme tolerates any error rate below 2/7 and we show that tolerating a higher error rate is impossible. We also show that in the setting of [CRYPTO'13], where parties share randomness not known to the adversary, adaptivity increases the tolerable error rate from 1/2 to 2/3. For list-decodable interactive communications, where each party outputs a constant size list of possible outcomes, the tight tolerable error rate is 1/2. Our negative results hold even if the communication and computation are unbounded, whereas for our positive results communication and computations are polynomially bounded. Most prior work considered coding schemes with linear communication bounds, while allowing unbounded computations. We argue that studying tolerable error rates in this relaxed context helps to identify a setting's intrinsic optimal error rate. 
We set forward a strong working hypothesis which stipulates that for any setting the maximum tolerable error rate is independent of many computational and communication complexity measures. We believe this hypothesis to be a powerful guideline for the design of simple, natural, and efficient coding schemes and for understanding the (im)possibilities of coding for interactive communications.", "In this work, we study the problem of constructing interactive protocols that are robust to noise, a problem that was originally considered in the seminal works of Schulman (FOCS '92, STOC '93), and has recently regained popularity. Robust interactive communication is the interactive analogue of error correcting codes: Given an interactive protocol which is designed to run on an error-free channel, construct a protocol that evaluates the same function (or, more generally, simulates the execution of the original protocol) over a noisy channel. As in (non-interactive) error correcting codes, the noise can be either stochastic, i.e. drawn from some distribution, or adversarial, i.e. arbitrary subject only to a global bound on the number of errors. We show how to simulate any interactive protocol in the presence of constant-rate noise, while incurring only a constant blow-up in the communication complexity (CC). Our simulator is randomized, and succeeds in simulating the original protocol with probability at least @math .", "", "", "We show that it is possible to encode any communication protocol between two parties so that the protocol succeeds even if a (1/4-ε) fraction of all symbols transmitted by the parties are corrupted adversarially, at a cost of increasing the communication in the protocol by a constant factor (the constant depends on ε). This encoding uses a constant sized alphabet. This improves on an earlier result of Schulman, who showed how to recover when the fraction of errors is bounded by 1/240. 
We also show how to simulate an arbitrary protocol with a protocol using the binary alphabet, a constant factor increase in communication and tolerating a (1/8-ε) fraction of errors.", "" ] }
1504.06316
1596430668
Alice and Bob want to run a protocol over a noisy channel, where a certain number of bits are flipped adversarially. Several results take a protocol requiring @math bits of noise-free communication and make it robust over such a channel. In a recent breakthrough result, Haeupler described an algorithm that sends a number of bits that is conjectured to be near optimal in such a model. However, his algorithm critically requires @math knowledge of the number of bits that will be flipped by the adversary. We describe an algorithm requiring no such knowledge. If an adversary flips @math bits, our algorithm sends @math bits in expectation and succeeds with high probability in @math . It does so without any @math knowledge of @math . Assuming a conjectured lower bound by Haeupler, our result is optimal up to logarithmic factors. Our algorithm critically relies on the assumption of a private channel. We show that privacy is necessary when the amount of noise is unknown.
The closest prior work to ours is that of Haeupler @cite_21 . His work assumes a fixed and known adversarial noise rate @math , the fraction of bits flipped by the adversary. Communication efficiency is measured by the communication rate, which is @math divided by the total number of bits sent. Haeupler @cite_21 describes an algorithm that achieves a communication rate of @math , which he conjectures to be optimal. We compare our work to his in .
{ "cite_N": [ "@cite_21" ], "mid": [ "2026591308" ], "abstract": [ "We provide the first capacity approaching coding schemes that robustly simulate any interactive protocol over an adversarial channel that corrupts any @math fraction of the transmitted symbols. Our coding schemes achieve a communication rate of @math over any adversarial channel. This can be improved to @math for random, oblivious, and computationally bounded channels, or if parties have shared randomness unknown to the channel. Surprisingly, these rates exceed the @math interactive channel capacity bound which [Kol and Raz; STOC'13] recently proved for random errors. We conjecture @math and @math to be the optimal rates for their respective settings and therefore to capture the interactive channel capacity for random and adversarial errors. In addition to being very communication efficient, our randomized coding schemes have multiple other advantages. They are computationally efficient, extremely natural, and significantly simpler than prior (non-capacity approaching) schemes. In particular, our protocols do not employ any coding but allow the original protocol to be performed as-is, interspersed only by short exchanges of hash values. When hash values do not match, the parties backtrack. Our approach is, as we feel, by far the simplest and most natural explanation for why and how robust interactive communication in a noisy environment is possible." ] }
1504.06316
1596430668
Alice and Bob want to run a protocol over a noisy channel, where a certain number of bits are flipped adversarially. Several results take a protocol requiring @math bits of noise-free communication and make it robust over such a channel. In a recent breakthrough result, Haeupler described an algorithm that sends a number of bits that is conjectured to be near optimal in such a model. However, his algorithm critically requires @math knowledge of the number of bits that will be flipped by the adversary. We describe an algorithm requiring no such knowledge. If an adversary flips @math bits, our algorithm sends @math bits in expectation and succeeds with high probability in @math . It does so without any @math knowledge of @math . Assuming a conjectured lower bound by Haeupler, our result is optimal up to logarithmic factors. Our algorithm critically relies on the assumption of a private channel. We show that privacy is necessary when the amount of noise is unknown.
Feinerman, Haeupler and Korman @cite_23 recently studied the interesting related problem of spreading a single-bit rumor in a noisy network. In their framework, in each synchronous round, each agent can deliver a single bit to a random anonymous agent. This bit is flipped independently at random with probability @math for some fixed @math . Their algorithm ensures with high probability that in @math rounds and with @math messages, all nodes learn the correct rumor. They also present a majority-consensus algorithm with the same resource costs, and prove these resource costs are optimal for both problems.
{ "cite_N": [ "@cite_23" ], "mid": [ "1993649018" ], "abstract": [ "Distributed computing models typically assume reliable communication between processors. While such assumptions often hold for engineered networks, e.g., due to underlying error correction protocols, their relevance to biological systems, wherein messages are often distorted before reaching their destination, is quite limited. In this study we aim at bridging this gap by rigorously analyzing a model of communication in large anonymous populations composed of simple agents which interact through short and highly unreliable messages. We focus on the rumor-spreading problem and the majority-consensus problem, two fundamental tasks in distributed computing, and initiate their study under communication noise. Our model for communication is extremely weak and follows the push gossip communication paradigm: In each synchronous round each agent that wishes to send information delivers a message to a random anonymous agent. This communication is further restricted to contain only one bit (essentially representing an opinion). Lastly, the system is assumed to be so noisy that the bit in each message sent is flipped independently with probability 1 2-e, for some small Ae >0. Even in this severely restricted, stochastic and noisy setting we give natural protocols that solve the noisy rumor-spreading and the noisy majority-consensus problems efficiently. Our protocols run in O(log n e2) rounds and use O(n log n e2) messages bits in total, where n is the number of agents. These bounds are asymptotically optimal and, in fact, are as fast and message efficient as if each agent would have been simultaneously informed directly by the source. Our efficient, robust, and simple algorithms suggest balancing between silence and transmission, synchronization, and majority-based decisions as important ingredients towards understanding collective communication schemes in anonymous and noisy populations." ] }
1504.05811
2294712518
Definition of an accurate system model for Automated Planner (AP) is often impractical, especially for real-world problems. Conversely, off-the-shelf planners fail to scale up and are domain dependent. These drawbacks are inherited from conventional transition systems such as Finite State Machines (FSMs) that describes the action-plan execution generated by the AP. On the other hand, Behavior Trees (BTs) represent a valid alternative to FSMs presenting many advantages in terms of modularity, reactiveness, scalability and domain-independence. In this paper, we propose a model-free AP framework using Genetic Programming (GP) to derive an optimal BT for an autonomous agent to achieve a given goal in unknown (but fully observable) environments. We illustrate the proposed framework using experiments conducted with an open source benchmark Mario AI for automated generation of BTs that can play the game character Mario to complete a certain level at various levels of difficulty to include enemies and obstacles.
BTs were originally used in the gaming industry, where the computer (autonomous) player uses BTs for its decision making. Recently, there has been work on improving BTs using several learning techniques, for example, Q-learning @cite_25 and evolutionary approaches @cite_7 @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_25", "@cite_7" ], "mid": [ "1524728167", "2007506196", "1608608607" ], "abstract": [ "Behaviour trees provide the possibility of improving on existing Artificial Intelligence techniques in games by being simple to implement, scalable, able to handle the complexity of games, and modular to improve reusability. This ultimately improves the development process for designing automated game players. We cover here the use of behaviour trees to design and develop an AI-controlled player for the commercial real-time strategy game DEFCON. In particular, we evolved behaviour trees to develop a competitive player which was able to outperform the game’s original AI-bot more than 50 of the time. We aim to highlight the potential for evolving behaviour trees as a practical approach to developing AI-bots in games.", "Artificial intelligence has become an increasingly important aspect of computer game technology, as designers attempt to deliver engaging experiences for players by creating characters with behavioural realism to match advances in graphics and physics. Recently, behaviour trees have come to the forefront of games AI technology, providing a more intuitive approach than previous techniques such as hierarchical state machines, which often required complex data structures producing poorly structured code when scaled up. The design and creation of behaviour trees, however, requires experience and effort. This research introduces Q-learning behaviour trees (QL-BT), a method for the application of reinforcement learning to behaviour tree design. The technique facilitates AI designers' use of behaviour trees by assisting them in identifying the most appropriate moment to execute each branch of AI logic, as well as providing an implementation that can be used to debug, analyse and optimize early behaviour tree prototypes. 
Initial experiments demonstrate that behaviour trees produced by the QL-BT algorithm effectively integrate RL, automate tree design, and are human-readable.", "This paper investigates the applicability of Genetic Programming type systems to dynamic game environments. Grammatical Evolution was used to evolved Behaviour Trees, in order to create controllers for the Mario AI Benchmark. The results obtained reinforce the applicability of evolutionary programming systems to the development of artificial intelligence in games, and in dynamic systems in general, illustrating their viability as an alternative to more standard AI techniques." ] }
1504.05811
2294712518
Definition of an accurate system model for Automated Planner (AP) is often impractical, especially for real-world problems. Conversely, off-the-shelf planners fail to scale up and are domain dependent. These drawbacks are inherited from conventional transition systems such as Finite State Machines (FSMs) that describes the action-plan execution generated by the AP. On the other hand, Behavior Trees (BTs) represent a valid alternative to FSMs presenting many advantages in terms of modularity, reactiveness, scalability and domain-independence. In this paper, we propose a model-free AP framework using Genetic Programming (GP) to derive an optimal BT for an autonomous agent to achieve a given goal in unknown (but fully observable) environments. We illustrate the proposed framework using experiments conducted with an open source benchmark Mario AI for automated generation of BTs that can play the game character Mario to complete a certain level at various levels of difficulty to include enemies and obstacles.
In a work by Perez et al. @cite_7 , the authors used Grammatical Evolution (GE) to evolve BTs to create an AI controller for an autonomous agent (game character). Despite this being the most relevant work, we depart from it by using a metaheuristic evolutionary learning algorithm instead of grammatical evolution, as the GP algorithm provides a natural way of manipulating BTs and applying genetic operators.
{ "cite_N": [ "@cite_7" ], "mid": [ "1608608607" ], "abstract": [ "This paper investigates the applicability of Genetic Programming type systems to dynamic game environments. Grammatical Evolution was used to evolved Behaviour Trees, in order to create controllers for the Mario AI Benchmark. The results obtained reinforce the applicability of evolutionary programming systems to the development of artificial intelligence in games, and in dynamic systems in general, illustrating their viability as an alternative to more standard AI techniques." ] }
1504.05811
2294712518
Definition of an accurate system model for Automated Planner (AP) is often impractical, especially for real-world problems. Conversely, off-the-shelf planners fail to scale up and are domain dependent. These drawbacks are inherited from conventional transition systems such as Finite State Machines (FSMs) that describes the action-plan execution generated by the AP. On the other hand, Behavior Trees (BTs) represent a valid alternative to FSMs presenting many advantages in terms of modularity, reactiveness, scalability and domain-independence. In this paper, we propose a model-free AP framework using Genetic Programming (GP) to derive an optimal BT for an autonomous agent to achieve a given goal in unknown (but fully observable) environments. We illustrate the proposed framework using experiments conducted with an open source benchmark Mario AI for automated generation of BTs that can play the game character Mario to complete a certain level at various levels of difficulty to include enemies and obstacles.
Scheper et al. @cite_30 applied evolutionary learning to BTs for a real-world robotic (Micro Air Vehicle) application. It appears to be the first real-world robotic application of evolving BTs. They used a (sub-optimal) manually crafted BT as the initial BT in the evolutionary learning process, and conducted experiments with a flying robot in which the BT controlling the robot learns during every experiment. Finally, they demonstrated a significant improvement in the performance of the final evolved BT compared to the initial user-defined BT. While we take inspiration from this work, its downside is that it requires an initial BT to work, which goes against our model-free objective.
{ "cite_N": [ "@cite_30" ], "mid": [ "1910406352" ], "abstract": [ "Evolutionary Robotics allows robots with limited sensors and processing to tackle complex tasks by means of sensory-motor coordination. In this article we show the first application of the Behavior Tree framework on a real robotic platform using the evolutionary robotics methodology. This framework is used to improve the intelligibility of the emergent robotic behavior over that of the traditional neural network formulation. As a result, the behavior is easier to comprehend and manually adapt when crossing the reality gap from simulation to reality. This functionality is shown by performing real-world flight tests with the 20-g DelFly Explorer flapping wing micro air vehicle equipped with a 4-g onboard stereo vision system. The experiments show that the DelFly can fully autonomously search for and fly through a window with only its onboard sensors and processing. The success rate of the optimized behavior in simulation is 88 , and the corresponding real-world performance is 54 after user adaptation. Although this leaves room for improvement, it is higher than the 46 success rate from a tuned user-defined controller." ] }
1504.05767
2118717252
Neural network algorithms simulated on standard computing platforms typically make use of high resolution weights, with floating-point notation. However, for dedicated hardware implementations of such algorithms, fixed-point synaptic weights with low resolution are preferable. The basic approach of reducing the resolution of the weights in these algorithms by standard rounding methods incurs drastic losses in performance. To reduce the resolution further, in the extreme case even to binary weights, more advanced techniques are necessary. To this end, we propose two methods for mapping neural network algorithms with high resolution weights to corresponding algorithms that work with low resolution weights and demonstrate that their performance is substantially better than standard rounding. We further use these methods to investigate the performance of three common neural network algorithms under fixed memory size of the weight matrix with different weight resolutions. We show that dedicated hardware systems, whose technology dictates very low weight resolutions (be they electronic or biological) could in principle implement the algorithms we study.
In the computational neuroscience domain, a method for using low-resolution synapses is presented in @cite_9 , in which a spiking neural network is trained using an STDP learning rule. However, @cite_9 is only applicable to one specific learning rule and algorithm, a version of expectation-maximisation. In contrast, we propose methods that work for several common neural network algorithms, among them both discriminative and generative models.
{ "cite_N": [ "@cite_9" ], "mid": [ "1979879449" ], "abstract": [ "Memristors have recently emerged as promising circuit elements to mimic the function of biological synapses in neuromorphic computing. The fabrication of reliable nanoscale memristive synapses, that feature continuous conductance changes based on the timing of pre- and postsynaptic spikes, has however turned out to be challenging. In this article, we propose an alternative approach, the compound memristive synapse, that circumvents this problem by the use of memristors with binary memristive states. A compound memristive synapse employs multiple bistable memristors in parallel to jointly form one synapse, thereby providing a spectrum of synaptic efficacies. We investigate the computational implications of synaptic plasticity in the compound synapse by integrating the recently observed phenomenon of stochastic filament formation into an abstract model of stochastic switching. Using this abstract model, we first show how standard pulsing schemes give rise to spike-timing dependent plasticity (STDP) with a stabilizing weight dependence in compound synapses. In a next step, we study unsupervised learning with compound synapses in networks of spiking neurons organized in a winner-take-all architecture. Our theoretical analysis reveals that compound-synapse STDP implements generalized Expectation-Maximization in the spiking network. Specifically, the emergent synapse configuration represents the most salient features of the input distribution in a Mixture-of-Gaussians generative model. Furthermore, the network’s spike response to spiking input streams approximates a well-defined Bayesian posterior distribution. We show in computer simulations how such networks learn to represent high-dimensional distributions over images of handwritten digits with high fidelity even in presence of substantial device variations and under severe noise conditions. 
Therefore, the compound memristive synapse may provide a synaptic design principle for future neuromorphic architectures." ] }
1504.06093
2147546255
There are over 1.2 million applications on the Google Play store today with a large number of competing applications for any given use or function. This creates challenges for users in selecting the right application. Moreover, some of the applications being of dubious origin, there are no mechanisms for users to understand who the applications are talking to, and to what extent. In our work, we first develop a lightweight characterization methodology that can automatically extract descriptions of application network behavior, and apply this to a large selection of applications from the Google App Store. We find several instances of overly aggressive communication with tracking websites, of excessive communication with ad related sites, and of communication with sites previously associated with malware activity. Our results underscore the need for a tool to provide users more visibility into the communication of apps installed on their mobile devices. To this end, we develop an Android application to do just this; our application monitors outgoing traffic, associates it with particular applications, and then identifies destinations in particular categories that we believe suspicious or else important to reveal to the end-user.
Application profiling: Given the lack of insights associated with the Android app store, a number of studies have focused on profiling mobile apps. In @cite_17 , the authors describe a multi-layer profiling approach that covers both system and network aspects. While capable of obtaining detailed behavioral profiles, the methodology is difficult to scale to a large number of applications, and cannot be implemented as an Android app. In contrast, our (lightweight) approach can characterize a large number of applications. Different from @cite_17 , our study focuses on many aspects of the destinations being connected to by the app. In @cite_13 , the authors describe techniques to fingerprint mobile apps based on their network behavior. Our work does not attempt to find application signatures, but to characterize and compare the network behavior of different apps.
{ "cite_N": [ "@cite_13", "@cite_17" ], "mid": [ "1988036170", "2158888459" ], "abstract": [ "An enormous number of apps have been developed for Android in recent years, making it one of the most popular mobile operating systems. However, the quality of the booming apps can be a concern [4]. Poorly engineered apps may contain security vulnerabilities that can severally undermine users' security and privacy. In this paper, we study a general category of vulnerabilities found in Android apps, namely the component hijacking vulnerabilities. Several types of previously reported app vulnerabilities, such as permission leakage, unauthorized data access, intent spoofing, and etc., belong to this category. We propose CHEX, a static analysis method to automatically vet Android apps for component hijacking vulnerabilities. Modeling these vulnerabilities from a data-flow analysis perspective, CHEX analyzes Android apps and detects possible hijack-enabling flows by conducting low-overhead reachability tests on customized system dependence graphs. To tackle analysis challenges imposed by Android's special programming paradigm, we employ a novel technique to discover component entry points in their completeness and introduce app splitting to model the asynchronous executions of multiple entry points in an app. We prototyped CHEX based on Dalysis, a generic static analysis framework that we built to support many types of analysis on Android app bytecode. We evaluated CHEX with 5,486 real Android apps and found 254 potential component hijacking vulnerabilities. 
The median execution time of CHEX on an app is 37.02 seconds, which is fast enough to be used in very high volume app vetting and testing scenarios.", "We examine two privacy controls for Android smartphones that empower users to run permission-hungry applications while protecting private data from being exfiltrated: (1) covertly substituting shadow data in place of data that the user wants to keep private, and (2) blocking network transmissions that contain data the user made available to the application for on-device use only. We retrofit the Android operating system to implement these two controls for use with unmodified applications. A key challenge of imposing shadowing and exfiltration blocking on existing applications is that these controls could cause side effects that interfere with user-desired functionality. To measure the impact of side effects, we develop an automated testing methodology that records screenshots of application executions both with and without privacy controls, then automatically highlights the visual differences between the different executions. We evaluate our privacy controls on 50 applications from the Android Market, selected from those that were both popular and permission-hungry. We find that our privacy controls can successfully reduce the effective permissions of the application without causing side effects for 66 of the tested applications. The remaining 34 of applications implemented user-desired functionality that required violating the privacy requirements our controls were designed to enforce; there was an unavoidable choice between privacy and user-desired functionality." ] }
1504.06093
2147546255
There are over 1.2 million applications on the Google Play store today with a large number of competing applications for any given use or function. This creates challenges for users in selecting the right application. Moreover, some of the applications being of dubious origin, there are no mechanisms for users to understand who the applications are talking to, and to what extent. In our work, we first develop a lightweight characterization methodology that can automatically extract descriptions of application network behavior, and apply this to a large selection of applications from the Google App Store. We find several instances of overly aggressive communication with tracking websites, of excessive communication with ad related sites, and of communication with sites previously associated with malware activity. Our results underscore the need for a tool to provide users more visibility into the communication of apps installed on their mobile devices. To this end, we develop an Android application to do just this; our application monitors outgoing traffic, associates it with particular applications, and then identifies destinations in particular categories that we believe suspicious or else important to reveal to the end-user.
Different from taint analysis approaches, SpanDex @cite_19 extends the Android Dalvik Virtual Machine to ensure that apps do not leak users' passwords. SpanDex analyzes implicit flows using techniques from symbolic execution to quantify the amount of information a process control flow reveals about a secret. ipShield @cite_3 monitors every sensor accessed by an app and uses this information to perform privacy risk assessment. Both SpanDex and ipShield require modifications to the mobile device OS, and neither focuses on suspicious destinations. Finally, in @cite_15 the authors study the privacy and security risks posed by embedded or in-app advertisement libraries used in current smartphones. Our study is much more generic, seeking to identify various characteristics of mobile apps' network behavior.
{ "cite_N": [ "@cite_19", "@cite_15", "@cite_3" ], "mid": [ "2212039644", "2087804676", "2237686084" ], "abstract": [ "This paper presents SpanDex, a set of extensions to Android's Dalvik virtual machine that ensures apps do not leak users' passwords. The primary technical challenge addressed by SpanDex is precise, sound, and efficient handling of implicit information flows (e.g., information transferred by a program's control flow). SpanDex handles implicit flows by borrowing techniques from symbolic execution to precisely quantify the amount of information a process' control flow reveals about a secret. To apply these techniques at runtime without sacrificing performance, SpanDex runs untrusted code in a data-flow sensitive sandbox, which limits the mix of operations that an app can perform on sensitive data. Experiments with a SpanDex prototype using 50 popular Android apps and an analysis of a large list of leaked passwords predicts that for 90% of users, an attacker would need over 80 login attempts to guess their password. Today the same attacker would need only one attempt for all users.", "In recent years, there has been explosive growth in smartphone sales, which is accompanied with the availability of a huge number of smartphone applications (or simply apps). End users or consumers are attracted by the many interesting features offered by these devices and the associated apps. The developers of these apps are also benefited by the prospect of financial compensation, either by selling their apps directly or by embedding one of the many ad libraries available on smartphone platforms. In this paper, we focus on potential privacy and security risks posed by these embedded or in-app advertisement libraries (henceforth \"ad libraries,\" for brevity). To this end, we study the popular Android platform and collect 100,000 apps from the official Android Market in March-May, 2011. 
Among these apps, we identify 100 representative in-app ad libraries (embedded in 52.1% of them) and further develop a system called AdRisk to systematically identify potential risks. In particular, we first decouple the embedded ad libraries from host apps and then apply our system to statically examine the ad libraries, ranging from whether they will upload privacy-sensitive information to remote (ad) servers or whether they will download untrusted code from remote servers. Our results show that most existing ad libraries collect private information: some of them may be used for legitimate targeting purposes (i.e., the user's location) while others are hard to justify by invasively collecting the information such as the user's call logs, phone number, browser bookmarks, or even the list of installed apps on the phone. Moreover, additional ones go a step further by making use of an unsafe mechanism to directly fetch and run code from the Internet, which immediately leads to serious security risks. Our investigation indicates the symbiotic relationship between embedded ad libraries and host apps is one main reason behind these exposed risks. These results clearly show the need for better regulating the way ad libraries are integrated in Android apps.", "Smart phones are used to collect and share personal data with untrustworthy third-party apps, often leading to data misuse and privacy violations. Unfortunately, state-of-the-art privacy mechanisms on Android provide inadequate access control and do not address the vulnerabilities that arise due to unmediated access to so-called innocuous sensors on these phones. We present ipShield, a framework that provides users with greater control over their resources at runtime. ipShield performs monitoring of every sensor accessed by an app and uses this information to perform privacy risk assessment. The risks are conveyed to the user as a list of possible inferences that can be drawn using the shared sensor data. 
Based on user-configured lists of allowed and private inferences, a recommendation consisting of binary privacy actions on individual sensors is generated. Finally, users are provided with options to override the recommended actions and manually configure context-aware fine-grained privacy rules. We implemented ipShield by modifying the AOSP on a Nexus 4 phone. Our evaluation indicates that running ipShield incurs negligible CPU and memory overhead and only a small reduction in battery life." ] }
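The destination-categorization step described in this record's abstract (flagging an app's outgoing traffic as tracker-, ad-, or malware-related) can be sketched as a lookup against per-category host lists. The category lists and hostnames below are hypothetical placeholders, not data from the cited study.

```python
# Sketch of categorizing an app's contacted destinations, as the
# monitoring tool described above does. The blocklists here are
# hypothetical placeholders, not the lists used in the actual study.

CATEGORIES = {
    "tracker": {"tracker.example.com", "analytics.example.net"},
    "ad": {"ads.example.org"},
    "malware-associated": {"badhost.example"},
}

def categorize_destinations(hostnames):
    """Map each contacted hostname to its category, or 'other'."""
    return {
        host: next(
            (cat for cat, hosts in CATEGORIES.items() if host in hosts),
            "other",
        )
        for host in hostnames
    }

observed = ["ads.example.org", "api.example.com", "tracker.example.com"]
report = categorize_destinations(observed)
```

In practice the host lists would come from curated blocklists and the observed hostnames from per-app traffic attribution, but the classification itself reduces to this kind of set membership.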
1504.05298
1849313446
Our aim is to estimate the perspective-effected geometric distortion of a scene from a video feed. In contrast to all previous work, we wish to achieve this using low-level, spatio-temporally local motion features used in commercial semi-automatic surveillance systems. We: (i) describe a dense algorithm which uses motion features to estimate the perspective distortion at each image locus and then polls all such local estimates to arrive at the globally best estimate, (ii) present an alternative coarse algorithm which subdivides the image frame into blocks, and uses motion features to derive block-specific motion characteristics and constrain the relationships between these characteristics, with the perspective estimate emerging as a result of a global optimization scheme, and (iii) report the results of an evaluation using nine large sets acquired using existing closed-circuit television (CCTV) cameras. Our findings demonstrate that both of the proposed methods are successful, their accuracy matching that of human labelling using complete visual data.
When there are known correspondences between 3D world points and their 2D projections, there is a series of algorithms described in the literature which successfully handle cases for different numbers of available correspondences using different (usually iterative) optimization schemes @cite_18 @cite_16 . When no explicit world-to-camera mapping data is available but the camera is in motion and the scene mostly static, perceived (image) motion and different types of constraints (e.g. probabilistic, epipolar, or motion parallax based) can be used instead @cite_11 @cite_24 . Yet stronger constraints must be employed when it is not possible to obtain 3D-to-2D correspondences and the camera is static. For example, for built-up scenes, outline maps of buildings have been used with success by a number of researchers @cite_2 . Similarly, in urban or indoor scenes the presence of many parallel lines (e.g. corridor or street boundaries) and their convergence towards the same vanishing point can be used to estimate the perspective @cite_14 @cite_22 . Yet others learn the appearance of different types of elementary structures, which allows them to build an approximate model of the scene @cite_4 @cite_3 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_22", "@cite_3", "@cite_24", "@cite_2", "@cite_16", "@cite_11" ], "mid": [ "2105340458", "2143676958", "", "", "2151992422", "1603094905", "2113644556", "", "2121350113" ], "abstract": [ "We introduce a framework for computing statistically optimal estimates of geometric reconstruction problems. While traditional algorithms often suffer from either local minima or non-optimality--or a combination of both--we pursue the goal of achieving global solutions of the statistically optimal cost-function. Our approach is based on a hierarchy of convex relaxations to solve non-convex optimization problems with polynomials. These convex relaxations generate a monotone sequence of lower bounds and we show how one can detect whether the global optimum is attained at a given relaxation. The technique is applied to a number of classical vision problems: triangulation, camera pose, homography estimation and last, but not least, epipolar geometry estimation. Experimental validation on both synthetic and real data is provided. In practice, only a few relaxations are needed for attaining the global optimum.", "The projections of world parallel lines in an image intersect at a single point called the vanishing point (VP). VPs are a key ingredient for various vision tasks including rotation estimation and 3D reconstruction. Urban environments generally exhibit some dominant orthogonal VPs. Given a set of lines extracted from a calibrated image, this paper aims to (1) determine the line clustering, i.e. find which line belongs to which VP, and (2) estimate the associated orthogonal VPs. None of the existing methods is fully satisfactory because of the inherent difficulties of the problem, such as the local minima and the chicken-and-egg aspect. In this paper, we present a new algorithm that solves the problem in a mathematically guaranteed globally optimal manner and can inherently enforce the VP orthogonality. 
Specifically, we formulate the task as a consensus set maximization problem over the rotation search space, and further solve it efficiently by a branch-and-bound procedure based on the Interval Analysis theory. Our algorithm has been validated successfully on sets of challenging real images as well as synthetic data sets.", "", "", "Supplying realistically textured 3D city models at ground level promises to be useful for pre-visualizing upcoming traffic situations in car navigation systems. Because this pre-visualization can be rendered from the expected future viewpoints of the driver, the required maneuver will be more easily understandable. 3D city models can be reconstructed from the imagery recorded by surveying vehicles. The vastness of image material gathered by these vehicles, however, puts extreme demands on vision algorithms to ensure their practical usability. Algorithms need to be as fast as possible and should result in compact, memory efficient 3D city models for future ease of distribution and visualization. For the considered application, these are not contradictory demands. Simplified geometry assumptions can speed up vision algorithms while automatically guaranteeing compact geometry models. In this paper, we present a novel city modeling framework which builds upon this philosophy to create 3D content at high speed. Objects in the environment, such as cars and pedestrians, may however disturb the reconstruction, as they violate the simplified geometry assumptions, leading to visually unpleasant artifacts and degrading the visual realism of the resulting 3D city model. Unfortunately, such objects are prevalent in urban scenes. We therefore extend the reconstruction framework by integrating it with an object recognition module that automatically detects cars in the input video streams and localizes them in 3D. The two components of our system are tightly integrated and benefit from each other's continuous input. 
3D reconstruction delivers geometric scene context, which greatly helps improve detection precision. The detected car locations, on the other hand, are used to instantiate virtual placeholder models which augment the visual realism of the reconstructed city model.", "This paper is an argument for two assertions: First, that by representing correspondence probabilistically, drastically more correspondence information can be extracted from images. Second, that by increasing the amount of correspondence information used, more accurate egomotion estimation is possible. We present a novel approach illustrating these principles. We first present a framework for using Gabor filters to generate such correspondence probability distributions. Essentially, different filters 'vote' on the correct correspondence in a way giving their relative likelihoods. Next, we use the epipolar constraint to generate a probability distribution over the possible motions. As the amount of correspondence information is increased, the set of motions yielding significant probabilities is shown to 'shrink' to the correct motion.", "A framework is presented for estimating the pose of a camera based on images extracted from a single omnidirectional image of an urban scene, given a 2D map with building outlines with no 3D geometric information nor appearance data. The framework attempts to identify vertical corner edges of buildings in the query image, which we term VCLH, as well as the neighboring plane normals, through vanishing point analysis. A bottom-up process further groups VCLH into elemental planes and subsequently into 3D structural fragments modulo a similarity transformation. A geometric hashing lookup allows us to rapidly establish multiple candidate correspondences between the structural fragments and the 2D map building contours. A voting-based camera pose estimation method is then employed to recover the correspondences admitting a camera pose solution with high consensus. 
In a dataset that is even challenging for humans, the system returned a top-30 ranking for correct matches out of 3600 camera pose hypotheses (0.83% selectivity) for 50.9% of queries.", "", "The estimation of camera egomotion is an old problem in computer vision. Since the 1980s, many approaches based on both the discrete and the differential epipolar constraint have been proposed. The discrete case is used mainly in self-calibrated stereoscopic systems, whereas the differential case deals with a single moving camera. This article surveys several methods for 3D motion estimation unifying the mathematics convention which are then adapted to the common case of a mobile robot moving on a plane. Experimental results are given on synthetic data covering more than 0.5 million estimations. These surveyed algorithms have been programmed and are available on the Internet." ] }
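The vanishing-point constraint mentioned in the related work above (projections of parallel world lines converging at a single image point) has a compact formulation in homogeneous coordinates: the line through two points is their cross product, and the intersection of two lines is again their cross product. This is a generic textbook construction, not the algorithm of any cited paper; the example points are arbitrary.

```python
# Two image lines that are projections of parallel world lines (e.g. the
# sides of a corridor) intersect at the vanishing point. In homogeneous
# coordinates both "line through two points" and "intersection of two
# lines" are cross products. Generic illustration, not a cited method.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def intersection(l1, l2):
    """Dehomogenized intersection point of two homogeneous lines."""
    x, y, w = cross(l1, l2)
    return (x / w, y / w)

# Two converging image lines; their common intersection is the VP.
vp = intersection(line_through((0, 0), (4, 1)),
                  line_through((0, 2), (4, 2.5)))
# -> (16.0, 4.0)
```

With more than two lines (the realistic case), one would estimate the VP robustly, e.g. by least squares or consensus over all pairwise intersections.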
1504.05445
2950822111
In this note, we provide a new characterization of Aldous' Brownian continuum random tree as the unique fixed point of a certain natural operation on continuum trees (which gives rise to a recursive distributional equation). We also show that this fixed point is attractive.
As mentioned in Subsection , Aldous @cite_24 shows that, in a sense, we can "reverse" the operator @math . Indeed, we can decompose a BCRT by picking three uniform points and splitting at the branch-point between them; we obtain three independent BCRTs, Brownian-rescaled by @math . Each of these subtrees is doubly marked, one mark being the original uniform point and the other being the former branch-point. Perhaps a more natural way of phrasing the reversal, which yields only a single mark in each subtree, would be to pick each of the branch-points in the tree with probability given by 6 times the product of the masses of the subtrees into which removal of that branch-point splits the tree.
{ "cite_N": [ "@cite_24" ], "mid": [ "2016099245" ], "abstract": [ "Improved insert teeth and holder assemblies are provided for a material breaker machine intended for use in reducing chunks or pieces of wood, met al and other materials to small size. The present invention includes an insert tooth member having a pair of edges, either of which may serve as a cutting edge. An insert tooth holder is provided for mounting of the insert tooth. The insert tooth and insert holder interengage through a raised portion on the one and a recessed portion on the other which mate to form a positive mechanical lock. The insert tooth is reversibly mounted on the insert holder to allow either of the pair of edges to assume the position of the cutting edge. The location of the interengaged components allows them to be fully protected from the material being cut, thus minimizing wear and damage. In one embodiment, the insert tooth is inclined at an angle with respect to the insert holder, thus providing relief against back-up of material being processed and allowing the material to feed more quickly." ] }
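The recursive distributional equation alluded to above can be written compactly. This is a sketch in standard notation under the usual statement of Aldous' decomposition: @math (T_1, T_2, T_3) denote independent copies of the BCRT, independent of the mass split, and the Dirichlet(1/2, 1/2, 1/2) law of the masses and the square-root rescaling of distances are the standard facts from that decomposition.

```latex
% Splitting the BCRT at the branch-point of three uniformly chosen points
% yields three independent BCRTs, each rescaled in distance by the square
% root of its mass; the masses follow a Dirichlet(1/2, 1/2, 1/2) law.
\mathcal{T} \;\stackrel{d}{=}\;
  \mathrm{glue}\bigl(\sqrt{\Delta_1}\,\mathcal{T}_1,\;
                     \sqrt{\Delta_2}\,\mathcal{T}_2,\;
                     \sqrt{\Delta_3}\,\mathcal{T}_3\bigr),
\qquad
(\Delta_1, \Delta_2, \Delta_3) \sim \mathrm{Dirichlet}\bigl(\tfrac12, \tfrac12, \tfrac12\bigr).
```

Here @math (glue) denotes joining the three rescaled trees at their marked points; the fixed-point characterization says the BCRT is the unique (and attractive) solution of this equation.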
1504.04663
1744901186
Influential users have great potential for accelerating information dissemination and acquisition on Twitter. How to measure the influence of Twitter users has attracted significant academic and industrial attention. Existing influence measurement techniques are vulnerable to sybil users that are thriving on Twitter. Although sybil defenses for online social networks have been extensively investigated, they commonly assume unique mappings from human-established trust relationships to online social associations and thus do not apply to Twitter where users can freely follow each other. This paper presents TrueTop, the first sybil-resilient system to measure the influence of Twitter users. TrueTop is rooted in two observations from real Twitter datasets. First, although non-sybil users may incautiously follow strangers, they tend to be more careful and selective in retweeting, replying to, and mentioning other users. Second, influential users usually get many more retweets, replies, and mentions than non-influential users. Detailed theoretical studies and synthetic simulations show that TrueTop can generate very accurate influence measurement results with strong resilience to sybil attacks.
There has been significant effort to explore social networks for effective sybil defenses in various distributed systems, such as SybilGuard @cite_3 and SybilLimit @cite_22 for P2P networks, SumUp @cite_18 for online voting systems, and SybilInfer @cite_0 , SybilDefender @cite_35 , and SybilRank @cite_39 for online social networks. A common assumption is that each node can be mapped to a node in an undirected social graph where every edge corresponds to a human-established trust relation. Although the attacker can create many sybil accounts, he cannot establish an arbitrarily large number of social trust relations with non-sybil users. Moreover, all schemes assume that the honest region is fast mixing and separate from the sybil region. Building upon these two key insights, these schemes apply varying community detection methods @cite_43 to limit the number of sybil users admitted into various application scenarios, or their impact therein.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_22", "@cite_3", "@cite_39", "@cite_0", "@cite_43" ], "mid": [ "2118765159", "1587819022", "2110801527", "2101890615", "2168508162", "1551760018", "2153644028" ], "abstract": [ "Distributed systems without trusted identities are particularly vulnerable to sybil attacks, where an adversary creates multiple bogus identities to compromise the running of the system. This paper presents SybilDefender, a sybil defense mechanism that leverages the network topologies to defend against sybil attacks in social networks. Based on performing a limited number of random walks within the social graphs, SybilDefender is efficient and scalable to large social networks. Our experiments on two 3,000,000 node real-world social topologies show that SybilDefender outperforms the state of the art by one to two orders of magnitude in both accuracy and running time. SybilDefender can effectively identify the sybil nodes and detect the sybil community around a sybil node, even when the number of sybil nodes introduced by each attack edge is close to the theoretically detectable lower bound. Besides, we propose two approaches to limiting the number of attack edges in online social networks. The survey results of our Facebook application show that the assumption made by previous work that all the relationships in social networks are trusted does not apply to online social networks, and it is feasible to limit the number of attack edges in online social networks by relationship rating.", "Obtaining user opinion (using votes) is essential to ranking user-generated online content. However, any content voting system is susceptible to the Sybil attack where adversaries can out-vote real users by creating many Sybil identities. In this paper, we present SumUp, a Sybil-resilient vote aggregation system that leverages the trust network among users to defend against Sybil attacks. 
SumUp uses the technique of adaptive vote flow aggregation to limit the number of bogus votes cast by adversaries to no more than the number of attack edges in the trust network (with high probability). Using user feedback on votes, SumUp further restricts the voting power of adversaries who continuously misbehave to below the number of their attack edges. Using detailed evaluation of several existing social networks (YouTube, Flickr), we show SumUp's ability to handle Sybil attacks. By applying SumUp on the voting trace of Digg, a popular news voting site, we have found strong evidence of attack on many articles marked \"popular\" by Digg.", "Open-access distributed systems such as peer-to-peer systems are particularly vulnerable to sybil attacks, where a malicious user creates multiple fake identities (called sybil nodes). Without a trusted central authority that can tie identities to real human beings, defending against sybil attacks is quite challenging. Among the small number of decentralized approaches, our recent SybilGuard protocol leverages a key insight on social networks to bound the number of sybil nodes accepted. Despite its promising direction, SybilGuard can allow a large number of sybil nodes to be accepted. Furthermore, SybilGuard assumes that social networks are fast-mixing, which has never been confirmed in the real world. This paper presents the novel SybilLimit protocol that leverages the same insight as SybilGuard, but offers dramatically improved and near-optimal guarantees. The number of sybil nodes accepted is reduced by a factor of Θ(√n), or around 200 times in our experiments for a million-node system. We further prove that SybilLimit's guarantee is at most a log n factor away from optimal when considering approaches based on fast-mixing social networks. 
This validates the fundamental assumption behind SybilLimit's and SybilGuard's approach.", "Peer-to-peer and other decentralized, distributed systems are known to be particularly vulnerable to sybil attacks. In a sybil attack, a malicious user obtains multiple fake identities and pretends to be multiple, distinct nodes in the system. By controlling a large fraction of the nodes in the system, the malicious user is able to \"out vote\" the honest users in collaborative tasks such as Byzantine failure defenses. This paper presents SybilGuard, a novel protocol for limiting the corruptive influences of sybil attacks. Our protocol is based on the \"social network\" among user identities, where an edge between two identities indicates a human-established trust relationship. Malicious users can create many identities but few trust relationships. Thus, there is a disproportionately-small \"cut\" in the graph between the sybil nodes and the honest nodes. SybilGuard exploits this property to bound the number of identities a malicious user can create. We show the effectiveness of SybilGuard both analytically and experimentally.", "Users increasingly rely on the trustworthiness of the information exposed on Online Social Networks (OSNs). In addition, OSN providers base their business models on the marketability of this information. However, OSNs suffer from abuse in the form of the creation of fake accounts, which do not correspond to real humans. Fakes can introduce spam, manipulate online rating, or exploit knowledge extracted from the network. OSN operators currently expend significant resources to detect, manually verify, and shut down fake accounts. Tuenti, the largest OSN in Spain, dedicates 14 full-time employees in that task alone, incurring a significant monetary cost. Such a task has yet to be successfully automated because of the difficulty in reliably capturing the diverse behavior of fake and real OSN profiles. 
We introduce a new tool in the hands of OSN operators, which we call SybilRank. It relies on social graph properties to rank users according to their perceived likelihood of being fake (Sybils). SybilRank is computationally efficient and can scale to graphs with hundreds of millions of nodes, as demonstrated by our Hadoop prototype. We deployed SybilRank in Tuenti's operation center. We found that ∼90% of the 200K accounts that SybilRank designated as most likely to be fake, actually warranted suspension. On the other hand, with Tuenti's current user-report-based approach only ∼5% of the inspected accounts are indeed fake.", "SybilInfer is an algorithm for labelling nodes in a social network as honest users or Sybils controlled by an adversary. At the heart of SybilInfer lies a probabilistic model of honest social networks, and an inference engine that returns potential regions of dishonest nodes. The Bayesian inference approach to Sybil detection comes with the advantage that each label has an assigned probability, indicating its degree of certainty. We prove through analytical results as well as experiments on simulated and real-world network topologies that, given standard constraints on the adversary, SybilInfer is secure, in that it successfully distinguishes between honest and dishonest nodes and is not susceptible to manipulation by the adversary. Furthermore, our results show that SybilInfer outperforms state of the art algorithms, both in being more widely applicable, as well as providing vastly more accurate results.", "Recently, there has been much excitement in the research community over using social networks to mitigate multiple identity, or Sybil, attacks. A number of schemes have been proposed, but they differ greatly in the algorithms they use and in the networks upon which they are evaluated. 
As a result, the research community lacks a clear understanding of how these schemes compare against each other, how well they would work on real-world social networks with different structural properties, or whether there exist other (potentially better) ways of Sybil defense. In this paper, we show that, despite their considerable differences, existing Sybil defense schemes work by detecting local communities (i.e., clusters of nodes more tightly knit than the rest of the graph) around a trusted node. Our finding has important implications for both existing and future designs of Sybil defense schemes. First, we show that there is an opportunity to leverage the substantial amount of prior work on general community detection algorithms in order to defend against Sybils. Second, our analysis reveals the fundamental limits of current social network-based Sybil defenses: We demonstrate that networks with well-defined community structure are inherently more vulnerable to Sybil attacks, and that, in such networks, Sybils can carefully target their links in order to make their attacks more effective." ] }
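The fast-mixing assumption discussed in this record can be illustrated numerically: on a small graph with two tightly knit communities joined by a single edge, a lazy random walk approaches the degree-proportional stationary distribution slowly, which is exactly the structural property the schemes above exploit. The six-node graph below is a hypothetical toy example, not a dataset from any cited work.

```python
# Toy illustration of mixing: a lazy random walk on two triangles joined
# by one edge (a hypothetical toy graph), tracking total-variation
# distance to the degree-proportional stationary distribution.

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}

n = len(adj)
deg_sum = sum(len(v) for v in adj.values())
stationary = [len(adj[i]) / deg_sum for i in range(n)]

def lazy_step(dist):
    """One lazy-walk step: stay with prob. 1/2, else move to a uniform neighbor."""
    out = [0.5 * p for p in dist]
    for i, p in enumerate(dist):
        for j in adj[i]:
            out[j] += 0.5 * p / len(adj[i])
    return out

def tv_distance(dist):
    """Total-variation distance to the stationary distribution."""
    return 0.5 * sum(abs(p - q) for p, q in zip(dist, stationary))

dist = [1.0] + [0.0] * (n - 1)   # walk starts at node 0
tv = []
for t in range(300):
    tv.append(tv_distance(dist))
    dist = lazy_step(dist)

# tv shrinks toward 0; the single inter-community edge is the bottleneck
# that slows convergence, mirroring the community-structure argument above.
```

The mixing time is governed by the second largest eigenvalue modulus of the walk matrix, which is the quantity the measurement study cited in the next record estimates on real social graphs.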
1504.04663
1744901186
Influential users have great potential for accelerating information dissemination and acquisition on Twitter. How to measure the influence of Twitter users has attracted significant academic and industrial attention. Existing influence measurement techniques are vulnerable to sybil users that are thriving on Twitter. Although sybil defenses for online social networks have been extensively investigated, they commonly assume unique mappings from human-established trust relationships to online social associations and thus do not apply to Twitter where users can freely follow each other. This paper presents TrueTop, the first sybil-resilient system to measure the influence of Twitter users. TrueTop is rooted in two observations from real Twitter datasets. First, although non-sybil users may incautiously follow strangers, they tend to be more careful and selective in retweeting, replying to, and mentioning other users. Second, influential users usually get many more retweets, replies, and mentions than non-influential users. Detailed theoretical studies and synthetic simulations show that TrueTop can generate very accurate influence measurement results with strong resilience to sybil attacks.
Recent measurement studies have questioned these two assumptions. Yang @cite_51 showed that sybil users on the Facebook-like Renren network can have their friend requests accepted by many non-sybil users. A similar result targeting Facebook was reported in @cite_25 . Blending sybil users into the non-sybil community would reduce the effectiveness of the existing sybil defenses @cite_20 . In addition, the work in @cite_8 @cite_19 @cite_16 @cite_24 @cite_2 showed that sybil users successfully acquired a number of followings from non-sybil users on Twitter. All these findings indicate that neither bidirectional friendships in Facebook-like OSNs nor unidirectional followings in Twitter-like microblogging systems can be used as a trustworthy mirror of real social relations. Moreover, it has been shown in @cite_6 @cite_50 that the mixing time of many practical and directed social graphs is much longer than previously expected. Since neither of the two key assumptions underlying the schemes in @cite_3 @cite_18 @cite_0 @cite_43 @cite_22 @cite_35 @cite_7 holds in directed networks such as Twitter, they are not directly applicable to our targeted scenario. Our TrueTop system does not rely on either assumption.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_22", "@cite_7", "@cite_8", "@cite_6", "@cite_3", "@cite_24", "@cite_19", "@cite_0", "@cite_43", "@cite_50", "@cite_2", "@cite_16", "@cite_51", "@cite_25", "@cite_20" ], "mid": [ "2118765159", "1587819022", "2110801527", "", "2100974526", "2127503167", "2101890615", "", "2005556331", "1551760018", "2153644028", "", "", "", "2092277251", "1992685726", "2045683786" ], "abstract": [ "Distributed systems without trusted identities are particularly vulnerable to sybil attacks, where an adversary creates multiple bogus identities to compromise the running of the system. This paper presents SybilDefender, a sybil defense mechanism that leverages the network topologies to defend against sybil attacks in social networks. Based on performing a limited number of random walks within the social graphs, SybilDefender is efficient and scalable to large social networks. Our experiments on two 3,000,000 node real-world social topologies show that SybilDefender outperforms the state of the art by one to two orders of magnitude in both accuracy and running time. SybilDefender can effectively identify the sybil nodes and detect the sybil community around a sybil node, even when the number of sybil nodes introduced by each attack edge is close to the theoretically detectable lower bound. Besides, we propose two approaches to limiting the number of attack edges in online social networks. The survey results of our Facebook application show that the assumption made by previous work that all the relationships in social networks are trusted does not apply to online social networks, and it is feasible to limit the number of attack edges in online social networks by relationship rating.", "Obtaining user opinion (using votes) is essential to ranking user-generated online content. However, any content voting system is susceptible to the Sybil attack where adversaries can out-vote real users by creating many Sybil identities. 
In this paper, we present SumUp, a Sybil-resilient vote aggregation system that leverages the trust network among users to defend against Sybil attacks. SumUp uses the technique of adaptive vote flow aggregation to limit the number of bogus votes cast by adversaries to no more than the number of attack edges in the trust network (with high probability). Using user feedback on votes, SumUp further restricts the voting power of adversaries who continuously misbehave to below the number of their attack edges. Using detailed evaluation of several existing social networks (YouTube, Flickr), we show SumUp's ability to handle Sybil attacks. By applying SumUp on the voting trace of Digg, a popular news voting site, we have found strong evidence of attack on many articles marked \"popular\" by Digg.", "Open-access distributed systems such as peer-to-peer systems are particularly vulnerable to sybil attacks, where a malicious user creates multiple fake identities (called sybil nodes). Without a trusted central authority that can tie identities to real human beings, defending against sybil attacks is quite challenging. Among the small number of decentralized approaches, our recent SybilGuard protocol leverages a key insight on social networks to bound the number of sybil nodes accepted. Despite its promising direction, SybilGuard can allow a large number of sybil nodes to be accepted. Furthermore, SybilGuard assumes that social networks are fast-mixing, which has never been confirmed in the real world. This paper presents the novel SybilLimit protocol that leverages the same insight as SybilGuard, but offers dramatically improved and near-optimal guarantees. The number of sybil nodes accepted is reduced by a factor of Θ(√n), or around 200 times in our experiments for a million-node system. We further prove that SybilLimit's guarantee is at most a log n factor away from optimal when considering approaches based on fast-mixing social networks. 
Finally, based on three large-scale real-world social networks, we provide the first evidence that real-world social networks are indeed fast-mixing. This validates the fundamental assumption behind SybilLimit's and SybilGuard's approach.", "", "Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information. In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought. We demonstrate a web service that tracks political memes in Twitter and helps detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections. We also present some cases of abusive behaviors uncovered by our service. Our web service is based on an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events.", "Social networks provide interesting algorithmic properties that can be used to bootstrap the security of distributed systems. For example, it is widely believed that social networks are fast mixing, and many recently proposed designs of such systems make crucial use of this property. However, whether real-world social networks are really fast mixing is not verified before, and this could potentially affect the performance of such systems based on the fast mixing property. To address this problem, we measure the mixing time of several social graphs, the time that it takes a random walk on the graph to approach the stationary distribution of that graph, using two techniques. First, we use the second largest eigenvalue modulus which bounds the mixing time. Second, we sample initial distributions and compute the random walk length required to achieve probability distributions close to the stationary distribution. 
Our findings show that the mixing time of social graphs is much larger than anticipated, and being used in literature, and this implies that either the current security systems based on fast mixing have weaker utility guarantees or have to be less efficient, with less security guarantees, in order to compensate for the slower mixing.", "Peer-to-peer and other decentralized,distributed systems are known to be particularly vulnerable to sybil attacks. In a sybil attack,a malicious user obtains multiple fake identities and pretends to be multiple, distinct nodes in the system. By controlling a large fraction of the nodes in the system,the malicious user is able to \"out vote\" the honest users in collaborative tasks such as Byzantine failure defenses. This paper presents SybilGuard, a novel protocol for limiting the corruptive influences of sybil attacks.Our protocol is based on the \"social network \"among user identities, where an edge between two identities indicates a human-established trust relationship. Malicious users can create many identities but few trust relationships. Thus, there is a disproportionately-small \"cut\" in the graph between the sybil nodes and the honest nodes. SybilGuard exploits this property to bound the number of identities a malicious user can create.We show the effectiveness of SybilGuard both analytically and experimentally.", "", "Recently, Twitter has emerged as a popular platform for discovering real-time information on the Web, such as news stories and people's reaction to them. Like the Web, Twitter has become a target for link farming, where users, especially spammers, try to acquire large numbers of follower links in the social network. Acquiring followers not only increases the size of a user's direct audience, but also contributes to the perceived influence of the user, which in turn impacts the ranking of the user's tweets by search engines. 
In this paper, we first investigate link farming in the Twitter network and then explore mechanisms to discourage the activity. To this end, we conducted a detailed analysis of links acquired by over 40,000 spammer accounts suspended by Twitter. We find that link farming is widespread and that a majority of spammers' links are farmed from a small fraction of Twitter users, the social capitalists, who are themselves seeking to amass social capital and links by following back anyone who follows them. Our findings shed light on the social dynamics that are at the root of the link farming problem in Twitter network and they have important implications for future designs of link spam defenses. In particular, we show that a simple user ranking scheme that penalizes users for connecting to spammers can effectively address the problem by disincentivizing users from linking with other users simply to gain influence.", "SybilInfer is an algorithm for labelling nodes in a social network as honest users or Sybils controlled by an adversary. At the heart of SybilInfer lies a probabilistic model of honest social networks, and an inference engine that returns potential regions of dishonest nodes. The Bayesian inference approach to Sybil detection comes with the advantage that each label has an assigned probability, indicating its degree of certainty. We prove through analytical results as well as experiments on simulated and real-world network topologies that, given standard constraints on the adversary, SybilInfer is secure, in that it successfully distinguishes between honest and dishonest nodes and is not susceptible to manipulation by the adversary. Furthermore, our results show that SybilInfer outperforms state of the art algorithms, both in being more widely applicable, as well as providing vastly more accurate results.", "Recently, there has been much excitement in the research community over using social networks to mitigate multiple identity, or Sybil, attacks.
A number of schemes have been proposed, but they differ greatly in the algorithms they use and in the networks upon which they are evaluated. As a result, the research community lacks a clear understanding of how these schemes compare against each other, how well they would work on real-world social networks with different structural properties, or whether there exist other (potentially better) ways of Sybil defense. In this paper, we show that, despite their considerable differences, existing Sybil defense schemes work by detecting local communities (i.e., clusters of nodes more tightly knit than the rest of the graph) around a trusted node. Our finding has important implications for both existing and future designs of Sybil defense schemes. First, we show that there is an opportunity to leverage the substantial amount of prior work on general community detection algorithms in order to defend against Sybils. Second, our analysis reveals the fundamental limits of current social network-based Sybil defenses: We demonstrate that networks with well-defined community structure are inherently more vulnerable to Sybil attacks, and that, in such networks, Sybils can carefully target their links in order make their attacks more effective.", "", "", "", "Sybil accounts are fake identities created to unfairly increase the power or resources of a single malicious user. Researchers have long known about the existence of Sybil accounts in online communities such as file-sharing systems, but they have not been able to perform large-scale measurements to detect them or measure their activities. In this article, we describe our efforts to detect, characterize, and understand Sybil account activity in the Renren Online Social Network (OSN). We use ground truth provided by Renren Inc. to build measurement-based Sybil detectors and deploy them on Renren to detect more than 100,000 Sybil accounts. Using our full dataset of 650,000 Sybils, we examine several aspects of Sybil behavior. 
First, we study their link creation behavior and find that contrary to prior conjecture, Sybils in OSNs do not form tight-knit communities. Next, we examine the fine-grained behaviors of Sybils on Renren using clickstream data. Third, we investigate behind-the-scenes collusion between large groups of Sybils. Our results reveal that Sybils with no explicit social ties still act in concert to launch attacks. Finally, we investigate enhanced techniques to identify stealthy Sybils. In summary, our study advances the understanding of Sybil behavior on OSNs and shows that Sybils can effectively avoid existing community-based Sybil detectors. We hope that our results will foster new research on Sybil detection that is based on novel types of Sybil features.", "Online Social Networks (OSNs) have become an integral part of today's Web. Politicians, celebrities, revolutionists, and others use OSNs as a podium to deliver their message to millions of active web users. Unfortunately, in the wrong hands, OSNs can be used to run astroturf campaigns to spread misinformation and propaganda. Such campaigns usually start off by infiltrating a targeted OSN on a large scale. In this paper, we evaluate how vulnerable OSNs are to a large-scale infiltration by socialbots: computer programs that control OSN accounts and mimic real users. We adopt a traditional web-based botnet design and built a Socialbot Network (SbN): a group of adaptive socialbots that are orchestrated in a command-and-control fashion. We operated such an SbN on Facebook---a 750 million user OSN---for about 8 weeks. We collected data related to users' behavior in response to a large-scale infiltration where socialbots were used to connect to a large number of Facebook users. 
Our results show that (1) OSNs, such as Facebook, can be infiltrated with a success rate of up to 80%, (2) depending on users' privacy settings, a successful infiltration can result in privacy breaches where even more users' data are exposed when compared to a purely public access, and (3) in practice, OSN security defenses, such as the Facebook Immune System, are not effective enough in detecting or stopping a large-scale infiltration as it occurs.", "A Sybil attack can inject many forged identities (called Sybils) to subvert a target system. Because of the severe damage that Sybil attacks can cause to a wide range of networking applications, there has been a proliferation of Sybil defense schemes. Of particular attention are those that explore the online social networks (OSNs) of users in a victim system in different ways. Unfortunately, while effective Sybil defense solutions are urgently needed, it is unclear how effective these OSN-based solutions are under different contexts. For example, all current approaches have focused on a common, classical scenario where it is difficult for an attacker to link Sybils with honest users and create attack edges; however, researchers have found recently that a modern scenario also becomes typical where an attacker can employ simple strategies to obtain many attack edges. In this work we analyze the state of OSN-based Sybil defenses. Our objective is not to design yet another solution, but rather to thoroughly analyze, measure, and compare how well or inadequate the well-known existing OSN-based approaches perform under both the classical scenario and the modern scenario. Although these approaches mostly perform well under the classical scenario, we find that under the modern scenario they are vulnerable to Sybil attacks. As shown in our quantitative analysis, very often a Sybil only needs a handful of attack edges to disguise itself as a benign node, and there is only a limited success in tolerating Sybils.
Our study further points to capabilities a new solution must possess; in particular, in defense against Sybils under the modern scenario, we anticipate that a new approach that enriches the structure of a social graph with more information about the relations between its users can work more effectively." ] }
1504.04663
1744901186
Influential users have great potential for accelerating information dissemination and acquisition on Twitter. How to measure the influence of Twitter users has attracted significant academic and industrial attention. Existing influence measurement techniques are vulnerable to sybil users that are thriving on Twitter. Although sybil defenses for online social networks have been extensively investigated, they commonly assume unique mappings from human-established trust relationships to online social associations and thus do not apply to Twitter where users can freely follow each other. This paper presents TrueTop, the first sybil-resilient system to measure the influence of Twitter users. TrueTop is rooted in two observations from real Twitter datasets. First, although non-sybil users may incautiously follow strangers, they tend to be more careful and selective in retweeting, replying to, and mentioning other users. Second, influential users usually get much more retweets, replies, and mentions than non-influential users. Detailed theoretical studies and synthetic simulations show that TrueTop can generate very accurate influence measurement results with strong resilience to sybil attacks.
As a special kind of sybil user, spammers on Twitter have attracted considerable attention in recent years. A common approach adopted by existing work @cite_17 @cite_40 @cite_30 @cite_28 @cite_46 @cite_11 @cite_36 @cite_5 is to detect spammers by measuring the behavioral differences between spammers and legitimate users. Since spammers are only one type of sybil user, however, the detection of general sybil users on Twitter remains an open challenge.
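The behavioral-difference approach shared by these detectors can be illustrated with a small, hypothetical sketch: score each account on a few behavioral features and flag outliers. The features (URL ratio, following-to-follower ratio) and thresholds below are illustrative assumptions, not the feature sets of any of the cited systems:

```python
# Hypothetical sketch of feature-based spammer detection: compute a
# couple of behavioral features per account and apply thresholds.
# Feature choices and thresholds are illustrative assumptions.

def behavioral_features(account):
    """account: dict with 'tweets' (list of texts), 'followers',
    'following'. Returns (url_ratio, following/follower ratio)."""
    tweets = account["tweets"]
    url_ratio = sum("http" in t for t in tweets) / max(len(tweets), 1)
    ff_ratio = account["following"] / max(account["followers"], 1)
    return (url_ratio, ff_ratio)

def looks_like_spammer(account, url_thresh=0.8, ff_thresh=10.0):
    url_ratio, ff_ratio = behavioral_features(account)
    # Spammers tend to post mostly links and follow far more
    # accounts than follow them back.
    return url_ratio >= url_thresh and ff_ratio >= ff_thresh

bot = {"tweets": ["buy now http://x", "deal http://y"],
       "followers": 3, "following": 500}
human = {"tweets": ["lunch was great", "see you http://z"],
         "followers": 120, "following": 150}
```

In practice the cited detectors feed many such features into trained classifiers rather than hand-set thresholds; the sketch only shows the shape of the approach.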
{ "cite_N": [ "@cite_30", "@cite_11", "@cite_28", "@cite_36", "@cite_40", "@cite_5", "@cite_46", "@cite_17" ], "mid": [ "2165701072", "1815362064", "2163764145", "2397135192", "1986678144", "146417747", "2407789839", "9223698" ], "abstract": [ "In this work we present a characterization of spam on Twitter. We find that 8% of 25 million URLs posted to the site point to phishing, malware, and scams listed on popular blacklists. We analyze the accounts that send spam and find evidence that it originates from previously legitimate accounts that have been compromised and are now being puppeteered by spammers. Using clickthrough data, we analyze spammers' use of features unique to Twitter and the degree that they affect the success of spam. We find that Twitter is a highly successful platform for coercing users to visit spam pages, with a clickthrough rate of 0.13%, compared to much lower rates previously reported for email spam. We group spam URLs into campaigns and identify trends that uniquely distinguish phishing, malware, and spam, to gain an insight into the underlying techniques used to attract users. Given the absence of spam filtering on Twitter, we examine whether the use of URL blacklists would help to significantly stem the spread of Twitter spam. Our results indicate that blacklists are too slow at identifying new threats, allowing more than 90% of visitors to view a page before it becomes blacklisted. We also find that even if blacklist delays were reduced, the use by spammers of URL shortening services for obfuscation negates the potential gains unless tools that use blacklists develop more sophisticated spam filtering.", "As web services such as Twitter, Facebook, Google, and Yahoo now dominate the daily activities of Internet users, cyber criminals have adapted their monetization strategies to engage users within these walled gardens.
To facilitate access to these sites, an underground market has emerged where fraudulent accounts - automatically generated credentials used to perpetrate scams, phishing, and malware - are sold in bulk by the thousands. In order to understand this shadowy economy, we investigate the market for fraudulent Twitter accounts to monitor prices, availability, and fraud perpetrated by 27 merchants over the course of a 10-month period. We use our insights to develop a classifier to retroactively detect several million fraudulent accounts sold via this marketplace, 95% of which we disable with Twitter's help. During active months, the 27 merchants we monitor appeared responsible for registering 10-20% of all accounts later flagged for spam by Twitter, generating $127-459K for their efforts.", "On the heels of the widespread adoption of web services such as social networks and URL shorteners, scams, phishing, and malware have become regular threats. Despite extensive research, email-based spam filtering techniques generally fall short for protecting other web services. To better address this need, we present Monarch, a real-time system that crawls URLs as they are submitted to web services and determines whether the URLs direct to spam. We evaluate the viability of Monarch and the fundamental challenges that arise due to the diversity of web service spam. We show that Monarch can provide accurate, real-time protection, but that the underlying characteristics of spam do not generalize across web services. In particular, we find that spam targeting email qualitatively differs in significant ways from spam campaigns targeting Twitter. We explore the distinctions between email and Twitter spam, including the abuse of public web hosting and redirector services.
Finally, we demonstrate Monarch's scalability, showing our system could protect a service such as Twitter -- which needs to process 15 million URLs/day -- for a bit under $800/day.", "As social networking sites have risen in popularity, cyber-criminals started to exploit these sites to spread malware and to carry out scams. Previous work has extensively studied the use of fake (Sybil) accounts that attackers set up to distribute spam messages (mostly messages that contain links to scam pages or drive-by download sites). Fake accounts typically exhibit highly anomalous behavior, and hence, are relatively easy to detect. As a response, attackers have started to compromise and abuse legitimate accounts. Compromising legitimate accounts is very effective, as attackers can leverage the trust relationships that the account owners have established in the past. Moreover, compromised accounts are more difficult to clean up because a social network provider cannot simply delete the correspond-", "Social networking has become a popular way for users to meet and interact online. Users spend a significant amount of time on popular social network platforms (such as Facebook, MySpace, or Twitter), storing and sharing a wealth of personal information. This information, as well as the possibility of contacting thousands of users, also attracts the interest of cybercriminals. For example, cybercriminals might exploit the implicit trust relationships between users in order to lure victims to malicious websites. As another example, cybercriminals might find personal information valuable for identity theft or to drive targeted spam campaigns. In this paper, we analyze to which extent spam has entered social networks. More precisely, we analyze how spammers who target social networking sites operate.
To collect the data about spamming activity, we created a large and diverse set of \"honey-profiles\" on three large social networking sites, and logged the kind of contacts and messages that they received. We then analyzed the collected data and identified anomalous behavior of users who contacted our profiles. Based on the analysis of this behavior, we developed techniques to detect spammers in social networks, and we aggregated their messages in large spam campaigns. Our results show that it is possible to automatically identify the accounts used by spammers, and our analysis was used for take-down efforts in a real-world social network. More precisely, during this study, we collaborated with Twitter and correctly detected and deleted 15,857 spam profiles.", "The availability of microblogging, like Twitter and Sina Weibo, makes it a popular platform for spammers to unfairly overpower normal users with unwanted content via social networks, known as social spamming. The rise of social spamming can significantly hinder the use of microblogging systems for effective information dissemination and sharing. Distinct features of microblogging systems present new challenges for social spammer detection. First, unlike traditional social networks, microblogging allows to establish some connections between two parties without mutual consent, which makes it easier for spammers to imitate normal users by quickly accumulating a large number of \"human\" friends. Second, microblogging messages are short, noisy, and unstructured. Traditional social spammer detection methods are not directly applicable to microblogging. In this paper, we investigate how to collectively use network and content information to perform effective social spammer detection in microblogging. In particular, we present an optimization formulation that models the social network and content information in a unified framework. 
Experiments on a real-world Twitter dataset demonstrate that our proposed method can effectively utilize both kinds of information for social spammer detection.", "Online social networks (OSNs) are extremely popular among Internet users. Unfortunately, in the wrong hands, they are also effective tools for executing spam campaigns. In this paper, we present an online spam filtering system that can be deployed as a component of the OSN platform to inspect messages generated by users in real-time. We propose to reconstruct spam messages into campaigns for classification rather than examine them individually. Although campaign identification has been used for offline spam analysis, we apply this technique to aid the online spam detection problem with sufficiently low overhead. Accordingly, our system adopts a set of novel features that effectively distinguish spam campaigns. It drops messages classified as “spam” before they reach the intended recipients, thus protecting them from various kinds of fraud. We evaluate the system using 187 million wall posts collected from Facebook and 17 million tweets collected from Twitter. In different parameter settings, the true positive rate reaches 80.9% while the false positive rate reaches 0.19% in the best case. In addition, it stays accurate for more than 9 months after the initial training phase. Once deployed, it can constantly secure the OSNs without the need for frequent re-training. Finally, tested on a server machine with eight cores (Xeon E5520 2.2GHz) and 16GB memory, the system achieves an average throughput of 1580 messages/sec and an average processing latency of 21.5ms on the Facebook dataset.", "With millions of users tweeting around the world, real time search systems and different types of mining tools are emerging to allow people tracking the repercussion of events and news on Twitter.
However, although appealing as mechanisms to ease the spread of news and allow users to discuss events and post their status, these services open opportunities for new forms of spam. Trending topics, the most talked about items on Twitter at a given point in time, have been seen as an opportunity to generate traffic and revenue. Spammers post tweets containing typical words of a trending topic and URLs, usually obfuscated by URL shorteners, that lead users to completely unrelated websites. This kind of spam can contribute to de-value real time search services unless mechanisms to fight and stop spammers can be found. In this paper we consider the problem of detecting spammers on Twitter. We first collected a large dataset of Twitter that includes more than 54 million users, 1.9 billion links, and almost 1.8 billion tweets. Using tweets related to three famous trending topics from 2009, we construct a large labeled collection of users, manually classified into spammers and non-spammers. We then identify a number of characteristics related to tweet content and user social behavior, which could potentially be used to detect spammers. We used these characteristics as attributes of machine learning process for classifying users as either spammers or non-spammers. Our strategy succeeds at detecting much of the spammers while only a small percentage of non-spammers are misclassified. Approximately 70% of spammers and 96% of non-spammers were correctly classified. Our results also highlight the most important attributes for spam detection on Twitter." ] }
1504.04663
1744901186
Influential users have great potential for accelerating information dissemination and acquisition on Twitter. How to measure the influence of Twitter users has attracted significant academic and industrial attention. Existing influence measurement techniques are vulnerable to sybil users that are thriving on Twitter. Although sybil defenses for online social networks have been extensively investigated, they commonly assume unique mappings from human-established trust relationships to online social associations and thus do not apply to Twitter where users can freely follow each other. This paper presents TrueTop, the first sybil-resilient system to measure the influence of Twitter users. TrueTop is rooted in two observations from real Twitter datasets. First, although non-sybil users may incautiously follow strangers, they tend to be more careful and selective in retweeting, replying to, and mentioning other users. Second, influential users usually get much more retweets, replies, and mentions than non-influential users. Detailed theoretical studies and synthetic simulations show that TrueTop can generate very accurate influence measurement results with strong resilience to sybil attacks.
There is a rich literature on influence measurement on Twitter. Cha @cite_44 found that the numbers of retweets and mentions serve as better metrics than the number of followers in measuring user influence. Bakshy @cite_31 proposed to measure user influence based on a user's ability to post tweets that generate a cascade of retweets. TwitterRank @cite_34 combines link structure and topical similarity between Twitter users and uses a modified PageRank algorithm to calculate user influence. Pal and Counts @cite_4 also proposed a framework to identify topical authorities in microblogging systems. All these schemes are vulnerable to sybil users, who can forge arbitrary information employed by these schemes for influence measurement. Moreover, many metrics used by these schemes have been incorporated into commercial influence measurement tools @cite_14 , and the vulnerability of representative tools to sybil attacks has been experimentally verified in @cite_33 .
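As a rough illustration of the PageRank-style computation that TwitterRank builds on, the following minimal sketch runs power iteration over a weighted directed graph. The damping factor, the uniform teleport vector, and the toy edge weights are assumptions for illustration; TwitterRank's topic-sensitive reweighting is abstracted into the edge weights here:

```python
# Minimal power-iteration PageRank on a weighted directed graph.
# edges: dict mapping (src, dst) -> positive weight.

def pagerank(edges, damping=0.85, iters=100):
    nodes = sorted({n for e in edges for n in e})
    n = len(nodes)
    # Total outgoing weight per node, used to row-normalize the
    # transition matrix.
    out_sum = {}
    for (src, _dst), w in edges.items():
        out_sum[src] = out_sum.get(src, 0.0) + w
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - damping) / n for v in nodes}
        for (src, dst), w in edges.items():
            nxt[dst] += damping * rank[src] * w / out_sum[src]
        # Mass from nodes with no outgoing edges teleports uniformly.
        dangling = sum(rank[v] for v in nodes if v not in out_sum)
        for v in nodes:
            nxt[v] += damping * dangling / n
        rank = nxt
    return rank

# Toy graph: "b" receives most of the weighted in-links.
edges = {("a", "b"): 3.0, ("c", "b"): 1.0, ("b", "a"): 1.0}
scores = pagerank(edges)
```

This also hints at why such schemes are sybil-vulnerable: a sybil region that manufactures edges into a target node directly inflates that node's score.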
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_33", "@cite_44", "@cite_31", "@cite_34" ], "mid": [ "", "1975583660", "2064494041", "1814023381", "1967579779", "2076219102" ], "abstract": [ "", "Content in microblogging systems such as Twitter is produced by tens to hundreds of millions of users. This diversity is a notable strength, but also presents the challenge of finding the most interesting and authoritative authors for any given topic. To address this, we first propose a set of features for characterizing social media authors, including both nodal and topical metrics. We then show how probabilistic clustering over this feature space, followed by a within-cluster ranking procedure, can yield a final list of top authors for a given topic. We present results across several topics, along with results from a user study confirming that our method finds authors who are significantly more interesting and authoritative than those resulting from several baseline conditions. Additionally our algorithm is computationally feasible in near real-time scenarios making it an attractive alternative for capturing the rapidly changing dynamics of microblogs.", "Online social networks (OSNs) are increasingly threatened by social bots which are software-controlled OSN accounts that mimic human users with malicious intentions. A social botnet refers to a group of social bots under the control of a single botmaster, which collaborate to conduct malicious behavior, while at the same time mimicking the interactions among normal OSN users to reduce their individual risk of being detected. We demonstrate the effectiveness and advantages of exploiting a social botnet for spam distribution and digital-influence manipulation through real experiments on Twitter and also trace-driven simulations. 
Our results can help understand the potentially detrimental effects of social botnets and help OSNs improve their bot(net) detection systems.", "Directed links in social media could represent anything from intimate friendships to common interests, or even a passion for breaking news or celebrity gossip. Such directed links determine the flow of information and hence indicate a user's influence on others — a concept that is crucial in sociology and viral marketing. In this paper, using a large amount of data collected from Twitter, we present an in-depth comparison of three measures of influence: indegree, retweets, and mentions. Based on these measures, we investigate the dynamics of user influence across topics and time. We make several interesting observations. First, popular users who have high indegree are not necessarily influential in terms of spawning retweets or mentions. Second, most influential users can hold significant influence over a variety of topics. Third, influence is not gained spontaneously or accidentally, but through concerted effort such as limiting tweets to a single topic. We believe that these findings provide new insights for viral marketing and suggest that topological measures such as indegree alone reveals very little about the influence of a user.", "In this paper we investigate the attributes and relative influence of 1.6M Twitter users by tracking 74 million diffusion events that took place on the Twitter follower graph over a two month interval in 2009. Unsurprisingly, we find that the largest cascades tend to be generated by users who have been influential in the past and who have a large number of followers. We also find that URLs that were rated more interesting and or elicited more positive feelings by workers on Mechanical Turk were more likely to spread. In spite of these intuitive results, however, we find that predictions of which particular user or URL will generate large cascades are relatively unreliable. 
We conclude, therefore, that word-of-mouth diffusion can only be harnessed reliably by targeting large numbers of potential influencers, thereby capturing average effects. Finally, we consider a family of hypothetical marketing strategies, defined by the relative cost of identifying versus compensating potential \"influencers.\" We find that although under some circumstances, the most influential users are also the most cost-effective, under a wide range of plausible assumptions the most cost-effective performance can be realized using \"ordinary influencers\"---individuals who exert average or even less-than-average influence.", "This paper focuses on the problem of identifying influential users of micro-blogging services. Twitter, one of the most notable micro-blogging services, employs a social-networking model called \"following\", in which each user can choose who she wants to \"follow\" to receive tweets from without requiring the latter to give permission first. In a dataset prepared for this study, it is observed that (1) 72.4 of the users in Twitter follow more than 80 of their followers, and (2) 80.5 of the users have 80 of users they are following follow them back. Our study reveals that the presence of \"reciprocity\" can be explained by phenomenon of homophily. Based on this finding, TwitterRank, an extension of PageRank algorithm, is proposed to measure the influence of users in Twitter. TwitterRank measures the influence taking both the topical similarity between users and the link structure into account. Experimental results show that TwitterRank outperforms the one Twitter currently uses and other related algorithms, including the original PageRank and Topic-sensitive PageRank." ] }
1504.04663
1744901186
Influential users have great potential for accelerating information dissemination and acquisition on Twitter. How to measure the influence of Twitter users has attracted significant academic and industrial attention. Existing influence measurement techniques are vulnerable to sybil users that are thriving on Twitter. Although sybil defenses for online social networks have been extensively investigated, they commonly assume unique mappings from human-established trust relationships to online social associations and thus do not apply to Twitter where users can freely follow each other. This paper presents TrueTop, the first sybil-resilient system to measure the influence of Twitter users. TrueTop is rooted in two observations from real Twitter datasets. First, although non-sybil users may incautiously follow strangers, they tend to be more careful and selective in retweeting, replying to, and mentioning other users. Second, influential users usually get much more retweets, replies, and mentions than non-influential users. Detailed theoretical studies and synthetic simulations show that TrueTop can generate very accurate influence measurement results with strong resilience to sybil attacks.
Also related is the research on modelling, measuring, and analyzing the interactions in OSNs, e.g., @cite_32 @cite_10 @cite_13 @cite_49 @cite_45 . Our work is the first to build a weighted directed interaction graph from historical incoming retweets, replies, and mentions on Twitter and use it for identifying influential users.
{ "cite_N": [ "@cite_10", "@cite_32", "@cite_45", "@cite_49", "@cite_13" ], "mid": [ "2117410972", "2124793767", "2033694966", "2110903679", "2047443612" ], "abstract": [ "Understanding how users behave when they connect to social networking sites creates opportunities for better interface design, richer studies of social interactions, and improved design of content distribution systems. In this paper, we present a first of a kind analysis of user workloads in online social networks. Our study is based on detailed clickstream data, collected over a 12-day period, summarizing HTTP sessions of 37,024 users who accessed four popular social networks: Orkut, MySpace, Hi5, and LinkedIn. The data were collected from a social network aggregator website in Brazil, which enables users to connect to multiple social networks with a single authentication. Our analysis of the clickstream data reveals key features of the social network workloads, such as how frequently people connect to social networks and for how long, as well as the types and sequences of activities that users conduct on these sites. Additionally, we crawled the social network topology of Orkut, so that we could analyze user interaction data in light of the social graph. Our data analysis suggests insights into how users interact with friends in Orkut, such as how frequently users visit their friends' or non-immediate friends' pages. In summary, our analysis demonstrates the power of using clickstream data in identifying patterns in social network workloads and social interactions. Our analysis shows that browsing, which cannot be inferred from crawling publicly available data, accounts for 92% of all user activities. 
Consequently, compared to using only crawled data, considering silent interactions like browsing friends' pages increases the measured level of interaction among users.", "Online social networking services are among the most popular Internet services according to Alexa.com and have become a key feature in many Internet services. Users interact through various features of online social networking services: making friend relationships, sharing their photos, and writing comments. These friend relationships are expected to become a key to many other features in web services, such as recommendation engines, security measures, online search, and personalization issues. However, we have very limited knowledge on how much interaction actually takes place over friend relationships declared online. A friend relationship only marks the beginning of online interaction. Does the interaction between users follow the declaration of friend relationship? Does a user interact evenly or lopsidedly with friends? We venture to answer these questions in this work. We construct a network from comments written in guestbooks. A node represents a user and a directed edge a comments from a user to another. We call this network an activity network. Previous work on activity networks include phone-call networks [34, 35] and MSN messenger networks [27]. To our best knowledge, this is the first attempt to compare the explicit friend relationship network and implicit activity network. We have analyzed structural characteristics of the activity network and compared them with the friends network. Though the activity network is weighted and directed, its structure is similar to the friend relationship network. We report that the in-degree and out-degree distributions are close to each other and the social interaction through the guestbook is highly reciprocated. 
When we consider only those links in the activity network that are reciprocated, the degree correlation distribution exhibits much more pronounced assortativity than the friends network and places it close to known social networks. The k-core analysis gives yet another corroborating evidence that the friends network deviates from the known social network and has an unusually large number of highly connected cores. We have delved into the weighted and directed nature of the activity network, and investigated the reciprocity, disparity, and network motifs. We also have observed that peer pressure to stay active online stops building up beyond a certain number of friends. The activity network has shown topological characteristics similar to the friends network, but thanks to its directed and weighted nature, it has allowed us more in-depth analysis of user interaction.", "This paper presents the design and implementation of Souche, a system that recognizes legitimate users early in online services. This early recognition contributes to both usability and security. Souche leverages social connections established over time. Legitimate users help identify other legitimate users through an implicit vouching process, strategically controlled within vouching trees. Souche is lightweight and fully transparent to users. In our evaluation on a real dataset of several hundred million users, Souche can efficiently identify 85% of legitimate users early, while reducing the percentage of falsely admitted malicious users from 44% to 2.4%. Our evaluation further indicates that Souche is robust in the presence of compromised accounts. It is generally applicable to enhance usability and security for a wide class of online services.", "Popular online social networks (OSNs) like Facebook and Twitter are changing the way users communicate and interact with the Internet. 
A deep understanding of user interactions in OSNs can provide important insights into questions of human social behavior, and into the design of social platforms and applications. However, recent studies have shown that a majority of user interactions on OSNs are latent interactions, passive actions such as profile browsing that cannot be observed by traditional measurement techniques. In this paper, we seek a deeper understanding of both visible and latent user interactions in OSNs. For quantifiable data on latent user interactions, we perform a detailed measurement study on Renren, the largest OSN in China with more than 150 million users to date. All friendship links in Renren are public, allowing us to exhaustively crawl a connected graph component of 42 million users and 1.66 billion social links in 2009. Renren also keeps detailed visitor logs for each user profile, and counters for each photo and diary blog entry. We capture detailed histories of profile visits over a period of 90 days for more than 61,000 users in the Peking University Renren network, and use statistics of profile visits to study issues of user profile popularity, reciprocity of profile visits, and the impact of content updates on user popularity. We find that latent interactions are much more prevalent and frequent than visible events, non-reciprocal in nature, and that profile popularity are uncorrelated with the frequency of content updates. Finally, we construct latent interaction graphs as models of user browsing behavior, and compare their structural properties against those of both visible interaction graphs and social graphs.", "Social networks are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. 
While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone. This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs." ] }
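The influence-measurement systems discussed above (TrueTop, TwitterRank) score users by propagating mass over a weighted directed interaction graph built from retweets, replies, and mentions. A minimal PageRank-style sketch of that idea (illustrative only, not any paper's actual algorithm; the function names and the 0.85 damping value are assumptions):

```python
from collections import defaultdict

def build_interaction_graph(events):
    """Aggregate directed interaction events (retweets, replies, mentions)
    into a weighted graph: graph[u][v] = number of interactions u -> v."""
    graph = defaultdict(lambda: defaultdict(int))
    for src, dst in events:
        graph[src][dst] += 1
    return graph

def influence_scores(graph, damping=0.85, iters=50):
    """PageRank-style power iteration: each user passes a damped share of
    its score to the users it interacts with, proportional to edge weight."""
    nodes = set(graph)
    for nbrs in graph.values():
        nodes.update(nbrs)
    n = len(nodes)
    score = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - damping) / n for v in nodes}
        # mass of users with no outgoing interactions is spread uniformly
        dangling = sum(score[v] for v in nodes if not graph.get(v))
        for u, nbrs in graph.items():
            out = sum(nbrs.values())
            for v, w in nbrs.items():
                nxt[v] += damping * score[u] * w / out
        for v in nodes:
            nxt[v] += damping * dangling / n
        score = nxt
    return score
```

Users who receive many incoming interactions, especially from other highly scored users, end up with the highest scores, matching the observation that influential users attract far more retweets, replies, and mentions.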
1504.04818
2337086876
Efficient similarity retrieval from large-scale multimodal databases is pervasive in modern search engines and social networks. To support queries across content modalities, the system should enable cross-modal correlation and computation-efficient indexing. While hashing methods have shown great potential in achieving this goal, current attempts generally fail to learn isomorphic hash codes in a seamless scheme, that is, they embed multiple modalities in a continuous isomorphic space and separately threshold embeddings into binary codes, which incurs substantial loss of retrieval accuracy. In this paper, we approach seamless multimodal hashing by proposing a novel Composite Correlation Quantization (CCQ) model. Specifically, CCQ jointly finds correlation-maximal mappings that transform different modalities into isomorphic latent space, and learns composite quantizers that convert the isomorphic latent features into compact binary codes. An optimization framework is devised to preserve both intra-modal similarity and inter-modal correlation through minimizing both reconstruction and quantization errors, which can be trained from both paired and partially paired data in linear time. A comprehensive set of experiments clearly show the superior effectiveness and efficiency of CCQ against state-of-the-art hashing methods for both unimodal and cross-modal retrieval.
Recently, hashing-based multimodal search is a prevalent research focus in machine learning and information retrieval communities @cite_5 @cite_32 @cite_4 @cite_26 @cite_18 @cite_8 @cite_20 @cite_7 @cite_16 @cite_31 @cite_25 , which enables approximate similarity search on multimedia database with significant speedup and acceptable accuracy. Refer to @cite_17 for a comprehensive survey.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_7", "@cite_8", "@cite_32", "@cite_5", "@cite_31", "@cite_16", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "1996219872", "2049993534", "2159373756", "2203543769", "1964073652", "199018803", "1970055505", "2245692474", "2267050401", "2949235290", "1979644923", "1870428314" ], "abstract": [ "Cross-media hashing, which conducts cross-media retrieval by embedding data from different modalities into a common low-dimensional Hamming space, has attracted intensive attention in recent years. The existing cross-media hashing approaches only aim at learning hash functions to preserve the intra-modality and inter-modality correlations, but do not directly capture the underlying semantic information of the multi-modal data. We propose a discriminative coupled dictionary hashing (DCDH) method in this paper. In DCDH, the coupled dictionary for each modality is learned with side information (e.g., categories). As a result, the coupled dictionaries not only preserve the intra-similarity and inter-correlation among multi-modal data, but also contain dictionary atoms that are semantically discriminative (i.e., the data from the same category is reconstructed by the similar dictionary atoms). To perform fast cross-media retrieval, we learn hash functions which map data from the dictionary space to a low-dimensional Hamming space. Besides, we conjecture that a balanced representation is crucial in cross-media retrieval. We introduce multi-view features on the \"relatively weak\" modalities into DCDH and extend it to multi-view DCDH (MV-DCDH) in order to enhance their representation capability. The experiments on two real-world data sets show that our DCDH and MV-DCDH outperform the state-of-the-art methods significantly on cross-media retrieval.", "In this paper, we present a new multimedia retrieval paradigm to innovate large-scale search of heterogeneous multimedia data. 
It is able to return results of different media types from heterogeneous data sources, e.g., using a query image to retrieve relevant text documents or images from different data sources. This utilizes the widely available data from different sources and caters for the current users' demand of receiving a result list simultaneously containing multiple types of data to obtain a comprehensive understanding of the query's results. To enable large-scale inter-media retrieval, we propose a novel inter-media hashing (IMH) model to explore the correlations among multiple media types from different data sources and tackle the scalability issue. To this end, multimedia data from heterogeneous data sources are transformed into a common Hamming space, in which fast search can be easily implemented by XOR and bit-count operations. Furthermore, we integrate a linear regression model to learn hashing functions so that the hash codes for new data points can be efficiently generated. Experiments conducted on real-world large-scale multimedia datasets demonstrate the superiority of our proposed method compared with state-of-the-art techniques.", "Most existing cross-modal hashing methods suffer from the scalability issue in the training phase. In this paper, we propose a novel cross-modal hashing approach with a linear time complexity to the training data size, to enable scalable indexing for multimedia search across multiple modals. Taking both the intra-similarity in each modal and the inter-similarity across different modals into consideration, the proposed approach aims at effectively learning hash functions from large-scale training datasets. More specifically, for each modal, we first partition the training data into @math clusters and then represent each training data point with its distances to @math centroids of the clusters. 
Interestingly, such a k-dimensional data representation can reduce the time complexity of the training phase from traditional O(n^2) or higher to O(n), where @math is the training data size, leading to practical learning on large-scale datasets. We further prove that this new representation preserves the intra-similarity in each modal. To preserve the inter-similarity among data points across different modals, we transform the derived data representations into a common binary subspace in which binary codes from all the modals are \"consistent\" and comparable. The transformation simultaneously outputs the hash functions for all modals, which are used to convert unseen data into binary codes. Given a query of one modal, it is first mapped into the binary codes using the modal's hash functions, followed by matching the database binary codes of any other modals. Experimental results on two benchmark datasets confirm the scalability and the effectiveness of the proposed approach in comparison with the state of the art.", "Due to its low storage cost and fast query speed, hashing has been widely adopted for similarity search in multimedia data. In particular, more and more attention has been paid to multimodal hashing for search in multimedia data with multiple modalities, such as images with tags. Typically, supervised information of semantic labels is also available for the data points in many real applications. Hence, many supervised multimodal hashing (SMH) methods have been proposed to utilize such semantic labels to further improve the search accuracy. However, the training time complexity of most existing SMH methods is too high, which makes them unscalable to large-scale datasets. In this paper, a novel SMH method, called semantic correlation maximization (SCM), is proposed to seamlessly integrate semantic labels into the hashing learning procedure for large-scale data modeling. 
Experimental results on two real-world datasets show that SCM can significantly outperform the state-of-the-art SMH methods, in terms of both accuracy and scalability.", "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter @math is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.", "Many applications in Multilingual and Multimodal Information Access involve searching large databases of high dimensional data objects with multiple (conditionally independent) views. In this work we consider the problem of learning hash functions for similarity search across the views for such applications. 
We propose a principled method for learning a hash function for each view given a set of multiview training data objects. The hash functions map similar objects to similar codes across the views thus enabling cross-view similarity search. We present results from an extensive empirical study of the proposed approach which demonstrate its effectiveness on Japanese language People Search and Multilingual People Search problems.", "Visual understanding is often based on measuring similarity between observations. Learning similarities specific to a certain perception task from a set of examples has been shown advantageous in various computer vision and pattern recognition problems. In many important applications, the data that one needs to compare come from different representations or modalities, and the similarity between such data operates on objects that may have different and often incommensurable structure and dimensionality. In this paper, we propose a framework for supervised similarity learning based on embedding the input data from two arbitrary spaces into the Hamming space. The mapping is expressed as a binary classification problem with positive and negative examples, and can be efficiently learned using boosting algorithms. The utility and efficiency of such a generic approach is demonstrated on several challenging applications including cross-representation shape retrieval and alignment of multi-modal medical images.", "Hashing approach becomes popular for fast similarity search in many large scale applications. Real world data are usually with multiple modalities or having different representations from multiple sources. Various hashing methods have been proposed to generate compact binary codes from multi-modal data. However, most existing multimodal hashing techniques assume that each data example appears in all modalities, or at least there is one modality containing all data examples. 
But in real applications, it is often the case that every modality suffers from the missing of some data and therefore results in many partial examples, i.e., examples with some modalities missing. In this paper, we present a novel hashing approach to deal with Partial Multi-Modal data. In particular, the hashing codes are learned by simultaneously ensuring the data consistency among different modalities via latent subspace learning, and preserving data similarity within the same modality through graph Laplacian. We then further improve the codes via orthogonal rotation based on the orthogonal invariant property of our formulation. Experiments on two multi-modal datasets demonstrate the superior performance of the proposed approach over several state-of-the-art multi-modal hashing methods.", "Cross-modal hashing is designed to facilitate fast search across domains. In this work, we present a cross-modal hashing approach, called quantized correlation hashing (QCH), which takes into consideration the quantization loss over domains and the relation between domains. Unlike previous approaches that separate the optimization of the quantizer independent of maximization of domain correlation, our approach simultaneously optimizes both processes. The underlying relation between the domains that describes the same objects is established via maximizing the correlation between the hash codes across the domains. The resulting multi-modal objective function is transformed to a unimodal formalization, which is optimized through an alternative procedure. Experimental results on three real world datasets demonstrate that our approach outperforms the state-of-the-art multi-modal hashing methods.", "With the rapid growth of web images, hashing has received increasing interests in large scale image retrieval. Research efforts have been devoted to learning compact binary codes that preserve semantic similarity based on labels. 
However, most of these hashing methods are designed to handle simple binary similarity. The complex multilevel semantic structure of images associated with multiple labels have not yet been well explored. Here we propose a deep semantic ranking based method for learning hash functions that preserve multilevel semantic similarity between multi-label images. In our approach, deep convolutional neural network is incorporated into hash functions to jointly learn feature representations and mappings from them to hash codes, which avoids the limitation of semantic representation power of hand-crafted features. Meanwhile, a ranking list that encodes the multilevel similarity information is employed to guide the learning of such deep hash functions. An effective scheme based on surrogate loss is used to solve the intractable optimization problem of nonsmooth and multivariate ranking measures involved in the learning procedure. Experimental results show the superiority of our proposed approach over several state-of-the-art hashing methods in term of ranking evaluation metrics when tested on multi-label image datasets.", "Cross media retrieval engines have gained massive popularity with rapid development of the Internet. Users may perform queries in a corpus consisting of audio, video, and textual information. To make such systems practically possible for large mount of multimedia data, two critical issues must be carefully considered: (a) reduce the storage as much as possible; (b) model the relationship of the heterogeneous media data. Recently academic community have proved that encoding the data into compact binary codes can drastically reduce the storage and computational cost. However, it is still unclear how to integrate multiple information sources properly into the binary code encoding scheme. In this paper, we study the cross media indexing problem by learning the discriminative hashing functions to map the multi-view datum into a shared hamming space. 
Not only meaningful within-view similarity is required to be preserved, we also incorporate the between-view correlations into the encoding scheme, where we map the similar points close together and push apart the dissimilar ones. To this end, we propose a novel hashing algorithm called Iterative Multi-View Hashing (IMVH) by taking these information into account simultaneously. To solve this joint optimization problem efficiently, we further develop an iterative scheme to deal with it by using a more flexible quantization model. In particular, an optimal alignment is learned to maintain the between-view similarity in the encoding scheme. And the binary codes are obtained by directly solving a series of binary label assignment problems without continuous relaxation to avoid the unnecessary quantization loss. In this way, the proposed algorithm not only greatly improves the retrieval accuracy but also performs strong robustness. An extensive set of experiments clearly demonstrates the superior performance of the proposed method against the state-of-the-art techniques on both multimodal and unimodal retrieval tasks.", "Similarity search (nearest neighbor search) is a problem of pursuing the data items whose distances to a query item are the smallest from a large database. Various methods have been developed to address this problem, and recently a lot of efforts have been devoted to approximate search. In this paper, we present a survey on one of the main solutions, hashing, which has been widely studied since the pioneering work locality sensitive hashing. We divide the hashing algorithms two main categories: locality sensitive hashing, which designs hash functions without exploring the data distribution and learning to hash, which learns hash functions according the data distribution, and review them from various aspects, including hash function design and distance measure and search scheme in the hash coding space." ] }
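Several of the abstracts above (e.g., the IMH one) note that once multimodal data are mapped into a common Hamming space, similarity search reduces to XOR and bit-count operations on binary codes. A minimal sketch of that primitive (illustrative; function names are assumptions, not any paper's implementation):

```python
def hamming_distance(a: int, b: int) -> int:
    """XOR marks the bits where two binary codes differ;
    counting the set bits gives the Hamming distance."""
    return bin(a ^ b).count("1")  # popcount; int.bit_count() on Python 3.10+

def hamming_knn(query: int, codes, k: int = 5):
    """Brute-force k-nearest-neighbour search over a list of binary codes,
    returning the indices of the k codes closest to the query."""
    order = sorted(range(len(codes)),
                   key=lambda i: hamming_distance(query, codes[i]))
    return order[:k]
```

Because XOR and popcount are single machine instructions on modern CPUs, even this brute-force scan is far cheaper than distance computation in the original continuous feature space, which is the speedup these hashing methods exploit.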
1504.04818
2337086876
Efficient similarity retrieval from large-scale multimodal databases is pervasive in modern search engines and social networks. To support queries across content modalities, the system should enable cross-modal correlation and computation-efficient indexing. While hashing methods have shown great potential in achieving this goal, current attempts generally fail to learn isomorphic hash codes in a seamless scheme, that is, they embed multiple modalities in a continuous isomorphic space and separately threshold embeddings into binary codes, which incurs substantial loss of retrieval accuracy. In this paper, we approach seamless multimodal hashing by proposing a novel Composite Correlation Quantization (CCQ) model. Specifically, CCQ jointly finds correlation-maximal mappings that transform different modalities into isomorphic latent space, and learns composite quantizers that convert the isomorphic latent features into compact binary codes. An optimization framework is devised to preserve both intra-modal similarity and inter-modal correlation through minimizing both reconstruction and quantization errors, which can be trained from both paired and partially paired data in linear time. A comprehensive set of experiments clearly show the superior effectiveness and efficiency of CCQ against state-of-the-art hashing methods for both unimodal and cross-modal retrieval.
Existing multimodal hashing methods can be organized into two categories: supervised methods and unsupervised methods. CMSSH @cite_5 , SCM @cite_7 , QCH @cite_16 , and SePH @cite_23 are supervised hashing methods that require labeled pairs to indicate if the objects from different modalities are similar (positive) or dissimilar (negative). As supervised information is usually unavailable in many applications, the deployment of these methods may be severely restricted. CVH @cite_32 , IMH @cite_26 , MSAE @cite_19 and CorrAE @cite_8 are unsupervised hashing methods applicable to the most general multimodal retrieval case given that paired data are available, while our proposed CCQ model falls into this category. IMH @cite_26 is an extension of spectral hashing @cite_29 to multimodal data, which is restricted by the training burden since constructing and eigendecomposing the similarity matrices require @math . While CVH @cite_32 tackles the scalability issue, it does not jointly maximize cross-modality correlation and preserve intra-modality similarity. MSAE @cite_19 and CorrAE @cite_8 can capture both intra-modal similarity and inter-modal correlation by deep autoencoders, but they require spectral hashing or sign thresholding for obtaining binary codes from the continuous embeddings, which will give rise to uncontrollable quantization errors @cite_11 @cite_0 .
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_8", "@cite_29", "@cite_32", "@cite_0", "@cite_19", "@cite_23", "@cite_5", "@cite_16", "@cite_11" ], "mid": [ "2049993534", "2203543769", "1964073652", "", "199018803", "2124509324", "2251084241", "1922199343", "1970055505", "2267050401", "2084363474" ], "abstract": [ "In this paper, we present a new multimedia retrieval paradigm to innovate large-scale search of heterogeneous multimedia data. It is able to return results of different media types from heterogeneous data sources, e.g., using a query image to retrieve relevant text documents or images from different data sources. This utilizes the widely available data from different sources and caters for the current users' demand of receiving a result list simultaneously containing multiple types of data to obtain a comprehensive understanding of the query's results. To enable large-scale inter-media retrieval, we propose a novel inter-media hashing (IMH) model to explore the correlations among multiple media types from different data sources and tackle the scalability issue. To this end, multimedia data from heterogeneous data sources are transformed into a common Hamming space, in which fast search can be easily implemented by XOR and bit-count operations. Furthermore, we integrate a linear regression model to learn hashing functions so that the hash codes for new data points can be efficiently generated. Experiments conducted on real-world large-scale multimedia datasets demonstrate the superiority of our proposed method compared with state-of-the-art techniques.", "Due to its low storage cost and fast query speed, hashing has been widely adopted for similarity search in multimedia data. In particular, more and more attention has been paid to multimodal hashing for search in multimedia data with multiple modalities, such as images with tags. Typically, supervised information of semantic labels is also available for the data points in many real applications. 
Hence, many supervised multimodal hashing (SMH) methods have been proposed to utilize such semantic labels to further improve the search accuracy. However, the training time complexity of most existing SMH methods is too high, which makes them unscalable to large-scale datasets. In this paper, a novel SMH method, called semantic correlation maximization (SCM), is proposed to seamlessly integrate semantic labels into the hashing learning procedure for large-scale data modeling. Experimental results on two real-world datasets show that SCM can significantly outperform the state-of-the-art SMH methods, in terms of both accuracy and scalability.", "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter @math is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. 
We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.", "", "Many applications in Multilingual and Multimodal Information Access involve searching large databases of high dimensional data objects with multiple (conditionally independent) views. In this work we consider the problem of learning hash functions for similarity search across the views for such applications. We propose a principled method for learning a hash function for each view given a set of multiview training data objects. The hash functions map similar objects to similar codes across the views thus enabling cross-view similarity search. We present results from an extensive empirical study of the proposed approach which demonstrate its effectiveness on Japanese language People Search and Multilingual People Search problems.", "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.", "Multi-modal retrieval is emerging as a new search paradigm that enables seamless information retrieval from various types of media. 
For example, users can simply snap a movie poster to search relevant reviews and trailers. To solve the problem, a set of mapping functions are learned to project high-dimensional features extracted from data of different media types into a common low-dimensional space so that metric distance measures can be applied. In this paper, we propose an effective mapping mechanism based on deep learning (i.e., stacked auto-encoders) for multi-modal retrieval. Mapping functions are learned by optimizing a new objective function, which captures both intra-modal and inter-modal semantic relationships of data from heterogeneous sources effectively. Compared with previous works which require a substantial amount of prior knowledge such as similarity matrices of intra-modal data and ranking examples, our method requires little prior knowledge. Given a large training dataset, we split it into mini-batches and continually adjust the mapping functions for each batch of input. Hence, our method is memory efficient with respect to the data volume. Experiments on three real datasets illustrate that our proposed method achieves significant improvement in search accuracy over the state-of-the-art methods.", "With benefits of low storage costs and high query speeds, hashing methods are widely researched for efficiently retrieving large-scale data, which commonly contains multiple views, e.g. a news report with images, videos and texts. In this paper, we study the problem of cross-view retrieval and propose an effective Semantics-Preserving Hashing method, termed SePH. Given semantic affinities of training data as supervised information, SePH transforms them into a probability distribution and approximates it with to-be-learnt hash codes in Hamming space via minimizing the Kullback-Leibler divergence. Then kernel logistic regression with a sampling strategy is utilized to learn the nonlinear projections from features in each view to the learnt hash codes. 
And for any unseen instance, predicted hash codes and their corresponding output probabilities from observed views are utilized to determine its unified hash code, using a novel probabilistic approach. Extensive experiments conducted on three benchmark datasets well demonstrate the effectiveness and reasonableness of SePH.", "Visual understanding is often based on measuring similarity between observations. Learning similarities specific to a certain perception task from a set of examples has been shown advantageous in various computer vision and pattern recognition problems. In many important applications, the data that one needs to compare come from different representations or modalities, and the similarity between such data operates on objects that may have different and often incommensurable structure and dimensionality. In this paper, we propose a framework for supervised similarity learning based on embedding the input data from two arbitrary spaces into the Hamming space. The mapping is expressed as a binary classification problem with positive and negative examples, and can be efficiently learned using boosting algorithms. The utility and efficiency of such a generic approach is demonstrated on several challenging applications including cross-representation shape retrieval and alignment of multi-modal medical images.", "Cross-modal hashing is designed to facilitate fast search across domains. In this work, we present a cross-modal hashing approach, called quantized correlation hashing (QCH), which takes into consideration the quantization loss over domains and the relation between domains. Unlike previous approaches that separate the optimization of the quantizer independent of maximization of domain correlation, our approach simultaneously optimizes both processes. The underlying relation between the domains that describes the same objects is established via maximizing the correlation between the hash codes across the domains. 
The resulting multi-modal objective function is transformed to a unimodal formalization, which is optimized through an alternative procedure. Experimental results on three real world datasets demonstrate that our approach outperforms the state-of-the-art multi-modal hashing methods.", "This paper addresses the problem of learning similarity-preserving binary codes for efficient retrieval in large-scale image collections. We propose a simple and efficient alternating minimization scheme for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube. This method, dubbed iterative quantization (ITQ), has connections to multi-class spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). Our experiments show that the resulting binary coding schemes decisively outperform several other state-of-the-art methods." ] }
1504.04818
2337086876
Efficient similarity retrieval from large-scale multimodal database is pervasive in modern search engines and social networks. To support queries across content modalities, the system should enable cross-modal correlation and computation-efficient indexing. While hashing methods have shown great potential in achieving this goal, current attempts generally fail to learn isomorphic hash codes in a seamless scheme, that is, they embed multiple modalities in a continuous isomorphic space and separately threshold embeddings into binary codes, which incurs substantial loss of retrieval accuracy. In this paper, we approach seamless multimodal hashing by proposing a novel Composite Correlation Quantization (CCQ) model. Specifically, CCQ jointly finds correlation-maximal mappings that transform different modalities into isomorphic latent space, and learns composite quantizers that convert the isomorphic latent features into compact binary codes. An optimization framework is devised to preserve both intra-modal similarity and inter-modal correlation through minimizing both reconstruction and quantization errors, which can be trained from both paired and partially paired data in linear time. A comprehensive set of experiments clearly shows the superior effectiveness and efficiency of CCQ against state-of-the-art hashing methods for both unimodal and cross-modal retrieval.
A crucial problem with existing methods is that they essentially work in a separated two-step pipeline: first embed multimodal data into a common latent space and then threshold the continuous embeddings into binary codes of the Hamming space. Such conversion from real-valued features to discrete codes may result in substantial information loss, making the continuous latent space suboptimal for the binary codes and the binary codes suboptimal for retrieval @cite_1 . Furthermore, directly binarizing the latent representation may lead to unbalanced encoding schemes, as shown in @cite_6 @cite_14 . Although IMVH @cite_20 learns multimodal hash functions using a graph-cut quantizer instead of sign thresholding, the quantizer solves a fast approximation of an energy function with orthogonal constraints and incurs large quantization errors and unbalanced codes. CCQ approaches this problem by learning the modality-consistent latent space and balanced binary codes in a principled framework.
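One classic remedy for the thresholding loss discussed above is iterative quantization (ITQ): rotate zero-centered embeddings so they sit closer to the vertices of the binary hypercube before binarizing. The sketch below is a minimal version of that alternating minimization on synthetic data (dimensions and data are illustrative); initializing the rotation at the identity guarantees the quantization error never exceeds that of plain sign thresholding.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 16))
X -= X.mean(axis=0)               # ITQ assumes zero-centered embeddings

def quant_err(Z):
    # Squared distance between embeddings and their sign codes.
    return float(np.linalg.norm(np.sign(Z) - Z) ** 2)

err_sign = quant_err(X)           # baseline: direct sign thresholding

# ITQ-style alternation: fix R and update codes, then fix codes and
# solve the orthogonal Procrustes problem for R via an SVD.
R = np.eye(16)                    # identity init gives a clean comparison
for _ in range(50):
    B = np.sign(X @ R)                    # codes closest to rotated data
    U, _, Vt = np.linalg.svd(X.T @ B)     # Procrustes: maximize tr(R^T X^T B)
    R = U @ Vt

err_itq = quant_err(X @ R)
print(err_sign, err_itq)          # rotated codes quantize with lower error
```

Each step of the loop never increases the objective, which is why jointly optimizing the embedding and the quantizer, as CCQ advocates, dominates the separated two-step pipeline.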
{ "cite_N": [ "@cite_20", "@cite_14", "@cite_1", "@cite_6" ], "mid": [ "1979644923", "2064797228", "138284169", "2171626009" ], "abstract": [ "Cross media retrieval engines have gained massive popularity with rapid development of the Internet. Users may perform queries in a corpus consisting of audio, video, and textual information. To make such systems practically possible for large mount of multimedia data, two critical issues must be carefully considered: (a) reduce the storage as much as possible; (b) model the relationship of the heterogeneous media data. Recently academic community have proved that encoding the data into compact binary codes can drastically reduce the storage and computational cost. However, it is still unclear how to integrate multiple information sources properly into the binary code encoding scheme. In this paper, we study the cross media indexing problem by learning the discriminative hashing functions to map the multi-view datum into a shared hamming space. Not only meaningful within-view similarity is required to be preserved, we also incorporate the between-view correlations into the encoding scheme, where we map the similar points close together and push apart the dissimilar ones. To this end, we propose a novel hashing algorithm called Iterative Multi-View Hashing (IMVH) by taking these information into account simultaneously. To solve this joint optimization problem efficiently, we further develop an iterative scheme to deal with it by using a more flexible quantization model. In particular, an optimal alignment is learned to maintain the between-view similarity in the encoding scheme. And the binary codes are obtained by directly solving a series of binary label assignment problems without continuous relaxation to avoid the unnecessary quantization loss. In this way, the proposed algorithm not only greatly improves the retrieval accuracy but also performs strong robustness. 
An extensive set of experiments clearly demonstrates the superior performance of the proposed method against the state-of-the-art techniques on both multimodal and unimodal retrieval tasks.", "In recent years, both hashing-based similarity search and multimodal similarity search have aroused much research interest in the data mining and other communities. While hashing-based similarity search seeks to address the scalability issue, multimodal similarity search deals with applications in which data of multiple modalities are available. In this paper, our goal is to address both issues simultaneously. We propose a probabilistic model, called multimodal latent binary embedding (MLBE), to learn hash functions from multimodal data automatically. MLBE regards the binary latent factors as hash codes in a common Hamming space. Given data from multiple modalities, we devise an efficient algorithm for the learning of binary latent factors which corresponds to hash function learning. Experimental validation of MLBE has been conducted using both synthetic data and two realistic data sets. Experimental results show that MLBE compares favorably with two state-of-the-art models.", "This paper presents a novel compact coding approach, composite quantization, for approximate nearest neighbor search. The idea is to use the composition of several elements selected from the dictionaries to accurately approximate a vector and to represent the vector by a short code composed of the indices of the selected elements. To efficiently compute the approximate distance of a query to a database vector using the short code, we introduce an extra constraint, constant inter-dictionary-element-product, resulting in that approximating the distance only using the distance of the query to each selected element is enough for nearest neighbor search. 
Experimental comparisonwith state-of-the-art algorithms over several benchmark datasets demonstrates the efficacy of the proposed approach.", "Hashing-based methods provide a very promising approach to large-scale similarity search. To obtain compact hash codes, a recent trend seeks to learn the hash functions from data automatically. In this paper, we study hash function learning in the context of multimodal data. We propose a novel multimodal hash function learning method, called Co-Regularized Hashing (CRH), based on a boosted co-regularization framework. The hash functions for each bit of the hash codes are learned by solving DC (difference of convex functions) programs, while the learning for multiple bits proceeds via a boosting procedure so that the bias introduced by the hash functions can be sequentially minimized. We empirically compare CRH with two state-of-the-art multimodal hash function learning methods on two publicly available data sets." ] }
1504.04871
2950200672
Most of the approaches for discovering visual attributes in images demand significant supervision, which is cumbersome to obtain. In this paper, we aim to discover visual attributes in a weakly supervised setting that is commonly encountered with contemporary image search engines. Deep Convolutional Neural Networks (CNNs) have enjoyed remarkable success in vision applications recently. However, in a weakly supervised scenario, widely used CNN training procedures do not learn a robust model for predicting multiple attribute labels simultaneously. The primary reason is that the attributes highly co-occur within the training data. To ameliorate this limitation, we propose Deep-Carving, a novel training procedure with CNNs, that helps the net efficiently carve itself for the task of multiple attribute prediction. During training, the responses of the feature maps are exploited in an ingenious way to provide the net with multiple pseudo-labels (for training images) for subsequent iterations. The process is repeated periodically after a fixed number of iterations, and enables the net to carve itself iteratively for efficiently disentangling features. Additionally, we contribute a noun-adjective pairing inspired Natural Scenes Attributes Dataset to the research community, CAMIT-NSAD, containing a number of co-occurring attributes within a noun category. We describe, in detail, salient aspects of this dataset. Our experiments on CAMIT-NSAD and the SUN Attributes Dataset, with weak supervision, clearly demonstrate that the Deep-Carved CNNs consistently achieve considerable improvement in the precision of attribute prediction over popular baseline methods.
The main idea behind @cite_30 @cite_37 is to use an Entropy Minimization method to create low-density separation between the features obtained from deep stacked auto-encoders. Their work can be deemed to be nearest to our proposed approach; however, we do not deal with unlabelled data, and tend to follow a more comprehensive approach for attribute prediction. @cite_24 proposes a weakly supervised graph learning method for visual ranking of attributes, but the graph formulation is heavily dependent on the attribute co-occurrence statistics, which can often be inconsistent in practical scenarios. Researchers in @cite_15 attempt to leverage weak attributes in images for better image categorization, but expect all weak attributes in the training data to be labelled. Authors in @cite_39 solve the partial labelling problem, where a consideration set of labels is provided for a training image, out of which only one is correct. However, as depicted in Fig , each training image in our problem setting can have more than one correct (but unlabelled) attribute.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_39", "@cite_24", "@cite_15" ], "mid": [ "2145494108", "", "2158681777", "2119939615", "2124805727" ], "abstract": [ "We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. In this framework, we motivate minimum entropy regularization, which enables to incorporate unlabeled data in the standard supervised learning. Our approach includes other approaches to the semi-supervised problem as particular or limiting cases. A series of experiments illustrates that the proposed solution benefits from unlabeled data. The method challenges mixture models when the data are sampled from the distribution class spanned by the generative model. The performances are definitely in favor of minimum entropy regularization when generative models are misspecified, and the weighting of unlabeled data provides robustness to the violation of the \"cluster assumption\". Finally, we also illustrate that the method can also be far superior to manifold learning in high dimension spaces.", "", "We address the problem of partially-labeled multiclass classification, where instead of a single label per instance, the algorithm is given a candidate set of labels, only one of which is correct. Our setting is motivated by a common scenario in many image and video collections, where only partial access to labels is available. The goal is to learn a classifier that can disambiguate the partially-labeled training instances, and generalize to unseen data. We define an intuitive property of the data distribution that sharply characterizes the ability to learn in this setting and show that effective learning is possible even when all the data is only partially labeled. Exploiting this property of the data, we propose a convex learning formulation based on minimization of a loss function appropriate for the partial label setting. 
We analyze the conditions under which our loss function is asymptotically consistent, as well as its generalization and transductive performance. We apply our framework to identifying faces culled from web news sources and to naming characters in TV series and movies; in particular, we annotated and experimented on a very large video data set and achieve 6 error for character naming on 16 episodes of the TV series Lost.", "Visual reranking has been widely deployed to refine the quality of conventional content-based image retrieval engines. The current trend lies in employing a crowd of retrieved results stemming from multiple feature modalities to boost the overall performance of visual reranking. However, a major challenge pertaining to current reranking methods is how to take full advantage of the complementary property of distinct feature modalities. Given a query image and one feature modality, a regular visual reranking framework treats the top-ranked images as pseudo positive instances which are inevitably noisy, difficult to reveal this complementary property, and thus lead to inferior ranking performance. This paper proposes a novel image reranking approach by introducing a Co-Regularized Multi-Graph Learning (Co-RMGL) framework, in which the intra-graph and inter-graph constraints are simultaneously imposed to encode affinities in a single graph and consistency across different graphs. Moreover, weakly supervised learning driven by image attributes is performed to denoise the pseudo-labeled instances, thereby highlighting the unique strength of individual feature modality. Meanwhile, such learning can yield a few anchors in graphs that vitally enable the alignment and fusion of multiple graphs. As a result, an edge weight matrix learned from the fused graph automatically gives the ordering to the initially retrieved results. 
We evaluate our approach on four benchmark image retrieval datasets, demonstrating a significant performance gain over the state-of-the-arts.", "Attribute-based query offers an intuitive way of image retrieval, in which users can describe the intended search targets with understandable attributes. In this paper, we develop a general and powerful framework to solve this problem by leveraging a large pool of weak attributes comprised of automatic classifier scores or other mid-level representations that can be easily acquired with little or no human labor. We extend the existing retrieval model of modeling dependency within query attributes to modeling dependency of query attributes on a large pool of weak attributes, which is more expressive and scalable. To efficiently learn such a large dependency model without overfitting, we further propose a semi-supervised graphical model to map each multiattribute query to a subset of weak attributes. Through extensive experiments over several attribute benchmarks, we demonstrate consistent and significant performance improvements over the state-of-the-art techniques. In addition, we compile the largest multi-attribute image retrieval dateset to date, including 126 fully labeled query attributes and 6,000 weak attributes of 0.26 million images." ] }
1504.04871
2950200672
Most of the approaches for discovering visual attributes in images demand significant supervision, which is cumbersome to obtain. In this paper, we aim to discover visual attributes in a weakly supervised setting that is commonly encountered with contemporary image search engines. Deep Convolutional Neural Networks (CNNs) have enjoyed remarkable success in vision applications recently. However, in a weakly supervised scenario, widely used CNN training procedures do not learn a robust model for predicting multiple attribute labels simultaneously. The primary reason is that the attributes highly co-occur within the training data. To ameliorate this limitation, we propose Deep-Carving, a novel training procedure with CNNs, that helps the net efficiently carve itself for the task of multiple attribute prediction. During training, the responses of the feature maps are exploited in an ingenious way to provide the net with multiple pseudo-labels (for training images) for subsequent iterations. The process is repeated periodically after a fixed number of iterations, and enables the net to carve itself iteratively for efficiently disentangling features. Additionally, we contribute a noun-adjective pairing inspired Natural Scenes Attributes Dataset to the research community, CAMIT-NSAD, containing a number of co-occurring attributes within a noun category. We describe, in detail, salient aspects of this dataset. Our experiments on CAMIT-NSAD and the SUN Attributes Dataset, with weak supervision, clearly demonstrate that the Deep-Carved CNNs consistently achieve considerable improvement in the precision of attribute prediction over popular baseline methods.
We introduce a noun-adjective pairing inspired Natural Scenes Attributes Dataset (CAMIT-NSAD) having a total of 22 pairs, with each noun category containing a number of co-occurring attributes. In terms of the number of images, the dataset is about three times bigger than the SUN Attributes dataset @cite_23 . We introduce Deep-Carving, a novel training procedure with CNNs, that enables the net to efficiently carve itself for the task of multiple attribute prediction.
{ "cite_N": [ "@cite_23" ], "mid": [ "2070148066" ], "abstract": [ "In this paper we present the first large-scale scene attribute database. First, we perform crowd-sourced human studies to find a taxonomy of 102 discriminative attributes. Next, we build the “SUN attribute database” on top of the diverse SUN categorical database. Our attribute database spans more than 700 categories and 14,000 images and has potential for use in high-level scene understanding and fine-grained scene recognition. We use our dataset to train attribute classifiers and evaluate how well these relatively simple classifiers can recognize a variety of attributes related to materials, surface properties, lighting, functions and affordances, and spatial envelope properties." ] }
1504.04596
1877923259
Relevance and diversity are both crucial criteria for an effective search system. In this paper, we propose a unified learning framework for simultaneously optimizing both relevance and diversity. Specifically, the problem is formalized as a structural learning framework optimizing Diversity-Correlated Evaluation Measures (DCEM), such as ERR-IA, α-NDCG and NRBP. Within this framework, the discriminant function is defined to be a bi-criteria objective maximizing the sum of the relevance scores and dissimilarities (or diversity) among the documents. Relevance and diversity features are utilized to define the relevance scores and dissimilarities, respectively. Compared with traditional methods, the advantages of our approach lie in that: (1) Directly optimizing DCEM as the loss function is more fundamental for the task; (2) Our framework does not rely on explicit diversity information such as subtopics, thus is more adaptive to real applications; (3) The representation of diversity as the feature-based scoring function is more flexible to incorporate rich diversity-based features into the learning framework. Extensive experiments on the public TREC datasets show that our approach significantly outperforms state-of-the-art diversification approaches, which validate the above advantages.
Diversity-correlated methods can be mainly divided into two categories: implicit approaches and explicit approaches @cite_26 . The implicit methods assume that similar documents cover similar aspects and model inter-document dependencies. For example, the Maximal Marginal Relevance (MMR) method @cite_10 iteratively selects the candidate document with the highest similarity to the user query and the lowest similarity to the already selected documents, in order to promote novelty. In fact, most of the existing approaches are somehow inspired by the MMR method. @cite_1 select documents with high divergence from one language model to another based on the risk minimization consideration. The explicit methods explicitly model aspects of a query and then select documents that cover different aspects. The aspects of a user query can be obtained from a taxonomy @cite_21 @cite_18 @cite_19 , top retrieved documents @cite_22 , query reformulations @cite_14 @cite_20 , or multiple external resources @cite_17 . Overall, the explicit methods have shown better experimental performance compared with implicit methods.
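The greedy MMR selection described above can be sketched in a few lines. Using cosine similarity for both the relevance and the redundancy terms, and λ = 0.7, are illustrative choices; the original formulation leaves both similarity measures and the trade-off parameter open.

```python
import numpy as np

def mmr_rank(query, docs, k=5, lam=0.7):
    """Greedy MMR: pick documents maximizing
    lam * sim(d, q) - (1 - lam) * max_{d' selected} sim(d, d')."""
    rel = docs @ query                    # relevance to the query
    sims = docs @ docs.T                  # document-document similarities
    selected, remaining = [], list(range(len(docs)))
    while remaining and len(selected) < k:
        def mmr_score(i):
            redundancy = max((sims[i, j] for j in selected), default=0.0)
            return lam * rel[i] - (1 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy collection of unit-normalized document vectors and a query.
rng = np.random.default_rng(0)
D = rng.normal(size=(20, 8))
D /= np.linalg.norm(D, axis=1, keepdims=True)
q = D[0] + 0.1 * rng.normal(size=8)
q /= np.linalg.norm(q)
print(mmr_rank(q, D, k=5))
```

With an empty selected set the redundancy term vanishes, so the first pick is the most relevant document; every later pick trades relevance against similarity to what is already shown, which is the "promote novelty" behavior the passage describes.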
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_14", "@cite_22", "@cite_21", "@cite_1", "@cite_19", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "", "2150662514", "2100419635", "2104373244", "1993320088", "2197919320", "1969077952", "2083305840", "2023188792", "" ], "abstract": [ "", "Queries submitted to a retrieval system are often ambiguous. In such a situation, a sensible strategy is to diversify the ranking of results to be retrieved, in the hope that users will find at least one of these results to be relevant to their information need. In this paper, we introduce xQuAD, a novel framework for search result diversification that builds such a diversified ranking by explicitly accounting for the relationship between documents retrieved for the original query and the possible aspects underlying this query, in the form of sub-queries. We evaluate the effectiveness of xQuAD using a standard TREC collection. The results show that our framework markedly outperforms state-of-the-art diversification approaches under a simulated best-case scenario. Moreover, we show that its effectiveness can be further improved by estimating the relative importance of each identified sub-query. Finally, we show that our framework can still outperform the simulated best-case scenario of the state-of-the-art diversification approaches using sub-queries automatically derived from the baseline document ranking itself.", "We present and evaluate methods for diversifying search results to improve personalized web search. A common personalization approach involves reranking the top N search results such that documents likely to be preferred by the user are presented higher. The usefulness of reranking is limited in part by the number and diversity of results considered. We propose three methods to increase the diversity of the top results and evaluate the effectiveness of these methods.", "Traditional models of information retrieval assume documents are independently relevant. 
But when the goal is retrieving diverse or novel information about a topic, retrieval models need to capture dependencies between documents. Such tasks require alternative evaluation and optimization methods that operate on different types of relevance judgments. We define faceted topic retrieval as a particular novelty-driven task with the goal of finding a set of documents that cover the different facets of an information need. A faceted topic retrieval system must be able to cover as many facets as possible with the smallest number of documents. We introduce two novel models for faceted topic retrieval, one based on pruning a set of retrieved documents and one based on retrieving sets of documents through direct optimization of evaluation measures. We compare the performance of our models to MMR and the probabilistic model due to on a set of 60 topics annotated with facets, showing that our models are competitive.", "We study the problem of answering ambiguous web queries in a setting where there exists a taxonomy of information, and that both queries and documents may belong to more than one category according to this taxonomy. We present a systematic approach to diversifying results that aims to minimize the risk of dissatisfaction of the average user. We propose an algorithm that well approximates this objective in general, and is provably optimal for a natural special case. Furthermore, we generalize several classical IR metrics, including NDCG, MRR, and MAP, to explicitly account for the value of diversification. We demonstrate empirically that our algorithm scores higher in these generalized metrics compared to results produced by commercial search engines.", "We present a non-traditional retrieval problem we call subtopic retrieval. The subtopic retrieval problem is concerned with finding documents that cover many different subtopics of a query topic. 
In such a problem, the utility of a document in a ranking is dependent on other documents in the ranking, violating the assumption of independent relevance which is assumed in most traditional retrieval methods. Subtopic retrieval poses challenges for evaluating performance, as well as for developing effective algorithms. We propose a framework for evaluating subtopic retrieval which generalizes the traditional precision and recall metrics by accounting for intrinsic topic difficulty as well as redundancy in documents. We propose and systematically evaluate several methods for performing subtopic retrieval using statistical language models and a maximal marginal relevance (MMR) ranking strategy. A mixture model combined with query likelihood relevance ranking is shown to modestly outperform a baseline relevance ranking on a data set used in the TREC interactive track.", "The intent-oriented search diversification methods developed in the field so far tend to build on generative views of the retrieval system to be diversified. Core algorithm components in particular redundancy assessment are expressed in terms of the probability to observe documents, rather than the probability that the documents be relevant. This has been sometimes described as a view considering the selection of a single document in the underlying task model. In this paper we propose an alternative formulation of aspect-based diversification algorithms which explicitly includes a formal relevance model. We develop means for the effective computation of the new formulation, and we test the resulting algorithm empirically. We report experiments on search and recommendation tasks showing competitive or better performance than the original diversification algorithms. The relevance-based formulation has further interesting properties, such as unifying two well-known state of the art algorithms into a single version. 
The relevance-based approach opens alternative possibilities for further formal connections and developments as natural extensions of the framework. We illustrate this by modeling tolerance to redundancy as an explicit configurable parameter, which can be set to better suit the characteristics of the IR task, or the evaluation metrics, as we illustrate empirically.", "This paper presents a method for combining query-relevance with information-novelty in the context of text retrieval and summarization. The Maximal Marginal Relevance (MMR) criterion strives to reduce redundancy while maintaining query relevance in re-ranking retrieved documents and in selecting apprw priate passages for text summarization. Preliminary results indicate some benefits for MMR diversity ranking in document retrieval and in single document summarization. The latter are borne out by the recent results of the SUMMAC conference in the evaluation of summarization systems. However, the clearest advantage is demonstrated in constructing non-redundant multi-document summaries, where MMR results are clearly superior to non-MMR passage selection.", "When a Web user's underlying information need is not clearly specified from the initial query, an effective approach is to diversify the results retrieved for this query. In this paper, we introduce a novel probabilistic framework for Web search result diversification, which explicitly accounts for the various aspects associated to an underspecified query. In particular, we diversify a document ranking by estimating how well a given document satisfies each uncovered aspect and the extent to which different aspects are satisfied by the ranking as a whole. We thoroughly evaluate our framework in the context of the diversity task of the TREC 2009 Web track. Moreover, we exploit query reformulations provided by three major Web search engines (WSEs) as a means to uncover different query aspects. 
The results attest the effectiveness of our framework when compared to state-of-the-art diversification approaches in the literature. Additionally, by simulating an upper-bound query reformulation mechanism from official TREC data, we draw useful insights regarding the effectiveness of the query reformulations generated by the different WSEs in promoting diversity.", "" ] }
1504.04596
1877923259
Relevance and diversity are both crucial criteria for an effective search system. In this paper, we propose a unified learning framework for simultaneously optimizing both relevance and diversity. Specifically, the problem is formalized as a structural learning framework optimizing Diversity-Correlated Evaluation Measures (DCEM), such as ERR-IA, a-NDCG and NRBP. Within this framework, the discriminant function is defined to be a bi-criteria objective maximizing the sum of the relevance scores and dissimilarities (or diversity) among the documents. Relevance and diversity features are utilized to define the relevance scores and dissimilarities, respectively. Compared with traditional methods, the advantages of our approach lie in that: (1) Directly optimizing DCEM as the loss function is more fundamental for the task; (2) Our framework does not rely on explicit diversity information such as subtopics, thus is more adaptive to real application; (3) The representation of diversity as the feature-based scoring function is more flexible to incorporate rich diversity-based features into the learning framework. Extensive experiments on the public TREC datasets show that our approach significantly outperforms state-of-the-art diversification approaches, which validate the above advantages.
There are also some other methods that borrow theories from the economic or political domains @cite_13 @cite_24 @cite_4 . The work in @cite_13 @cite_24 applies economic portfolio theory to search result ranking, viewing diversification as a means of risk minimization. The approach in @cite_4 treats the problem of finding a diverse search result as one of finding a proportional representation for the document ranking, analogous to the seat-allocation step that is a critical part of most electoral processes.
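The seat-allocation idea behind the proportionality approach of @cite_4 can be sketched as a Sainte-Laguë-style greedy loop over query topics. The function name, the topic weights, and the use of the Sainte-Laguë quotient below are illustrative assumptions, not the cited authors' exact formulation:

```python
def proportional_ranking(docs_by_topic, topic_popularity, k):
    """Greedy sketch of proportionality-driven diversification:
    at each rank position, pick the topic with the largest
    Sainte-Lague quotient popularity / (2*seats + 1), then emit
    that topic's best remaining document (documents are assumed
    pre-sorted by relevance within each topic)."""
    seats = {t: 0 for t in topic_popularity}
    ranking = []
    for _ in range(k):
        candidates = [t for t in topic_popularity if docs_by_topic[t]]
        if not candidates:
            break
        best = max(candidates,
                   key=lambda t: topic_popularity[t] / (2 * seats[t] + 1))
        ranking.append(docs_by_topic[best].pop(0))
        seats[best] += 1
    return ranking
```

With hypothetical integer popularities 6:3:1 and five positions, the loop returns roughly a 3:2 split over the two most popular topics, mirroring the proportionality goal.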
{ "cite_N": [ "@cite_24", "@cite_13", "@cite_4" ], "mid": [ "2107126505", "1980730196", "2157391629" ], "abstract": [ "Result diversity is a topic of great importance as more facets of queries are discovered and users expect to find their desired facets in the first page of the results. However, the underlying questions of how 'diversity' interplays with 'quality' and when preference should be given to one or both are not well-understood. In this work, we model the problem as expectation maximization and study the challenges of estimating the model parameters and reaching an equilibrium. One model parameter, for example, is correlations between pages which we estimate using textual contents of pages and click data (when available). We conduct experiments on diversifying randomly selected queries from a query log and the queries chosen from the disambiguation topics of Wikipedia. Our algorithm improves upon Google in terms of the diversity of random queries, retrieving 14 to 38 more aspects of queries in top 5, while maintaining a precision very close to Google. On a more selective set of queries that are expected to benefit from diversification, our algorithm improves upon Google in terms of precision and diversity of the results, and significantly outperforms another baseline system for result diversification.", "This paper studies document ranking under uncertainty. It is tackled in a general situation where the relevance predictions of individual documents have uncertainty, and are dependent between each other. Inspired by the Modern Portfolio Theory, an economic theory dealing with investment in financial markets, we argue that ranking under uncertainty is not just about picking individual relevant documents, but about choosing the right combination of relevant documents. 
This motivates us to quantify a ranked list of documents on the basis of its expected overall relevance (mean) and its variance; the latter serves as a measure of risk, which was rarely studied for document ranking in the past. Through the analysis of the mean and variance, we show that an optimal rank order is the one that balancing the overall relevance (mean) of the ranked list against its risk level (variance). Based on this principle, we then derive an efficient document ranking algorithm. It generalizes the well-known probability ranking principle (PRP) by considering both the uncertainty of relevance predictions and correlations between retrieved documents. Moreover, the benefit of diversification is mathematically quantified; we show that diversifying documents is an effective way to reduce the risk of document ranking. Experimental results in text retrieval confirm performance.", "This paper presents a different perspective on diversity in search results: diversity by proportionality. We consider a result list most diverse, with respect to some set of topics related to the query, when the number of documents it provides on each topic is proportional to the topic's popularity. Consequently, we propose a framework for optimizing proportionality for search result diversification, which is motivated by the problem of assigning seats to members of competing political parties. Our technique iteratively determines, for each position in the result ranked list, the topic that best maintains the overall proportionality. It then selects the best document on this topic for this position. We demonstrate empirically that our method significantly outperforms the top performing approach in the literature not only on our proposed metric for proportionality, but also on several standard diversity measures. This result indicates that promoting proportionality naturally leads to minimal redundancy, which is a goal of the current diversity approaches." ] }
1504.04596
1877923259
Relevance and diversity are both crucial criteria for an effective search system. In this paper, we propose a unified learning framework for simultaneously optimizing both relevance and diversity. Specifically, the problem is formalized as a structural learning framework optimizing Diversity-Correlated Evaluation Measures (DCEM), such as ERR-IA, a-NDCG and NRBP. Within this framework, the discriminant function is defined to be a bi-criteria objective maximizing the sum of the relevance scores and dissimilarities (or diversity) among the documents. Relevance and diversity features are utilized to define the relevance scores and dissimilarities, respectively. Compared with traditional methods, the advantages of our approach lie in that: (1) Directly optimizing DCEM as the loss function is more fundamental for the task; (2) Our framework does not rely on explicit diversity information such as subtopics, thus is more adaptive to real application; (3) The representation of diversity as the feature-based scoring function is more flexible to incorporate rich diversity-based features into the learning framework. Extensive experiments on the public TREC datasets show that our approach significantly outperforms state-of-the-art diversification approaches, which validate the above advantages.
Recently, some researchers have proposed to use machine learning techniques to solve the diversification problem. @cite_27 propose to optimize subtopic coverage as the loss function, and formulate a discriminant function based on maximizing word coverage. However, their work focuses only on diversity and disregards the requirement of relevance. They claim that modeling relevance and diversity simultaneously is a more challenging problem, which is exactly what we tackle in this paper. @cite_33 propose the R-LTR model, which solves the diversification problem through a sequential ranking process. In contrast, our work solves the diversification problem from a discriminative view, within a unified framework. The authors of @cite_16 @cite_36 construct dynamic ranked-retrieval models, which may be useful for designing the user interfaces of future retrieval systems. Our paper focuses on the common static ranking scenario, which differs from theirs.
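Greedy inference over a learned bi-criteria objective of this kind — relevance scores plus pairwise dissimilarities among the selected documents — can be sketched as follows. The λ trade-off parameter, the score values, and the function names are hypothetical assumptions, not any cited model's exact form:

```python
def greedy_bicriteria_ranking(rel, dissim, k, lam=0.5):
    """Greedily build a ranking that maximizes the sum of
    relevance scores plus pairwise dissimilarities among the
    selected documents (a common surrogate for diversity).
    rel: dict doc -> relevance score
    dissim: dict frozenset({doc, doc}) -> dissimilarity in [0, 1]
    """
    selected = []
    remaining = set(rel)
    while remaining and len(selected) < k:
        def gain(d):
            # marginal contribution of d given what is already ranked
            return rel[d] + lam * sum(dissim[frozenset((d, s))]
                                      for s in selected)
        best = max(sorted(remaining), key=gain)  # sorted() breaks ties deterministically
        selected.append(best)
        remaining.remove(best)
    return selected
```

On a toy instance, the diversity term can overrule pure relevance: a slightly less relevant but very dissimilar document is ranked second instead of a near-duplicate of the top result.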
{ "cite_N": [ "@cite_36", "@cite_27", "@cite_16", "@cite_33" ], "mid": [ "2050419915", "2104895009", "2117476371", "" ], "abstract": [ "For ambiguous queries, conventional retrieval systems are bound by two conflicting goals. On the one hand, they should diversify and strive to present results for as many query intents as possible. On the other hand, they should provide depth for each intent by displaying more than a single result. Since both diversity and depth cannot be achieved simultaneously in the conventional static retrieval model, we propose a new dynamic ranking approach. In particular, our proposed two-level dynamic ranking model allows users to adapt the ranking through interaction, thus overcoming the constraints of presenting a one-size-fits-all static ranking. In this model, a user's interactions with the first-level ranking are used to infer this user's intent, so that second-level rankings can be inserted to provide more results relevant to this intent. Unlike previous dynamic ranking models, we provide an algorithm to efficiently compute dynamic rankings with provable approximation guarantees. We also propose the first principled algorithm for learning dynamic ranking functions from training data. In addition to the theoretical results, we provide empirical evidence demonstrating the gains in retrieval quality over conventional approaches.", "In many retrieval tasks, one important goal involves retrieving a diverse set of results (e.g., documents covering a wide range of topics for a search query). First of all, this reduces redundancy, effectively showing more information with the presented results. Secondly, queries are often ambiguous at some level. For example, the query \"Jaguar\" can refer to many different topics (such as the car or feline). A set of documents with high topic diversity ensures that fewer users abandon the query because no results are relevant to them. 
Unlike existing approaches to learning retrieval functions, we present a method that explicitly trains to diversify results. In particular, we formulate the learning problem of predicting diverse subsets and derive a training method based on structural SVMs.", "We present a theoretically well-founded retrieval model for dynamically generating rankings based on interactive user feedback. Unlike conventional rankings that remain static after the query was issued, dynamic rankings allow and anticipate user activity, thus providing a way to combine the otherwise contradictory goals of result diversification and high recall. We develop a decision-theoretic framework to guide the design and evaluation of algorithms for this interactive retrieval setting. Furthermore, we propose two dynamic ranking algorithms, both of which are computationally efficient. We prove that these algorithms provide retrieval performance that is guaranteed to be at least as good as the optimal static ranking algorithm. In empirical evaluations, dynamic ranking shows substantial improvements in retrieval performance over conventional static rankings.", "" ] }
1504.04596
1877923259
Relevance and diversity are both crucial criteria for an effective search system. In this paper, we propose a unified learning framework for simultaneously optimizing both relevance and diversity. Specifically, the problem is formalized as a structural learning framework optimizing Diversity-Correlated Evaluation Measures (DCEM), such as ERR-IA, a-NDCG and NRBP. Within this framework, the discriminant function is defined to be a bi-criteria objective maximizing the sum of the relevance scores and dissimilarities (or diversity) among the documents. Relevance and diversity features are utilized to define the relevance scores and dissimilarities, respectively. Compared with traditional methods, the advantages of our approach lie in that: (1) Directly optimizing DCEM as the loss function is more fundamental for the task; (2) Our framework does not rely on explicit diversity information such as subtopics, thus is more adaptive to real application; (3) The representation of diversity as the feature-based scoring function is more flexible to incorporate rich diversity-based features into the learning framework. Extensive experiments on the public TREC datasets show that our approach significantly outperforms state-of-the-art diversification approaches, which validate the above advantages.
There are also some on-line learning methods that learn retrieval models by exploiting user click data or implicit user feedback @cite_7 @cite_31 @cite_37 . These works can tackle the diversity problem to some extent, but they focus on an 'on-line' or 'interactive' scenario, which differs from ours. For example, @cite_37 propose an on-line algorithm that presents a ranking to users at each step, observes the set of documents the user reads in the presented ranking, and then updates its model. In our work, by contrast, we utilize human labels conveying relevance and subtopic information to learn an optimal 'off-line' retrieval model. The most representative scenario is the diversity task of the Web Track in TREC. In fact, the two modes (i.e. 'off-line' and 'on-line') are complementary in practical applications. People usually utilize historical human-labeled data to train an optimal retrieval model, and then use on-line user feedback to update the retrieval model dynamically. We may investigate on-line algorithms in our future work.
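The click-driven on-line loop described in @cite_37 @cite_7 can be sketched as one bandit per rank position, where a position is rewarded only when its document draws the user's first click. The ε-greedy policy, the incremental-mean update, and all names here are illustrative assumptions rather than the cited algorithms verbatim:

```python
import random

class RankedBandits:
    """Sketch of per-position bandit learning of a diverse ranking:
    each rank position keeps its own click-through estimates, so
    lower positions learn to cover intents the top positions miss."""
    def __init__(self, docs, k, epsilon=0.1, seed=0):
        self.docs, self.k, self.eps = list(docs), k, epsilon
        self.rng = random.Random(seed)
        # value[i][d]: estimated first-click rate of doc d at rank i
        self.value = [dict.fromkeys(docs, 0.0) for _ in range(k)]
        self.count = [dict.fromkeys(docs, 0) for _ in range(k)]

    def present(self):
        """Build a ranking: explore with prob. eps, else exploit."""
        ranking = []
        for i in range(self.k):
            pool = [d for d in self.docs if d not in ranking]
            if self.rng.random() < self.eps:
                ranking.append(self.rng.choice(pool))
            else:
                ranking.append(max(pool, key=lambda d: self.value[i][d]))
        return ranking

    def update(self, ranking, first_click):
        """Reward only the position whose document was clicked first."""
        for i, d in enumerate(ranking):
            reward = 1.0 if d == first_click else 0.0
            self.count[i][d] += 1
            self.value[i][d] += (reward - self.value[i][d]) / self.count[i][d]
```

Each presented ranking doubles as an experiment: feedback at one position leaves the other positions' estimates untouched, which is what lets the list as a whole diversify over time.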
{ "cite_N": [ "@cite_31", "@cite_37", "@cite_7" ], "mid": [ "2963773169", "2088121730", "2023599408" ], "abstract": [ "", "In order to minimize redundancy and optimize coverage of multiple user interests, search engines and recommender systems aim to diversify their set of results. To date, these diversification mechanisms are largely hand-coded or relied on expensive training data provided by experts. To overcome this problem, we propose an online learning model and algorithms for learning diversified recommendations and retrieval functions from implicit feedback. In our model, the learning algorithm presents a ranking to the user at each step, and uses the set of documents from the presented ranking, which the user reads, as feedback. Even for imperfect and noisy feedback, we show that the algorithms admit theoretical guarantees for maximizing any submodular utility measure under approximately rational user behavior. In addition to the theoretical results, we find that the algorithm learns quickly, accurately, and robustly in empirical evaluations on two datasets.", "Algorithms for learning to rank Web documents usually assume a document's relevance is independent of other documents. This leads to learned ranking functions that produce rankings with redundant results. In contrast, user studies have shown that diversity at high ranks is often preferred. We present two online learning algorithms that directly learn a diverse ranking of documents based on users' clicking behavior. We show that these algorithms minimize abandonment, or alternatively, maximize the probability that a relevant document is found in the top k positions of a ranking. Moreover, one of our algorithms asymptotically achieves optimal worst-case performance even if users' interests change." ] }
1504.04596
1877923259
Relevance and diversity are both crucial criteria for an effective search system. In this paper, we propose a unified learning framework for simultaneously optimizing both relevance and diversity. Specifically, the problem is formalized as a structural learning framework optimizing Diversity-Correlated Evaluation Measures (DCEM), such as ERR-IA, a-NDCG and NRBP. Within this framework, the discriminant function is defined to be a bi-criteria objective maximizing the sum of the relevance scores and dissimilarities (or diversity) among the documents. Relevance and diversity features are utilized to define the relevance scores and dissimilarities, respectively. Compared with traditional methods, the advantages of our approach lie in that: (1) Directly optimizing DCEM as the loss function is more fundamental for the task; (2) Our framework does not rely on explicit diversity information such as subtopics, thus is more adaptive to real application; (3) The representation of diversity as the feature-based scoring function is more flexible to incorporate rich diversity-based features into the learning framework. Extensive experiments on the public TREC datasets show that our approach significantly outperforms state-of-the-art diversification approaches, which validate the above advantages.
In the early stage, @cite_1 defined a number of subtopic recall metrics to measure diversity. Recently, many evaluation measures based on cascade models have been proposed, such as @math -NDCG @cite_28 , ERR-IA @cite_11 , and NRBP @cite_15 . They measure the diversity of a result list by explicitly rewarding novelty and penalizing redundancy observed at every rank. Meanwhile, @cite_21 propose intent-aware versions of the traditional measures, such as MAP-IA and Precision-IA, in which a traditional measure is applied to each subtopic independently and the results are then combined. More recently, Sakai and Song compared a wide range of diversified IR metrics, and proposed a family of @math measures with high discriminative power @cite_25 @cite_34 . Interestingly, a novel proportionality measure called CPR (Cumulative Proportionality) has been proposed @cite_4 , which captures proportionality in search results.
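The novelty-rewarding, redundancy-penalizing behavior of cascade measures such as @math -NDCG can be illustrated with a minimal unnormalized alpha-DCG sketch; the parameter defaults and names below are assumptions, and the normalization by an ideal ranking is omitted:

```python
import math

def alpha_dcg(ranking, doc_subtopics, alpha=0.5, depth=10):
    """Sketch of alpha-DCG: each occurrence of a subtopic gains
    (1 - alpha)**(times that subtopic was already covered),
    discounted logarithmically by rank. alpha-nDCG would divide
    this by the alpha-DCG of an ideal reordering (not shown)."""
    seen = {}  # subtopic -> number of earlier documents covering it
    score = 0.0
    for rank, doc in enumerate(ranking[:depth], start=1):
        # gain is computed *before* updating `seen`, so novelty is
        # judged relative to the documents ranked above this one
        gain = sum((1 - alpha) ** seen.get(t, 0)
                   for t in doc_subtopics.get(doc, ()))
        for t in doc_subtopics.get(doc, ()):
            seen[t] = seen.get(t, 0) + 1
        score += gain / math.log2(rank + 1)
    return score
```

On a toy collection, a ranking that covers a second subtopic early scores higher than one that first repeats the already-covered subtopic, which is exactly the behavior these cascade metrics reward.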
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_21", "@cite_1", "@cite_15", "@cite_34", "@cite_25", "@cite_11" ], "mid": [ "2157391629", "2132314908", "1993320088", "2197919320", "1498594556", "1988619666", "2058624977", "2113640060" ], "abstract": [ "This paper presents a different perspective on diversity in search results: diversity by proportionality. We consider a result list most diverse, with respect to some set of topics related to the query, when the number of documents it provides on each topic is proportional to the topic's popularity. Consequently, we propose a framework for optimizing proportionality for search result diversification, which is motivated by the problem of assigning seats to members of competing political parties. Our technique iteratively determines, for each position in the result ranked list, the topic that best maintains the overall proportionality. It then selects the best document on this topic for this position. We demonstrate empirically that our method significantly outperforms the top performing approach in the literature not only on our proposed metric for proportionality, but also on several standard diversity measures. This result indicates that promoting proportionality naturally leads to minimal redundancy, which is a goal of the current diversity approaches.", "Evaluation measures act as objective functions to be optimized by information retrieval systems. Such objective functions must accurately reflect user requirements, particularly when tuning IR systems and learning ranking functions. Ambiguity in queries and redundancy in retrieved documents are poorly reflected by current evaluation measures. In this paper, we present a framework for evaluation that systematically rewards novelty and diversity. We develop this framework into a specific evaluation measure, based on cumulative gain. 
We demonstrate the feasibility of our approach using a test collection based on the TREC question answering track.", "We study the problem of answering ambiguous web queries in a setting where there exists a taxonomy of information, and that both queries and documents may belong to more than one category according to this taxonomy. We present a systematic approach to diversifying results that aims to minimize the risk of dissatisfaction of the average user. We propose an algorithm that well approximates this objective in general, and is provably optimal for a natural special case. Furthermore, we generalize several classical IR metrics, including NDCG, MRR, and MAP, to explicitly account for the value of diversification. We demonstrate empirically that our algorithm scores higher in these generalized metrics compared to results produced by commercial search engines.", "We present a non-traditional retrieval problem we call subtopic retrieval. The subtopic retrieval problem is concerned with finding documents that cover many different subtopics of a query topic. In such a problem, the utility of a document in a ranking is dependent on other documents in the ranking, violating the assumption of independent relevance which is assumed in most traditional retrieval methods. Subtopic retrieval poses challenges for evaluating performance, as well as for developing effective algorithms. We propose a framework for evaluating subtopic retrieval which generalizes the traditional precision and recall metrics by accounting for intrinsic topic difficulty as well as redundancy in documents. We propose and systematically evaluate several methods for performing subtopic retrieval using statistical language models and a maximal marginal relevance (MMR) ranking strategy. 
A mixture model combined with query likelihood relevance ranking is shown to modestly outperform a baseline relevance ranking on a data set used in the TREC interactive track.", "Building upon simple models of user needs and behavior, we propose a new measure of novelty and diversity for information retrieval evaluation. We combine ideas from three recently proposed effectiveness measures in an attempt to achieve a balance between the complexity of genuine users needs and the simplicity required for feasible evaluation.", "Given an ambiguous or underspecified query, search result diversification aims at accomodating different user intents within a single \"entry-point\" result page. However, some intents are informational, for which many relevant pages may help, while others are navigational, for which only one web page is required. We propose new evaluation metrics for search result diversification that considers this distinction, as well as a simple method for comparing the intuitiveness of a given pair of metrics quantitatively. Our main experimental findings are: (a) In terms of discriminative power which reflects statistical reliability, the proposed metrics, DIN#-nDCG and P+Q#, are comparable to intent recall and D#-nDCG, and possibly superior to α-nDCG; (b) In terms of preference agreement with intent recall, P+Q# is superior to other diversity metrics and therefore may be the most intuitive as a metric that emphasises diversity; and (c) In terms of preference agreement with effective precision, DIN#-nDCG is superior to other diversity metrics and therefore may be the most intuitive as a metric that emphasises relevance. Moreover, DIN#-nDCG may be the most intuitive as a metric that considers both diversity and relevance. 
In addition, we demonstrate that the randomised Tukey's Honestly Significant Differences test that takes the entire set of available runs into account is substantially more conservative than the paired bootstrap test that only considers one run pair at a time, and therefore recommend the former approach for significance testing when a set of runs is available for evaluation.", "Search queries are often ambiguous and or underspecified. To accomodate different user needs, search result diversification has received attention in the past few years. Accordingly, several new metrics for evaluating diversification have been proposed, but their properties are little understood. We compare the properties of existing metrics given the premises that (1) queries may have multiple intents; (2) the likelihood of each intent given a query is available; and (3) graded relevance assessments are available for each intent. We compare a wide range of traditional and diversified IR metrics after adding graded relevance assessments to the TREC 2009 Web track diversity task test collection which originally had binary relevance assessments. Our primary criterion is discriminative power, which represents the reliability of a metric in an experiment. Our results show that diversified IR experiments with a given number of topics can be as reliable as traditional IR experiments with the same number of topics, provided that the right metrics are used. Moreover, we compare the intuitiveness of diversified IR metrics by closely examining the actual ranked lists from TREC. We show that a family of metrics called D#-measures have several advantages over other metrics such as α-nDCG and Intent-Aware metrics.", "While numerous metrics for information retrieval are available in the case of binary relevance, there is only one commonly used metric for graded relevance, namely the Discounted Cumulative Gain (DCG). 
A drawback of DCG is its additive nature and the underlying independence assumption: a document in a given position has always the same gain and discount independently of the documents shown above it. Inspired by the \"cascade\" user model, we present a new editorial metric for graded relevance which overcomes this difficulty and implicitly discounts documents which are shown below very relevant documents. More precisely, this new metric is defined as the expected reciprocal length of time that the user will take to find a relevant document. This can be seen as an extension of the classical reciprocal rank to the graded relevance case and we call this metric Expected Reciprocal Rank (ERR). We conduct an extensive evaluation on the query logs of a commercial search engine and show that ERR correlates better with clicks metrics than other editorial metrics." ] }
1504.04579
1867085891
We present the TyTra-IR, a new intermediate language intended as a compilation target for high-level language compilers and a front-end for HDL code generators. We develop the requirements of this new language based on the design-space of FPGAs that it should be able to express and the estimation-space in which each configuration from the design-space should be mappable in an automated design flow. We use a simple kernel to illustrate multiple configurations using the semantics of TyTra-IR. The key novelty of this work is the cost model for resource-costs and throughput for different configurations of interest for a particular kernel. Through the realistic example of a Successive Over-Relaxation kernel implemented both in TyTra-IR and HDL, we demonstrate both the expressiveness of the IR and the accuracy of our cost model.
High-Level Synthesis for FPGAs is an established technology in both academia and industry. There are two ways of comparing our work with others. If we look at the entire TyTra flow as shown in Figure , then the comparison would be against other C-to-gates tools that take legacy code and generate FPGA implementation code from it. As an example, LegUp @cite_4 is an emerging academic tool developed for this purpose. Our own front-end compiler is a work in progress and is not the focus of this paper.
{ "cite_N": [ "@cite_4" ], "mid": [ "2018055497" ], "abstract": [ "In this paper, we introduce a new open source high-level synthesis tool called LegUp that allows software techniques to be used for hardware design. LegUp accepts a standard C program as input and automatically compiles the program to a hybrid architecture containing an FPGA-based MIPS soft processor and custom hardware accelerators that communicate through a standard bus interface. Results show that the tool produces hardware solutions of comparable quality to a commercial high-level synthesis tool." ] }
1504.04579
1867085891
We present the TyTra-IR, a new intermediate language intended as a compilation target for high-level language compilers and a front-end for HDL code generators. We develop the requirements of this new language based on the design-space of FPGAs that it should be able to express and the estimation-space in which each configuration from the design-space should be mappable in an automated design flow. We use a simple kernel to illustrate multiple configurations using the semantics of TyTra-IR. The key novelty of this work is the cost model for resource-costs and throughput for different configurations of interest for a particular kernel. Through the realistic example of a Successive Over-Relaxation kernel implemented both in TyTra-IR and HDL, we demonstrate both the expressiveness of the IR and the accuracy of our cost model.
Altera-OCL is an OpenCL-compatible development environment for targeting Altera FPGAs @cite_2 . It offers a familiar development ecosystem to programmers already used to programming GPUs and many multi-cores using OpenCL. A comparison of a high-level language like OpenCL with TyTra-IR would come to conclusions similar to those reached in relation to MaxJ. In addition, we feel that the intrinsic parallelism model of OpenCL, which is based on multi-threaded work-items, is not suitable for FPGA targets, which offer the best performance via the use of deep, custom pipelines. Altera-OCL is, however, of considerable importance to our work, as we do not plan to develop our own host API or the board package for dealing with FPGA peripheral functionality. We will wrap our custom HDL inside an OpenCL device abstraction, and will use OpenCL API calls for launching kernels and all host-device interactions.
{ "cite_N": [ "@cite_2" ], "mid": [ "2000921084" ], "abstract": [ "We present an OpenCL compilation framework to generate high-performance hardware for FPGAs. For an OpenCL application comprising a host program and a set of kernels, it compiles the host program, generates Verilog HDL for each kernel, compiles the circuit using Altera Complete Design Suite 12.0, and downloads the compiled design onto an FPGA.We can then run the application by executing the host program on a Windows(tm)-based machine, which communicates with kernels on an FPGA using a PCIe interface. We implement four applications on an Altera Stratix IV and present the throughput and area results for each application. We show that we can achieve a clock frequency in excess of 160MHz on our benchmarks, and that OpenCL computing paradigm is a viable design entry method for high-performance computing applications on FPGAs." ] }
1504.04449
1905286351
We obtain a lower bound on the maximum number of qubits, Qn, e(N), which can be transmitted over n uses of a quantum channel N, for a given non-zero error threshold e. To obtain our result, we first derive a bound on the one-shot entanglement transmission capacity of the channel, and then compute its asymptotic expansion up to the second order. In our method to prove this achievability bound, the decoding map, used by the receiver on the output of the channel, is chosen to be the Petz recovery map (also known as the transpose channel). Our result, in particular, shows that this choice of the decoder can be used to establish the coherent information as an achievable rate for quantum information transmission. Applying our achievability bound to the 50-50 erasure channel (which has zero quantum capacity), we find that there is a sharp error threshold above which Qn, e(N) scales as n.
Our lower bound is reminiscent of the second order asymptotic expansion for the maximum number of bits of information which can be transmitted through @math uses of a discrete, memoryless classical channel @math , with an average probability of error of at most @math , denoted by @math . Such an expansion was first derived by Strassen in 1962 @cite_29 and refined by Hayashi @cite_2 as well as Polyanskiy, Poor and Verdú @cite_35 . It is given by where @math denotes the capacity of the channel (given by Shannon's formula @cite_34 ) and @math is an @math -dependent characteristic of the channel called its @math -dispersion @cite_35 .
{ "cite_N": [ "@cite_35", "@cite_29", "@cite_34", "@cite_2" ], "mid": [ "2106864314", "1621050367", "1995875735", "2104340231" ], "abstract": [ "This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths n as short as 100. It is also shown analytically that the maximal rate achievable with error probability ε is closely approximated by C - sqrt(V/n) Q^{-1}(ε), where C is the capacity, V is a characteristic of the channel referred to as channel dispersion, and Q is the complementary Gaussian cumulative distribution function.", "Let ω,ϱ be two states of a ∗-algebra and let us consider representations of this algebra R for which ω and ϱ are realized as vector states by vectors x and y. The transition probability P(ω,ϱ) is the spectrum of all the numbers |(x,y)|^2 taken over all such realizations. We derive properties of this straightforward generalization of the quantum mechanical transition probability and give, in some important cases, an explicit expression for this quantity.", "In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.", "In this paper, second-order coding rate of channel coding is discussed for general sequence of channels. The optimum second-order transmission rate with a constant error constraint epsiv is obtained by using the information spectrum method. We apply this result to the discrete memoryless case, the discrete memoryless case with a cost constraint, the additive Markovian case, and the Gaussian channel case with an energy constraint. We also clarify that the Gallager bound does not give the optimum evaluation in the second-order coding rate." ] }
1504.04449
1905286351
We obtain a lower bound on the maximum number of qubits, Qn, e(N), which can be transmitted over n uses of a quantum channel N, for a given non-zero error threshold e. To obtain our result, we first derive a bound on the one-shot entanglement transmission capacity of the channel, and then compute its asymptotic expansion up to the second order. In our method to prove this achievability bound, the decoding map, used by the receiver on the output of the channel, is chosen to be the Petz recovery map (also known as the transpose channel). Our result, in particular, shows that this choice of the decoder can be used to establish the coherent information as an achievable rate for quantum information transmission. Applying our achievability bound to the 50-50 erasure channel (which has zero quantum capacity), we find that there is a sharp error threshold above which Qn, e(N) scales as n.
In the last decade there has been a renewal of interest in the evaluation of second order asymptotics for other classical information-theoretic tasks (see e.g. @cite_38 @cite_2 @cite_4 and references therein) and, more recently, even in third-order asymptotics @cite_32 . The study of second order asymptotics in Quantum Information Theory was initiated by Tomamichel and Hayashi @cite_9 and Li @cite_41 . The achievability parts of the second order asymptotics for the tasks studied in @cite_9 @cite_41 were later also obtained in @cite_49 via the collision relative entropy.
{ "cite_N": [ "@cite_38", "@cite_4", "@cite_41", "@cite_9", "@cite_32", "@cite_49", "@cite_2" ], "mid": [ "", "2168800968", "2091240287", "", "1999453492", "2092119860", "2104340231" ], "abstract": [ "", "We treat a random number generation from an i.i.d. probability distribution of P to that of Q. When Q or P is a uniform distribution, the problems have been well-known as the uniform random number generation and the resolvability problem respectively, and analyzed not only in the context of the first order asymptotic theory but also that in the second asymptotic theory. On the other hand, when both P and Q are not a uniform distribution, the second order asymptotics has not been treated. In this paper, we focus on the second order asymptotics of random number generation for arbitrary probability distributions P and Q on a finite set. In particular, we derive the optimal second order generation rate under an arbitrary permissible confidence coefficient.", "In the asymptotic theory of quantum hypothesis testing, the minimal error probability of the first kind jumps sharply from zero to one when the error exponent of the second kind passes by the point of the relative entropy of the two states in an increasing way. This is well known as the direct part and strong converse of quantum Stein's lemma. Here we look into the behavior of this sudden change and have made it clear how the error of first kind grows smoothly according to a lower order of the error exponent of the second kind, and hence we obtain the second-order asymptotics for quantum hypothesis testing. This actually implies quantum Stein's lemma as a special case. Meanwhile, our analysis also yields tight bounds for the case of finite sample size. These results have potential applications in quantum information theory. Our method is elementary, based on basic linear algebra and probability theory. It deals with the achievability part and the optimality part in a unified fashion.", "", "This paper shows new general nonasymptotic achievability and converse bounds and performs their dispersion analysis for the lossy compression problem in which the compressor observes the source through a noisy channel. While this problem is asymptotically equivalent to a noiseless lossy source coding problem with a modified distortion function, nonasymptotically there is a difference in how fast their minimum achievable coding rates approach the rate-distortion function, providing yet another example where at finite blocklengths one must put aside traditional asymptotic thinking.", "In this paper, we provide a simple framework for deriving one-shot achievable bounds for some problems in quantum information theory. Our framework is based on the joint convexity of the exponential of the collision relative entropy and is a (partial) quantum generalization of the technique of from classical information theory. Based on this framework, we derive one-shot achievable bounds for the problems of communication over classical-quantum channels, quantum hypothesis testing, and classical data compression with quantum side information. We argue that our one-shot achievable bounds are strong enough to give the asymptotic achievable rates of these problems even up to the second order.", "In this paper, second-order coding rate of channel coding is discussed for general sequence of channels. The optimum second-order transmission rate with a constant error constraint epsiv is obtained by using the information spectrum method. We apply this result to the discrete memoryless case, the discrete memoryless case with a cost constraint, the additive Markovian case, and the Gaussian channel case with an energy constraint. We also clarify that the Gallager bound does not give the optimum evaluation in the second-order coding rate." ] }
1504.04449
1905286351
We obtain a lower bound on the maximum number of qubits, Qn, e(N), which can be transmitted over n uses of a quantum channel N, for a given non-zero error threshold e. To obtain our result, we first derive a bound on the one-shot entanglement transmission capacity of the channel, and then compute its asymptotic expansion up to the second order. In our method to prove this achievability bound, the decoding map, used by the receiver on the output of the channel, is chosen to be the Petz recovery map (also known as the transpose channel). Our result, in particular, shows that this choice of the decoder can be used to establish the coherent information as an achievable rate for quantum information transmission. Applying our achievability bound to the 50-50 erasure channel (which has zero quantum capacity), we find that there is a sharp error threshold above which Qn, e(N) scales as n.
Our second order achievability bound is similar in form to . Nevertheless, its optimality is open. Note that it follows from the strong converse property of the quantum capacity of generalized dephasing channels @cite_0 that, for such channels, @math is exactly equal to the first order asymptotic rate (and not just a lower bound on it) for any @math . Moreover, from the result of @cite_16 it follows that, for degradable channels, the first order asymptotic rate is given by @math for @math . Our bound has recently been shown to be tight up to second order expansion for all @math for the qubit dephasing channel @cite_14 .
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_16" ], "mid": [ "1622848806", "906791365", "2027468780" ], "abstract": [ "We revisit a fundamental open problem in quantum information theory, namely, whether it is possible to transmit quantum information at a rate exceeding the channel capacity if we allow for a non-vanishing probability of decoding error. Here, we establish that the Rains information of any quantum channel is a strong converse rate for quantum communication. For any sequence of codes with rate exceeding the Rains information of the channel, we show that the fidelity vanishes exponentially fast as the number of channel uses increases. This remains true even if we consider codes that perform classical post-processing on the transmitted quantum data. As an application of this result, for generalized dephasing channels, we show that the Rains information is also achievable, and thereby establish the strong converse property for quantum communication over such channels. Thus, we conclusively settle the strong converse question for a class of quantum channels that have a non-trivial quantum capacity.", "The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances.", "We exhibit a possible road toward a strong converse for the quantum capacity of degradable channels. In particular, we show that all degradable channels obey what we call a “pretty strong” converse: when the code rate increases above the quantum capacity, the fidelity makes a discontinuous jump from 1 to at most 1/√2, asymptotically. A similar result can be shown for the private (classical) capacity. Furthermore, we can show that if the strong converse holds for symmetric channels (which have quantum capacity zero), then degradable channels obey the strong converse. The above-mentioned asymptotic jump of the fidelity at the quantum capacity then decreases from 1 to 0." ] }
1504.04449
1905286351
We obtain a lower bound on the maximum number of qubits, Qn, e(N), which can be transmitted over n uses of a quantum channel N, for a given non-zero error threshold e. To obtain our result, we first derive a bound on the one-shot entanglement transmission capacity of the channel, and then compute its asymptotic expansion up to the second order. In our method to prove this achievability bound, the decoding map, used by the receiver on the output of the channel, is chosen to be the Petz recovery map (also known as the transpose channel). Our result, in particular, shows that this choice of the decoder can be used to establish the coherent information as an achievable rate for quantum information transmission. Applying our achievability bound to the 50-50 erasure channel (which has zero quantum capacity), we find that there is a sharp error threshold above which Qn, e(N) scales as n.
The Petz recovery map (or transpose channel) was introduced by Petz @cite_40 @cite_43 (see also @cite_31 ). In @cite_46 it was shown that, if the Petz recovery map is used as the decoding operation, then the average error incurred in sending an ensemble of commuting states through a quantum channel is at most twice the minimum error. Later, this map was also used to characterize so-called quantum Markov chain states @cite_1 . Furthermore, it was used to study quantum error correcting codes in @cite_12 .
{ "cite_N": [ "@cite_31", "@cite_1", "@cite_43", "@cite_40", "@cite_46", "@cite_12" ], "mid": [ "1571989874", "2071849960", "", "2105957088", "1976865215", "2591783339" ], "abstract": [ "I Entropies for Finite Quantum Systems.- 1 Fundamental Concepts.- 2 Postulates for Entropy and Relative Entropy.- 3 Convex Trace Functions.- II Entropies for General Quantum Systems.- 4 Modular Theory and Auxiliaries.- 5 Relative Entropy of States of Operator Algebras.- 6 From Relative Entropy to Entropy.- 7 Functionals of Entropy Type.- III Channeling Transformation and Coarse Graining.- 8 Channels and Their Transpose.- 9 Sufficient Channels and Measurements.- 10 Dynamical Entropy.- 11 Stationary Processes.- IV Perturbation Theory.- 12 Perturbation of States.- 13 Variational Expression of Perturbational Limits.- V Miscellanea.- 14 Central Limit and Quasi-free Reduction.- 15 Thermodynamics of Quantum Spin Systems.- 16 Entropic Uncertainty Relations.- 17 Temperley-Lieb Algebras and Index.- 18 Optical Communication Processes.", "We give an explicit characterisation of the quantum states which saturate the strong subadditivity inequality for the von Neumann entropy. By combining a result of Petz characterising the equality case for the monotonicity of relative entropy with a recent theorem by Koashi and Imoto, we show that such states will have the form of a so–called short quantum Markov chain, which in turn implies that two of the systems are independent conditioned on the third, in a physically meaningful sense. This characterisation simultaneously generalises known necessary and sufficient entropic conditions for quantum error correction as well as the conditions for the achievability of the Holevo bound on accessible information.", "", "A subalgebra M0 of a von Neumann algebra M is called weakly sufficient with respect to a pair (φ,ω) of states if the relative entropy of φ and ω coincides with the relative entropy of their restrictions to M0. The main result says that M0 is weakly sufficient for (φ,ω) if and only if M0 contains the Radon-Nikodym cocycle [Dφ,Dω]_t. Other conditions are formulated in terms of generalized conditional expectations and the relative Hamiltonian.", "We consider the problem of reversing quantum dynamics, with the goal of preserving an initial state’s quantum entanglement or classical correlation with a reference system. We exhibit an approximate reversal operation, adapted to the initial density operator and the “noise” dynamics to be reversed. We show that its error in preserving either quantum or classical information is no more than twice that of the optimal reversal operation. Applications to quantum algorithms and information transmission are discussed.", "We demonstrate that there exists a universal, near-optimal recovery map—the transpose channel—for approximate quantum error-correcting codes, where optimality is defined using the worst-case fidelity. Using the transpose channel, we provide an alternative interpretation of the standard quantum error correction (QEC) conditions and generalize them to a set of conditions for approximate QEC (AQEC) codes. This forms the basis of a simple algorithm for finding AQEC codes. Our analytical approach is a departure from earlier work relying on exhaustive numerical search for the optimal recovery map, with optimality defined based on entanglement fidelity. For the practically useful case of codes encoding a single qubit of information, our algorithm is particularly easy to implement." ] }
1504.04449
1905286351
We obtain a lower bound on the maximum number of qubits, Qn, e(N), which can be transmitted over n uses of a quantum channel N, for a given non-zero error threshold e. To obtain our result, we first derive a bound on the one-shot entanglement transmission capacity of the channel, and then compute its asymptotic expansion up to the second order. In our method to prove this achievability bound, the decoding map, used by the receiver on the output of the channel, is chosen to be the Petz recovery map (also known as the transpose channel). Our result, in particular, shows that this choice of the decoder can be used to establish the coherent information as an achievable rate for quantum information transmission. Applying our achievability bound to the 50-50 erasure channel (which has zero quantum capacity), we find that there is a sharp error threshold above which Qn, e(N) scales as n.
Our work should be considered as a new step towards understanding the usefulness of the Petz recovery map. In particular it would be interesting to see whether the ideas in our work can be used to show tight achievability bounds for other quantum protocols such as quantum state merging and quantum state redistribution. Another open question in this area is the optimality of the Petz recovery map in the Fawzi-Renner inequality @cite_8 for approximate Markov chain states (see @cite_10 and references therein for a discussion of this question).
{ "cite_N": [ "@cite_10", "@cite_8" ], "mid": [ "2229752136", "1948605327" ], "abstract": [ "The data processing inequality states that the quantum relative entropy between two states ρ and σ can never increase by applying the same quantum channel N to both states. This inequality can be strengthened with a remainder term in the form of a distance between ρ and the closest recovered state (R∘N)(ρ), where R is a recovery map with the property that σ=(R∘N)(σ). We show the existence of an explicit recovery map that is universal in the sense that it depends only on σ and the quantum channel N to be reversed. This result gives an alternate, information-theoretic characterization of the conditions for approximate quantum error correction.", "A state on a tripartite quantum system ( A B C ) forms a Markov chain if it can be reconstructed from its marginal on ( A B ) by a quantum operation from B to ( B C ). We show that the quantum conditional mutual information I(A : C|B) of an arbitrary state is an upper bound on its distance to the closest reconstructed state. It thus quantifies how well the Markov chain property is approximated." ] }
1504.04357
2949623584
In recent years social and news media have increasingly been used to explain patterns in disease activity and progression. Social media data, principally from the Twitter network, has been shown to correlate well with official disease case counts. This fact has been exploited to provide advance warning of outbreak detection, tracking of disease levels and the ability to predict the likelihood of individuals developing symptoms. In this paper we introduce DEFENDER, a software system that integrates data from social and news media and incorporates algorithms for outbreak detection, situational awareness, syndromic case tracking and forecasting. As part of this system we have developed a technique for creating a location network for any country or region based purely on Twitter data. We also present a disease count tracking approach which leverages counts from multiple symptoms, which was found to improve the tracking of diseases by 37 percent over a model that used only previous case data. Finally we attempt to forecast future levels of symptom activity based on observed user movement on Twitter, finding a moderate gain of 5 percent over a time series forecasting model.
One of the most interesting recent studies in this area was carried out by Li & Cardie @cite_23 . Using Twitter data they built a Markov Network able to determine when the flu levels in a US state had reached a ``breakout'' stage, at which an epidemic was imminent. Their system takes into account spatial information by building a simplified network map of the US, where each state is a node connected to its neighbouring states. It also takes into account Twitter's daily effect - the fluctuation in the number of tweets posted based on the day of the week.
{ "cite_N": [ "@cite_23" ], "mid": [ "1908960008" ], "abstract": [ "Influenza is an acute respiratory illness that occurs virtually every year and results in substantial disease, death and expense. Detection of Influenza in its earliest stage would facilitate timely action that could reduce the spread of the illness. Existing systems such as CDC and EISS which try to collect diagnosis data, are almost entirely manual, resulting in about two-week delays for clinical data acquisition. Twitter, a popular microblogging service, provides us with a perfect source for early-stage flu detection due to its real- time nature. For example, when a flu breaks out, people that get the flu may post related tweets which enables the detection of the flu breakout promptly. In this paper, we investigate the real-time flu detection problem on Twitter data by proposing Flu Markov Network (Flu-MN): a spatio-temporal unsupervised Bayesian algorithm based on a 4 phase Markov Network, trying to identify the flu breakout at the earliest stage. We test our model on real Twitter datasets from the United States along with baselines in multiple applications, such as real-time flu breakout detection, future epidemic phase prediction, or Influenza-like illness (ILI) physician visits. Experimental results show the robustness and effectiveness of our approach. We build up a real time flu reporting system based on the proposed approach, and we are hopeful that it would help government or health organizations in identifying flu outbreaks and facilitating timely actions to decrease unnecessary mortality." ] }
1504.04357
2949623584
In recent years social and news media have increasingly been used to explain patterns in disease activity and progression. Social media data, principally from the Twitter network, has been shown to correlate well with official disease case counts. This fact has been exploited to provide advance warning of outbreak detection, tracking of disease levels and the ability to predict the likelihood of individuals developing symptoms. In this paper we introduce DEFENDER, a software system that integrates data from social and news media and incorporates algorithms for outbreak detection, situational awareness, syndromic case tracking and forecasting. As part of this system we have developed a technique for creating a location network for any country or region based purely on Twitter data. We also present a disease count tracking approach which leverages counts from multiple symptoms, which was found to improve the tracking of diseases by 37 percent over a model that used only previous case data. Finally we attempt to forecast future levels of symptom activity based on observed user movement on Twitter, finding a moderate gain of 5 percent over a time series forecasting model.
In a further paper @cite_2 the authors built on this work to develop a model allowing them to predict the future level of influenza in US cities by modelling travel patterns of Twitter users. They collected geo-tagged tweets from the 100 busiest commercial airports in the US, and used their classifier to identify symptomatic individuals. They then built up a model of the travel patterns between airports by identifying users who had tweeted from multiple locations on subsequent days. Using this data they found that the most important factor in predicting the prevalence of flu in a given city was the number of symptomatic passengers that had flown into the city over the previous seven days.
{ "cite_N": [ "@cite_2" ], "mid": [ "1528377600" ], "abstract": [ "Researchers have begun to mine social network data in order to predict a variety of social, economic, and health related phenomena. While previous work has focused on predicting aggregate properties, such as the prevalence of seasonal influenza in a given country, we consider the task of fine-grained prediction of the health of specific people from noisy and incomplete data. We construct a probabilistic model that can predict if and when an individual will fall ill with high precision and good recall on the basis of his social ties and co-locations with other people, as revealed by their Twitter posts. Our model is highly scalable and can be used to predict general dynamic properties of individuals in large realworld social networks. These results provide a foundation for research on fundamental questions of public health, including the identification of non-cooperative disease carriers (\"Typhoid Marys\"), adaptive vaccination policies, and our understanding of the emergence of global epidemics from day-today interpersonal interactions." ] }
1504.04090
777557577
We propose Ordered Subspace Clustering (OSC) to segment data drawn from a sequentially ordered union of subspaces. Similar to Sparse Subspace Clustering (SSC) we formulate the problem as one of finding a sparse representation but include an additional penalty term to take care of sequential data. We test our method on data drawn from infrared hyper spectral, video and motion capture data. Experiments show that our method, OSC, outperforms the state of the art methods: Spatial Subspace Clustering (SpatSC), Low-Rank Representation (LRR) and SSC.
To learn the subspace structure of the data, spectral subspace segmentation methods exploit the self-expressive property @cite_6 :
{ "cite_N": [ "@cite_6" ], "mid": [ "1993962865" ], "abstract": [ "Many real-world problems deal with collections of high-dimensional data, such as images, videos, text, and web documents, DNA microarray data, and more. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering." ] }
1504.04090
777557577
After learning @math the next step is to assign each data point a subspace label. The first step in this process is to build a symmetric affinity matrix, in which element @math of @math is interpreted as the affinity or similarity between data points @math and @math. Next, this affinity matrix is passed to a spectral clustering method for the final segmentation. Normalised Cuts (NCut) @cite_31 is the de facto spectral clustering method for this task @cite_6 @cite_7.
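A sketch of this pipeline, assuming the common affinity construction W = |Z| + |Z|^T; the two-way spectral cut below is a simplified stand-in for full NCut, and the toy coefficient matrix is illustrative:

```python
import numpy as np

def affinity_from_codes(Z):
    """Symmetric affinity from a self-expressive coefficient matrix:
    W = |Z| + |Z|^T, so w_ij reflects how strongly i and j represent
    each other."""
    return np.abs(Z) + np.abs(Z).T

def spectral_bipartition(W):
    """Two-way spectral cut: labels from the sign pattern of the second
    smallest eigenvector of the normalized Laplacian (a simple stand-in
    for NCut with k = 2)."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)

# Toy coefficient matrix with two groups {0,1,2} and {3,4,5}.
Z = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    Z[i, j] = 0.9
Z[2, 3] = 0.05  # a weak spurious cross-group coefficient
labels = spectral_bipartition(affinity_from_codes(Z))
```

The weak cross-group coefficient does not confuse the cut: the second eigenvector changes sign exactly between the two strongly connected groups.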
{ "cite_N": [ "@cite_31", "@cite_7", "@cite_6" ], "mid": [ "2121947440", "", "1993962865" ], "abstract": [ "We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.", "", "Many real-world problems deal with collections of high-dimensional data, such as images, videos, text, and web documents, DNA microarray data, and more. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering." ] }
Recent work by Soltanolkotabi, Elhamifar and Candes @cite_10 showed that, under rather broad conditions, the @math approach should produce accurate clustering results even when the data @math are noisy. These conditions include bounds on the signal-to-noise ratio, the number of samples in each cluster, and the distance between subspaces, together with an appropriate selection of parameters. They use a relaxed objective with the regularisation parameter @math tuned for each data sample.
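The relaxed per-sample objective has the Lasso form 0.5*||y - A z||^2 + lam*||z||_1, and the role of the tuned regularisation parameter can be seen directly: a larger lam buys sparsity at the cost of reconstruction accuracy. A small coordinate-descent sketch with illustrative sizes and lam values:

```python
import numpy as np

def lasso_cd(A, y, lam, n_iter=200):
    """Coordinate-descent solver for the relaxed per-sample objective
    0.5*||y - A z||^2 + lam*||z||_1 used when the data are noisy."""
    n = A.shape[1]
    z = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(n):
            r = y - A @ z + A[:, j] * z[j]  # residual without coordinate j
            rho = A[:, j] @ r
            z[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
y = A[:, 0] * 2.0 + 0.05 * rng.standard_normal(20)  # noisy copy of column 0
z_small = lasso_cd(A, y, lam=0.01)
z_large = lasso_cd(A, y, lam=5.0)
```

With a small lam the dominant coefficient correctly points at column 0; with a large lam the support shrinks, illustrating why the parameter must be tuned per sample.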
{ "cite_N": [ "@cite_10" ], "mid": [ "2152461258" ], "abstract": [ "Subspace clustering refers to the task of nding a multi-subspace representation that best ts a collection of points taken from a high-dimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) (25) to cluster noisy data, and develops some novel theory demonstrating its correctness. In particular, the theory uses ideas from geometric functional analysis to show that the algorithm can accurately recover the underlying subspaces under minimal requirements on their orientation, and on the number of samples per subspace. Synthetic as well as real data experiments complement our theoretical study, illustrating our approach and demonstrating its eectiveness." ] }
Rather than compute the sparsest representation of each data point individually, Low-Rank Representation (LRR) by Liu, Lin and Yu @cite_39 attempts to incorporate the global structure of the data by computing the lowest-rank representation of the set of data points: the objective becomes minimising the rank of the coefficient matrix (relaxed in practice to its nuclear norm) subject to the self-expression constraint. This means that not only can the data points be decomposed as a linear combination of other points, but the entire coefficient matrix should be low-rank. The aim of the rank penalty is to create a global grouping effect that reflects the underlying subspace structure of the data. In other words, data points belonging to the same subspace should have similar coefficient patterns.
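In the noiseless case this nuclear-norm objective has a known closed-form minimiser, the shape interaction matrix Z = V V^T built from the skinny SVD of X. A small sketch with illustrative data from two orthogonal subspaces, showing the block-diagonal grouping effect:

```python
import numpy as np

def lrr_noiseless(X, tol=1e-10):
    """Closed-form minimiser of ||Z||_* subject to X = X Z: the shape
    interaction matrix Z = V V^T from the skinny SVD of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[s > tol].T  # right singular vectors of the nonzero singular values
    return V @ V.T

# Data from two orthogonal 1-D subspaces in R^4 (3 points each).
rng = np.random.default_rng(2)
b1 = np.array([1.0, 0, 0, 0]); b2 = np.array([0, 0, 1.0, 0])
X = np.hstack([np.outer(b1, rng.uniform(1, 2, 3)),
               np.outer(b2, rng.uniform(1, 2, 3))])
Z = lrr_noiseless(X)
```

The resulting Z satisfies the constraint exactly, has rank equal to the number of subspaces, and is block-diagonal: points from the same subspace share coefficient patterns, which is the grouping effect described above.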
{ "cite_N": [ "@cite_39" ], "mid": [ "79405465" ], "abstract": [ "We propose low-rank representation (LRR) to segment data drawn from a union of multiple linear (or affine) subspaces. Given a set of data vectors, LRR seeks the lowest-rank representation among all the candidates that represent all vectors as the linear combination of the bases in a dictionary. Unlike the well-known sparse representation (SR), which computes the sparsest representation of each data vector individually, LRR aims at finding the lowest-rank representation of a collection of vectors jointly. LRR better captures the global structure of data, giving a more effective tool for robust subspace segmentation from corrupted data. Both theoretical and experimental results show that LRR is a promising tool for subspace segmentation." ] }
Spatial Subspace Clustering (SpatSC) @cite_2 extended SSC by incorporating a sequential @math neighbour penalty, where @math is a lower triangular matrix with @math on the diagonal and @math on the second lower diagonal. Therefore @math. The aim of this formulation is to force consecutive columns of @math to be similar.
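The difference operator can be sketched as follows; the values -1 and 1 are the standard choice for such a lower-triangular difference matrix and are assumed here, since the source elides them. Column i of Z @ R is then z_{i+1} - z_i, so the penalty vanishes exactly when consecutive columns agree:

```python
import numpy as np

def ordering_matrix(n):
    """Lower-triangular (n, n-1) difference operator with -1 on the diagonal
    and 1 on the second lower diagonal (values assumed), so that column i of
    Z @ R equals z_{i+1} - z_i."""
    R = np.zeros((n, n - 1))
    idx = np.arange(n - 1)
    R[idx, idx] = -1.0
    R[idx + 1, idx] = 1.0
    return R

R = ordering_matrix(4)
Z_smooth = np.tile([[1.0], [2.0], [3.0]], (1, 4))  # 4 identical columns
Z_jump = Z_smooth.copy()
Z_jump[:, 2:] += 1.0                               # a jump after column 1
D_smooth = Z_smooth @ R
D_jump = Z_jump @ R
```

Penalising the columns of Z @ R therefore only charges for changes between consecutive coefficient columns, which is how sequential smoothness is enforced.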
{ "cite_N": [ "@cite_2" ], "mid": [ "2110726819" ], "abstract": [ "A method called spatial subspace clustering (SpatSC) is proposed for the hyperspectral data segmentation problem focusing on the hyperspectral data taken from a drill hole, which can be seen as one-dimensional image data compared with hyperspectral multispectral image data. Addressing this problem has several practical uses, such as improving interpretability of the data, and, especially, obtaining a better understanding of the mineralogy. SpatSC is a combination of subspace learning and the fused least absolute shrinkage and selection operator. As a result, it is able to produce spatially smooth clusters. From this point of view, it can be simply interpreted as a spatial information guided subspace learning algorithm. SpatSC has flexible structures that embrace the cases with and without library of pure spectra. It can be further extended, for example, using different error structures, such as including rank operator. We test this method on both simulated data and real-world hyperspectral data. SpatSC produces stable and continuous segments, which are more interpretable than those obtained from other state-of-the-art subspace learning algorithms." ] }
1504.03711
2952066227
Mobile apps can access a wide variety of secure information, such as contacts and location. However, current mobile platforms include only coarse access control mechanisms to protect such data. In this paper, we introduce interaction-based declassification policies, in which the user's interactions with the app constrain the release of sensitive information. Our policies are defined extensionally, so as to be independent of the app's implementation, based on sequences of security-relevant events that occur in app runs. Policies use LTL formulae to precisely specify which secret inputs, read at which times, may be released. We formalize a semantic security condition, interaction-based noninterference, to define our policies precisely. Finally, we describe a prototype tool that uses symbolic execution to check interaction-based declassification policies for Android, and we show that it enforces policies correctly on a set of apps.
TaintDroid @cite_23 is a run-time information-flow tracking system for Android. It monitors the usage of sensitive information and detects when that information is sent over insecure channels. Unlike the approach proposed here, TaintDroid does not detect implicit flows.
{ "cite_N": [ "@cite_23" ], "mid": [ "1963971515" ], "abstract": [ "Today's smartphone operating systems frequently fail to provide users with adequate control over and visibility into how third-party applications use their privacy-sensitive data. We address these shortcomings with TaintDroid, an efficient, systemwide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid provides real-time analysis by leveraging Android's virtualized execution environment. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, we found 68 instances of misappropriation of users' location and device identification information across 20 applications. Monitoring sensitive data with TaintDroid provides informed use of third-party applications for phone users and valuable input for smartphone security service firms seeking to identify misbehaving applications." ] }
AppIntent @cite_7 uses symbolic execution to derive the context, meaning the inputs and GUI interactions, that causes sensitive information to be released in an Android app. A human analyst examines that context and makes an expert judgment as to whether the release is a security violation. The approach proposed here instead uses human-written LTL formulae to specify whether declassifications are permitted. It is unclear from @cite_7 whether AppIntent detects implicit flows.
{ "cite_N": [ "@cite_7" ], "mid": [ "2085577046" ], "abstract": [ "Android phones often carry personal information, attracting malicious developers to embed code in Android applications to steal sensitive data. With known techniques in the literature, one may easily determine if sensitive data is being transmitted out of an Android phone. However, transmission of sensitive data in itself does not necessarily indicate privacy leakage; a better indicator may be whether the transmission is by user intention or not. When transmission is not intended by the user, it is more likely a privacy leakage. The problem is how to determine if transmission is user intended. As a first solution in this space, we present a new analysis framework called AppIntent. For each data transmission, AppIntent can efficiently provide a sequence of GUI manipulations corresponding to the sequence of events that lead to the data transmission, thus helping an analyst to determine if the data transmission is user intended or not. The basic idea is to use symbolic execution to generate the aforementioned event sequence, but straightforward symbolic execution proves to be too time-consuming to be practical. A major innovation in AppIntent is to leverage the unique Android execution model to reduce the search space without sacrificing code coverage. We also present an evaluation of AppIntent with a set of 750 malicious apps, as well as 1,000 top free apps from Google Play. The results show that AppIntent can effectively help separate the apps that truly leak user privacy from those that do not." ] }
Pegasus @cite_16 combines static analysis, model checking, and run-time monitoring to check whether an app uses API calls and privileges consistently with users' expectations. Those expectations are expressed using LTL formulae, similarly to the approach proposed here. Pegasus synthesizes a kind of automaton called a Permission Event Graph from the app's bytecode and then checks whether that automaton is a model for the formulae. Unlike the approach proposed here, Pegasus does not address information flow.
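As a toy illustration of this style of temporal property over app event sequences (the event names and the specific property below are invented for illustration; real Pegasus properties are LTL formulae checked against Permission Event Graphs):

```python
def violates_no_record_after_stop(trace):
    """Check a finite event trace against the temporal property
    G(stop -> G !record): once 'stop' has occurred, no later 'record'
    event may occur. A tiny stand-in for the LTL checks described above;
    event names are assumed."""
    stopped = False
    for event in trace:
        if event == "stop":
            stopped = True
        elif event == "record" and stopped:
            return True  # property violated
    return False

benign = ["start", "record", "record", "stop"]
malicious = ["start", "record", "stop", "record"]
```

This captures, in miniature, the kind of user-expectation violation Pegasus targets, such as recording audio after the stop button is pressed.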
{ "cite_N": [ "@cite_16" ], "mid": [ "2187373861" ], "abstract": [ "The difference between a malicious and a benign Android application can often be characterised by context and sequence in which certain permissions and APIs are used. We present a new technique for checking temporal properties of the interaction between an application and the Android event system. Our tool can automatically detect sensitive operations being performed without the user’s consent, such as recording audio after the stop button is pressed, or accessing an address book in the background. Our work centres around a new abstraction of Android applications, called a Permission Event Graph, which we construct with static analysis, and query using model checking. We evaluate application-independent properties on 152 malicious and 117 benign applications, and application-specific properties on 8 benign and 9 malicious applications. In both cases, we can detect, or prove the absence of malicious behaviour beyond the reach of existing techniques." ] }
1504.04049
1511263746
We consider distributed optimization where @math nodes in a connected network minimize the sum of their local costs subject to a common constraint set. We propose a distributed projected gradient method where each node, at each iteration @math , performs an update (is active) with probability @math , and stays idle (is inactive) with probability @math . Whenever active, each node performs an update by weight-averaging its solution estimate with the estimates of its active neighbors, taking a negative gradient step with respect to its local cost, and performing a projection onto the constraint set; inactive nodes perform no updates. Assuming that nodes' local costs are strongly convex, with Lipschitz continuous gradients, we show that, as long as activation probability @math grows to one asymptotically, our algorithm converges in the mean square sense (MSS) to the same solution as the standard distributed gradient method, i.e., as if all the nodes were active at all iterations. Moreover, when @math grows to one linearly, with an appropriately set convergence factor, the algorithm has a linear MSS convergence, with practically the same factor as the standard distributed gradient method. Simulations on both synthetic and real world data sets demonstrate that, when compared with the standard distributed gradient method, the proposed algorithm significantly reduces the overall number of per-node communications and per-node gradient evaluations (computational cost) for the same required accuracy.
Distributed methods of this type date back at least to the 1980s, e.g., @cite_52, and have received renewed interest in the past decade, e.g., @cite_18. Reference @cite_18 proposes the distributed (sub)gradient method with a constant step-size and analyzes its performance under time-varying communication networks. Reference @cite_50 considers distributed (sub)gradient methods under random communication networks with failing links and establishes almost sure convergence under a diminishing step-size rule. A major difference of our paper from the above works is that, in @cite_52 @cite_18 @cite_50, only inter-node communications over iterations are ``sparsified,'' while each node performs gradient evaluations at each iteration @math. In @cite_32, the authors propose a gossip-like scheme where, at each @math, only two neighboring nodes in the network wake up and perform weight-averaging (communication between them) and a negative gradient step with respect to their respective local costs, while the remaining nodes stay idle. The key difference with respect to our paper is that, with our method, the number of active nodes over iterations @math is (on average) increasing, while in @cite_32 it remains equal to two for all @math. Consequently, the established convergence properties of the two methods are very different.
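The standard distributed (sub)gradient iteration discussed here interleaves a weight-averaging step with a local gradient step. A minimal sketch for scalar quadratic local costs f_i(x) = 0.5*(x - b_i)^2, whose network-wide minimiser is mean(b); the network, weights, and step-size are illustrative:

```python
import numpy as np

def distributed_gradient(W, b, alpha=0.03, n_iter=1500):
    """Standard distributed gradient method (every node active at every
    iteration) for local costs f_i(x) = 0.5*(x - b_i)^2. Each node
    weight-averages its estimate with its neighbours', then takes a
    negative gradient step on its own local cost."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = W @ x - alpha * (x - b)  # averaging step + local gradient step
    return x

# Metropolis weights on a 4-node ring (doubly stochastic).
W = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]]) / 3.0
b = np.array([0.0, 1.0, 2.0, 3.0])
x = distributed_gradient(W, b)
```

With a constant step-size the iterates reach consensus in a small neighbourhood of the global minimiser mean(b), which is the baseline behaviour the proposed randomized-activation scheme is compared against.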
{ "cite_N": [ "@cite_18", "@cite_32", "@cite_52", "@cite_50" ], "mid": [ "2044212084", "2143649445", "2154834860", "1556217901" ], "abstract": [ "We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.", "We consider a distributed multi-agent network system where the goal is to minimize an objective function that can be written as the sum of component functions, each of which is known (with stochastic errors) to a specific network agent. We propose an asynchronous algorithm that is motivated by a random gossip scheme where each agent has a local Poisson clock. At each tick of its local clock, the agent averages its estimate with a randomly chosen neighbor and adjusts the average using the gradient of its local function that is computed with stochastic errors. We investigate the convergence properties of the algorithm for two different classes of functions: differentiable but not necessarily convex and convex but not necessarily differentiable.", "We present a model for asynchronous distributed computation and then proceed to analyze the convergence of natural asynchronous distributed versions of a large class of deterministic and stochastic gradient-like algorithms. We show that such algorithms retain the desirable convergence properties of their centralized counterparts, provided that the time between consecutive interprocessor communications and the communication delays are not too large.", "We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function, and communicate with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume that links fail according to a given stochastic process. Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we provide convergence results and convergence rate estimates for our subgradient algorithm." ] }
There have been many works where nodes or links in the network are controlled by random variables. References @cite_39 @cite_26 @cite_8 @cite_27 consider distributed algorithms for solving the consensus problem -- finding the average of the nodes' local scalars @math -- while we consider here a more general problem. These consensus algorithms involve only local averaging steps, with no local gradient steps (while we have here both local averaging and local gradient steps). The models of averaging (weight) matrices assumed in @cite_39 @cite_26 @cite_8 @cite_27 are also very different from ours: they all assume random weight matrices with time-invariant distributions, while ours are time-varying. Reference @cite_29 studies diffusion algorithms under changing topologies, as well as data-normalized algorithms, under general, non-Gaussian distributions. Reference @cite_30 proposes a control mechanism for link activations in diffusion algorithms that minimizes the estimation error under given resource constraints. The main differences with respect to our paper are that @cite_30 assumes that local gradients are always incorporated (deterministic step-sizes) and that the link activation probabilities are time-invariant.
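A sketch of the randomized gossip-style averaging these consensus works build on: at each tick one randomly chosen edge wakes up and its two endpoints replace their values by the pairwise average, so the network-wide mean is preserved while disagreement contracts. The topology and schedule below are illustrative:

```python
import numpy as np

def pairwise_gossip(x0, edges, n_rounds=3000, seed=0):
    """Randomized pairwise gossip: at each tick one edge wakes up and its
    two endpoints average their values. The sum (hence the mean) of the
    node values is preserved at every tick, and the values contract
    toward consensus on the average."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_rounds):
        i, j = edges[rng.integers(len(edges))]
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-node ring
x = pairwise_gossip([0.0, 1.0, 2.0, 3.0], edges)
```

This pure-averaging dynamic is the "local averaging only" building block mentioned above; the methods in this paper add a local gradient step on top of it.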
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_8", "@cite_29", "@cite_39", "@cite_27" ], "mid": [ "2102751833", "2151135252", "2161305486", "2142737631", "2145706050", "2111857756" ], "abstract": [ "This paper presents an efficient link probability control strategy for distributed estimation problems over probabilistic diffusion networks where the mean usage of communication resources is restricted. The proposed algorithm controls link probabilities so that the estimation error is minimized under given resource constraints. Simulation results show that the probabilistic diffusion least-mean squares (LMS) algorithm with the proposed probability control not only outperforms those with static probabilities but also reduces the amount of communications among nodes.", "Reaching consensus in a network is an important problem in control, estimation, and resource allocation. While many algorithms focus on computing the exact average of the initial values in the network, in some cases it is more important for nodes to reach a consensus quickly. In a distributed system establishing two-way communication may also be difficult or unreliable. In this paper, the effect of the wireless medium on simple consensus protocol is explored. In a wireless environment, a node's transmission is a broadcast to all nodes which can hear it, and due to signal propagation effects, the neighborhood size may change with time. A class of non-sum preserving algorithms involving unidirectional broadcasting is extended to a time-varying connection model. This algorithm converges almost surely and its expected consensus value is the true average. A simple bound is given on the convergence time.", "Motivated by applications to wireless sensor, peer-to-peer, and ad hoc networks, we study distributed broadcasting algorithms for exchanging information and computing in an arbitrarily connected network of nodes. Specifically, we study a broadcasting-based gossiping algorithm to compute the (possibly weighted) average of the initial measurements of the nodes at every node in the network. We show that the broadcast gossip algorithm converges almost surely to a consensus. We prove that the random consensus value is, in expectation, the average of initial node measurements and that it can be made arbitrarily close to this value in mean squared error sense, under a balanced connectivity model and by trading off convergence speed with accuracy of the computation. We provide theoretical and numerical results on the mean square error performance, on the convergence rate and study the effect of the 'mixing parameter' on the convergence rate of the broadcast gossip algorithm. The results indicate that the mean squared error strictly decreases through iterations until the consensus is achieved. Finally, we assess and compare the communication cost of the broadcast gossip algorithm to achieve a given distance to consensus through theoretical and numerical results.", "Adaptive networks (AN) have been recently proposed to address distributed estimation problems [1]-[4]. Here we extend prior work to changing topologies and data-normalized algorithms. The resulting framework may also treat signals with general distributions, rather than Gaussian, provided that certain data statistical moments are known. A byproduct of this formulation is a probabilistic diffusion adaptive network: a simpler yet robust variant of the standard diffusion algorithm [2].", "In a sensor network, in practice, the communication among sensors is subject to: 1) errors that can cause failures of links among sensors at random times; 2) costs; and 3) constraints, such as power, data rate, or communication, since sensors and networks operate under scarce resources. The paper studies the problem of designing the topology, i.e., assigning the probabilities of reliable communication among sensors (or of link failures) to maximize the rate of convergence of average consensus, when the link communication costs are taken into account, and there is an overall communication budget constraint. We model the network as a Bernoulli random topology and establish necessary and sufficient conditions for mean square sense (mss) and almost sure (a.s.) convergence of average consensus when network links fail. In particular, a necessary and sufficient condition is for the algebraic connectivity of the mean graph topology to be strictly positive. With these results, we show that the topology design with random link failures, link communication costs, and a communication cost constraint is a constrained convex optimization problem that can be efficiently solved for large networks by semidefinite programming techniques. Simulations demonstrate that the optimal design improves significantly the convergence speed of the consensus algorithm and can achieve the performance of a non-random network at a fraction of the communication cost.", "The paper studies the problem of distributed average consensus in sensor networks with quantized data and random link failures. To achieve consensus, dither (small noise) is added to the sensor states before quantization. When the quantizer range is unbounded (countable number of quantizer levels), stochastic approximation shows that consensus is asymptotically achieved with probability one and in mean square to a finite random variable. We show that the mean-squared error (mse) can be made arbitrarily small by tuning the link weight sequence, at a cost of the convergence rate of the algorithm. To study dithered consensus with random links when the range of the quantizer is bounded, we establish uniform boundedness of the sample paths of the unbounded quantizer. This requires characterization of the statistical properties of the supremum taken over the sample paths of the state of the quantizer. This is accomplished by splitting the state vector of the quantizer in two components: one along the consensus subspace and the other along the subspace orthogonal to the consensus subspace. The proofs use maximal inequalities for submartingale and supermartingale sequences. From these, we derive probability bounds on the excursions of the two subsequences, from which probability bounds on the excursions of the quantizer state vector follow. The paper shows how to use these probability bounds to design the quantizer parameters and to explore tradeoffs among the number of quantizer levels, the size of the quantization steps, the desired probability of saturation, and the desired level of accuracy @math away from consensus. Finally, the paper illustrates the quantizer design with a numerical study." ] }
1504.04049
1511263746
We consider distributed optimization where @math nodes in a connected network minimize the sum of their local costs subject to a common constraint set. We propose a distributed projected gradient method where each node, at each iteration @math , performs an update (is active) with probability @math , and stays idle (is inactive) with probability @math . Whenever active, each node performs an update by weight-averaging its solution estimate with the estimates of its active neighbors, taking a negative gradient step with respect to its local cost, and performing a projection onto the constraint set; inactive nodes perform no updates. Assuming that nodes' local costs are strongly convex, with Lipschitz continuous gradients, we show that, as long as activation probability @math grows to one asymptotically, our algorithm converges in the mean square sense (MSS) to the same solution as the standard distributed gradient method, i.e., as if all the nodes were active at all iterations. Moreover, when @math grows to one linearly, with an appropriately set convergence factor, the algorithm has a linear MSS convergence, with practically the same factor as the standard distributed gradient method. Simulations on both synthetic and real world data sets demonstrate that, when compared with the standard distributed gradient method, the proposed algorithm significantly reduces the overall number of per-node communications and per-node gradient evaluations (computational cost) for the same required accuracy.
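As an illustration of the scheme described in the abstract, here is a minimal Python sketch of the randomized distributed projected gradient method: at each iteration every node is active with probability p_k (with p_k growing to one), and active nodes weight-average with their active neighbors, take a local gradient step, and project. The quadratic local costs, ring topology, equal-weight averaging, and the particular schedule p_k = 1 - (1 - p_0)β^k are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def randomized_distributed_projected_gradient(grads, x_init, neighbors, proj,
                                              alpha=0.05, p0=0.3, beta=0.95,
                                              iters=400, seed=0):
    """At iteration k each node is active with probability p_k (p_k -> 1);
    active nodes weight-average with their active neighbors, take a local
    negative gradient step, and project; idle nodes perform no updates."""
    rng = np.random.default_rng(seed)
    x = np.array(x_init, dtype=float)        # one scalar estimate per node
    n = len(x)
    for k in range(iters):
        p_k = 1.0 - (1.0 - p0) * beta**k     # activation probability grows to 1
        active = rng.random(n) < p_k
        x_new = x.copy()
        for i in range(n):
            if not active[i]:
                continue                      # inactive: stay idle this round
            acts = [j for j in neighbors[i] if active[j]]
            w = 1.0 / (len(acts) + 1)         # equal weights over the active neighborhood
            avg = w * x[i] + sum(w * x[j] for j in acts)
            x_new[i] = proj(avg - alpha * grads[i](avg))
        x = x_new
    return x
```

For instance, with local costs f_i(x) = (x - b_i)^2 / 2 on a 4-node ring and projection onto [0, 10], all node estimates settle near the network-wide minimizer, the mean of the b_i, up to the usual O(alpha) bias of constant-step distributed gradient methods.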
References @cite_43 @cite_31 @cite_2 provide a thorough and in-depth analysis of diffusion algorithms under a very general model of asynchrony, where both the combination (weight) matrices and nodes' step-sizes are random. Our work differs from these references in several aspects, which include the following. A major difference is that papers @cite_43 @cite_31 @cite_2 assume that both the step-sizes' and the combination matrices' random processes have constant (time-invariant) first and second moments, and that the two processes are moreover mutually independent. In contrast, both our weight matrices and step-sizes have time-varying distributions. Indeed, the fact that, with our method, the node activation probabilities converge to one (which corresponds to the time-varying first moment of the step-sizes) is critical to establishing our main results (Theorems 2 and 3). Further differences are that papers @cite_43 @cite_31 @cite_2 allow for noisy gradients, their nodes' local cost functions all have the same minimizers, and therein the optimization problem is unconstrained. In contrast, we assume noise-free gradients, different local minimizers, and constrained problems.
{ "cite_N": [ "@cite_43", "@cite_31", "@cite_2" ], "mid": [ "2963395095", "", "2963866791" ], "abstract": [ "In this work and the supporting Parts II and III of this paper, also in the current issue, we provide a rather detailed analysis of the stability and performance of asynchronous strategies for solving distributed optimization and adaptation problems over networks. We examine asynchronous networks that are subject to fairly general sources of uncertainties, such as changing topologies, random link failures, random data arrival times, and agents turning on and off randomly. Under this model, agents in the network may stop updating their solutions or may stop sending or receiving information in a random manner and without coordination with other agents. We establish in Part I conditions on the first and second-order moments of the relevant parameter distributions to ensure mean-square stable behavior. We derive in Part II expressions that reveal how the various parameters of the asynchronous behavior influence network performance. We compare in Part III the performance of asynchronous networks to the performance of both centralized solutions and synchronous networks. One notable conclusion is that the mean-square-error performance of asynchronous networks shows a degradation only in the order of O(ν), where ν is a small step-size parameter, while the convergence rate remains largely unaltered. The results provide a solid justification for the remarkable resilience of cooperative networks in the face of random failures at multiple levels: agents, links, data arrivals, and topology.", "", "In Part II of this paper, also in this issue, we carried out a detailed mean-square-error analysis of the performance of asynchronous adaptation and learning over networks under a fairly general model for asynchronous events including random topologies, random link failures, random data arrival times, and agents turning on and off randomly. 
In this Part III, we compare the performance of synchronous and asynchronous networks. We also compare the performance of decentralized adaptation against centralized stochastic-gradient (batch) solutions. Two interesting conclusions stand out. First, the results establish that the performance of adaptive networks is largely immune to the effect of asynchronous events: the mean and mean-square convergence rates and the asymptotic bias values are not degraded relative to synchronous or centralized implementations. Only the steady-state mean-square-deviation suffers a degradation in the order of ν, which represents the small step-size parameters used for adaptation. Second, the results show that the adaptive distributed network matches the performance of the centralized solution. These conclusions highlight another critical benefit of cooperation by networked agents: cooperation does not only enhance performance in comparison to stand-alone single-agent processing, but it also endows the network with remarkable resilience to various forms of random failure events and is able to deliver performance that is as powerful as batch solutions." ] }
1504.04049
1511263746
We consider distributed optimization where @math nodes in a connected network minimize the sum of their local costs subject to a common constraint set. We propose a distributed projected gradient method where each node, at each iteration @math , performs an update (is active) with probability @math , and stays idle (is inactive) with probability @math . Whenever active, each node performs an update by weight-averaging its solution estimate with the estimates of its active neighbors, taking a negative gradient step with respect to its local cost, and performing a projection onto the constraint set; inactive nodes perform no updates. Assuming that nodes' local costs are strongly convex, with Lipschitz continuous gradients, we show that, as long as activation probability @math grows to one asymptotically, our algorithm converges in the mean square sense (MSS) to the same solution as the standard distributed gradient method, i.e., as if all the nodes were active at all iterations. Moreover, when @math grows to one linearly, with an appropriately set convergence factor, the algorithm has a linear MSS convergence, with practically the same factor as the standard distributed gradient method. Simulations on both synthetic and real world data sets demonstrate that, when compared with the standard distributed gradient method, the proposed algorithm significantly reduces the overall number of per-node communications and per-node gradient evaluations (computational cost) for the same required accuracy.
Our paper is also related to reference @cite_42 , which considers diffusion algorithms with two types of nodes -- informed and uninformed. The informed nodes both: 1) acquire measurements and perform in-network processing (which translates into computing gradients in our scenario); and 2) perform consultation with neighbors (which translates into weight-averaging the estimates across neighborhoods), while the uninformed nodes only perform the latter task. The authors study the effect of the proportion of informed nodes and their distribution in space. A key difference with respect to our work is that the uninformed nodes in @cite_42 still perform weight-averaging, while the idle nodes here perform no processing. Finally, we comment on reference @cite_38 , which introduces an adaptive policy for each node to decide whether or not it will communicate with its neighbors, and demonstrates significant savings in communications with respect to the always-communicating scenario. A major difference of @cite_38 from our paper is that, with @cite_38 , nodes always compute local gradients, i.e., they do not stay idle (in the sense defined here).
{ "cite_N": [ "@cite_38", "@cite_42" ], "mid": [ "2541688248", "2112827165" ], "abstract": [ "Methods for distributed optimization are necessary to solve large-scale problems such as those becoming more common in machine learning. The communication cost associated with transmitting large messages can become a serious performance bottleneck. We propose a consensus-based distributed algorithm to minimize a convex separable objective. Each node holds one component of the objective function, and the nodes alternate between a computation phase, where local gradient steps are performed based on the local objective, and a communication phase, where consensus steps are performed to bring the local states into agreement. The nodes use local decision rules to adaptively determine when communication is not necessary. This results in significantly lower communication costs and allows a user to tradeoff the amount of communication with the accuracy of the final output. Experiments on a cluster using simulated and real datasets illustrate the tradeoff.", "Adaptive networks consist of a collection of agents with adaptation and learning abilities. The agents interact with each other on a local level and diffuse information across the network through their collaboration. In this work, we consider two types of agents: informed agents and uninformed agents. The former receive new data regularly and perform consultation and in-network processing, while the latter do not collect data and only participate in the consultation tasks. We examine the performance of LMS diffusion strategies for distributed estimation over networks as a function of the proportion of informed agents and their distribution in space. The results reveal some interesting trade-offs between convergence rate and mean-square performance. In particular, among other results, it is shown that the mean-square performance of adaptive networks does not necessarily improve with a larger proportion of informed agents. 
Instead, it is established that if the set of informed agents is enlarged, the convergence rate of the network becomes faster albeit at the expense of some deterioration in mean-square performance. The results further establish that uninformed agents play an important role in determining the steady-state performance of the network and that it is preferable to keep some of the highly noisy or highly connected agents uninformed. The arguments reveal an important interplay among three factors: the number and distribution of informed agents in the network, the convergence rate of the learning process, and the estimation accuracy in steady-state. Expressions that quantify these relations are derived, and simulations are included to support the theoretical findings. We illustrate application of the results to two network models, namely, the Erdos-Renyi and scale-free models." ] }
1504.04049
1511263746
We consider distributed optimization where @math nodes in a connected network minimize the sum of their local costs subject to a common constraint set. We propose a distributed projected gradient method where each node, at each iteration @math , performs an update (is active) with probability @math , and stays idle (is inactive) with probability @math . Whenever active, each node performs an update by weight-averaging its solution estimate with the estimates of its active neighbors, taking a negative gradient step with respect to its local cost, and performing a projection onto the constraint set; inactive nodes perform no updates. Assuming that nodes' local costs are strongly convex, with Lipschitz continuous gradients, we show that, as long as activation probability @math grows to one asymptotically, our algorithm converges in the mean square sense (MSS) to the same solution as the standard distributed gradient method, i.e., as if all the nodes were active at all iterations. Moreover, when @math grows to one linearly, with an appropriately set convergence factor, the algorithm has a linear MSS convergence, with practically the same factor as the standard distributed gradient method. Simulations on both synthetic and real world data sets demonstrate that, when compared with the standard distributed gradient method, the proposed algorithm significantly reduces the overall number of per-node communications and per-node gradient evaluations (computational cost) for the same required accuracy.
Variable sample size methods have been studied for a long time. We distinguish two types of methods: the ones that assume unbounded sample sizes (where the cost function is in the form of a mathematical expectation) and the methods with bounded sample sizes (where the cost function is of the form in .) Our work contrasts with both of these threads by considering distributed optimization over an arbitrary connected network, while they consider centralized methods. Unbounded sample sizes have been studied, e.g., in @cite_3 @cite_14 @cite_48 @cite_28 @cite_40 . Reference @cite_3 uses a Bayesian scheme to determine the sample size at each iteration within the trust region framework, and it shows almost sure convergence to a problem solution. Reference @cite_14 shows almost sure convergence as long as the sample size grows sufficiently fast along iterations. In @cite_48 , the variable sample size strategy is obtained as the solution of an associated auxiliary optimization problem. Further careful analyses of increasing sample sizes are given in, e.g., @cite_28 @cite_40 .
{ "cite_N": [ "@cite_14", "@cite_28", "@cite_48", "@cite_3", "@cite_40" ], "mid": [ "2137385155", "2155559091", "2041159855", "2045192889", "2026272724" ], "abstract": [ "In this article we discuss the application of a certain class of Monte Carlo methods to stochastic optimization problems. Particularly, we study variable-sample techniques, in which the objective function is replaced, at each iteration, by a sample average approximation. We first provide general results on the schedule of sample sizes, under which variable-sample methods yield consistent estimators as well as bounds on the estimation error. Because the convergence analysis is performed pathwisely, we are able to obtain our results in a flexible setting, which requires mild assumptions on the distributions and which includes the possibility of using different sampling distributions along the algorithm. We illustrate these ideas by studying a modification of the well-known pure random search method, adapting it to the variable-sample scheme, and show conditions for convergence of the algorithm. Implementation issues are discussed and numerical results are presented to illustrate the ideas.", "The Simulation-Optimization (SO) problem is a constrained optimization problem where the objective function is observed with error, usually through an oracle such as a simulation. Retrospective Approximation (RA) is a general technique that can be used to solve SO problems. In RA, the solution to the SO problem is approached using solutions to a sequence of approximate problems, each of which is generated using a specified sample size and solved to a specified error tolerance. In this paper, our focus is parameter choice in RA algorithms, where the term parameter is broadly interpreted. 
Specifically, we present (i) conditions that guarantee convergence of estimated solutions to the true solution; (ii) convergence properties of the sample-size and error-tolerance sequences that ensure that the sequence of estimated solutions converge to the true solution in an optimal fashion; and (iii) a numerical procedure that efficiently solves the generated approximate problems for one-dimensional SO.", "We consider a class of stochastic nonlinear programs for which an approximation to a locally optimal solution is specified in terms of a fractional reduction of the initial cost error. We show that such an approximate solution can be found by approximately solving a sequence of sample average approximations. The key issue in this approach is the determination of the required sequence of sample average approximations as well as the number of iterations to be carried out on each sample average approximation in this sequence. We show that one can express this requirement as an idealized optimization problem whose cost function is the computing work required to obtain the required error reduction. The specification of this idealized optimization problem requires the exact knowledge of a few problems and algorithm parameters. Since the exact values of these parameters are not known, we use estimates, which can be updated as the computation progresses. We illustrate our approach using two numerical examples from structural engineering design.", "The sample-path method is one of the most important tools in simulation-based optimization. The basic idea of the method is to approximate the expected simulation output by the average of sample observations with a common random number sequence. In this paper, we describe a new variant of Powell’s unconstrained optimization by quadratic approximation (UOBYQA) method, which integrates a Bayesian variable-number sample-path (VNSP) scheme to choose appropriate number of samples at each iteration. 
The statistically accurate scheme determines the number of simulation runs, and guarantees the global convergence of the algorithm. The VNSP scheme saves a significant amount of simulation operations compared to general purpose ‘fixed-number’ sample-path methods. We present numerical results based on the new algorithm.", "The stochastic root-finding problem is that of finding a zero of a vector-valued function known only through a stochastic simulation. The simulation-optimization problem is that of locating a real-valued function's minimum, again with only a stochastic simulation that generates function estimates. Retrospective approximation (RA) is a sample-path technique for solving such problems, where the solution to the underlying problem is approached via solutions to a sequence of approximate deterministic problems, each of which is generated using a specified sample size, and solved to a specified error tolerance. Our primary focus, in this paper, is providing guidance on choosing the sequence of sample sizes and error tolerances in RA algorithms. We first present an overview of the conditions that guarantee the correct convergence of RA's iterates. Then we characterize a class of error-tolerance and sample-size sequences that are superior to others in a certain precisely defined sense. We also identify and recommend members of this class and provide a numerical example illustrating the key results." ] }
1504.04049
1511263746
We consider distributed optimization where @math nodes in a connected network minimize the sum of their local costs subject to a common constraint set. We propose a distributed projected gradient method where each node, at each iteration @math , performs an update (is active) with probability @math , and stays idle (is inactive) with probability @math . Whenever active, each node performs an update by weight-averaging its solution estimate with the estimates of its active neighbors, taking a negative gradient step with respect to its local cost, and performing a projection onto the constraint set; inactive nodes perform no updates. Assuming that nodes' local costs are strongly convex, with Lipschitz continuous gradients, we show that, as long as activation probability @math grows to one asymptotically, our algorithm converges in the mean square sense (MSS) to the same solution as the standard distributed gradient method, i.e., as if all the nodes were active at all iterations. Moreover, when @math grows to one linearly, with an appropriately set convergence factor, the algorithm has a linear MSS convergence, with practically the same factor as the standard distributed gradient method. Simulations on both synthetic and real world data sets demonstrate that, when compared with the standard distributed gradient method, the proposed algorithm significantly reduces the overall number of per-node communications and per-node gradient evaluations (computational cost) for the same required accuracy.
References @cite_19 @cite_5 consider a trust region framework and assume bounded sample sizes, but, differently from our paper and @cite_23 @cite_3 @cite_48 @cite_28 @cite_40 , they allow the sample size both to increase and to decrease at each iteration. There, the sample size at each iteration is chosen so as to balance the decrease of the cost function against the width of an associated confidence interval. Reference @cite_41 proposes a schedule sequence in the monotone line search framework which also allows the sample size to both increase and decrease at each iteration; paper @cite_17 extends the results in @cite_41 to a non-monotone line search.
{ "cite_N": [ "@cite_28", "@cite_48", "@cite_41", "@cite_3", "@cite_19", "@cite_40", "@cite_23", "@cite_5", "@cite_17" ], "mid": [ "2155559091", "2041159855", "2089630895", "2045192889", "1521150840", "2026272724", "1751687266", "2081280387", "2029633462" ], "abstract": [ "The Simulation-Optimization (SO) problem is a constrained optimization problem where the objective function is observed with error, usually through an oracle such as a simulation. Retrospective Approximation (RA) is a general technique that can be used to solve SO problems. In RA, the solution to the SO problem is approached using solutions to a sequence of approximate problems, each of which is generated using a specified sample size and solved to a specified error tolerance. In this paper, our focus is parameter choice in RA algorithms, where the term parameter is broadly interpreted. Specifically, we present (i) conditions that guarantee convergence of estimated solutions to the true solution; (ii) convergence properties of the sample-size and error-tolerance sequences that ensure that the sequence of estimated solutions converge to the true solution in an optimal fashion; and (iii) a numerical procedure that efficiently solves the generated approximate problems for one-dimensional SO.", "We consider a class of stochastic nonlinear programs for which an approximation to a locally optimal solution is specified in terms of a fractional reduction of the initial cost error. We show that such an approximate solution can be found by approximately solving a sequence of sample average approximations. The key issue in this approach is the determination of the required sequence of sample average approximations as well as the number of iterations to be carried out on each sample average approximation in this sequence. We show that one can express this requirement as an idealized optimization problem whose cost function is the computing work required to obtain the required error reduction. 
The specification of this idealized optimization problem requires the exact knowledge of a few problems and algorithm parameters. Since the exact values of these parameters are not known, we use estimates, which can be updated as the computation progresses. We illustrate our approach using two numerical examples from structural engineering design.", "Minimization of unconstrained objective functions in the form of mathematical expectation is considered. The Sample Average Approximation (SAA) method transforms the expectation objective function into a real-valued deterministic function using a large sample and thus deals with deterministic function minimization. The main drawback of this approach is its cost. A large sample of the random variable that defines the expectation must be taken in order to get a reasonably good approximation and thus the sample average approximation method requires a very large number of function evaluations. We present a line search strategy that uses variable sample size and thus makes the process significantly cheaper. Two measures of progress-lack of precision and a decrease of function value are calculated at each iteration. Based on these two measures a new sample size is determined. The rule we present allows us to increase or decrease the sample size at each iteration until we reach some neighborhood of the solution. An additional safeguard check is performed to avoid unproductive sample decrease. Eventually the maximal sample size is reached so that the variable sample size strategy generates a solution of the same quality as the SAA method but with a significantly smaller number of function evaluations. The algorithm is tested on a couple of examples, including the discrete choice problem.", "The sample-path method is one of the most important tools in simulation-based optimization. 
The basic idea of the method is to approximate the expected simulation output by the average of sample observations with a common random number sequence. In this paper, we describe a new variant of Powell’s unconstrained optimization by quadratic approximation (UOBYQA) method, which integrates a Bayesian variable-number sample-path (VNSP) scheme to choose appropriate number of samples at each iteration. The statistically accurate scheme determines the number of simulation runs, and guarantees the global convergence of the algorithm. The VNSP scheme saves a significant amount of simulation operations compared to general purpose ‘fixed-number’ sample-path methods. We present numerical results based on the new algorithm.", "This work is concerned with the study of nonlinear nonconvex stochastic programming, in particular in the context of trust-region approaches. We first explore how to exploit the structure of multistage stochastic nonlinear programs with linear constraints, in the framework of primal-dual interior point methods. We next study consistency of sample average approximations (SAA) for general nonlinear stochastic programs. We also develop a new algorithm to solve the SAA problem, using the statistical inference information to reduce numercial costs, by means of an internal variable sample size strategy. We finally assess the numerical efficiency of the proposed method for the estimation of discrete choice models, more precisely mixed logit models, using our software AMLET, written for this purpose.", "The stochastic root-finding problem is that of finding a zero of a vector-valued function known only through a stochastic simulation. The simulation-optimization problem is that of locating a real-valued function's minimum, again with only a stochastic simulation that generates function estimates. 
Retrospective approximation (RA) is a sample-path technique for solving such problems, where the solution to the underlying problem is approached via solutions to a sequence of approximate deterministic problems, each of which is generated using a specified sample size, and solved to a specified error tolerance. Our primary focus, in this paper, is providing guidance on choosing the sequence of sample sizes and error tolerances in RA algorithms. We first present an overview of the conditions that guarantee the correct convergence of RA's iterates. Then we characterize a class of error-tolerance and sample-size sequences that are superior to others in a certain precisely defined sense. We also identify and recommend members of this class and provide a numerical example illustrating the key results.", "Many structured data-fitting applications require the solution of an optimization problem involving a sum over a potentially large number of measurements. Incremental gradient algorithms offer inexpensive iterations by sampling a subset of the terms in the sum; these methods can make great progress initially, but often slow as they approach a solution. In contrast, full-gradient methods achieve steady convergence at the expense of evaluating the full objective and gradient on each iteration. We explore hybrid methods that exhibit the benefits of both approaches. Rate-of-convergence analysis shows that by controlling the sample size in an incremental-gradient algorithm, it is possible to maintain the steady convergence rates of full-gradient methods. We detail a practical quasi-Newton implementation based on this approach. Numerical experiments illustrate its potential benefits.", "Researchers and analysts are increasingly using mixed logit models for estimating responses to forecast demand and to determine the factors that affect individual choices. 
However the numerical cost associated to their evaluation can be prohibitive, the inherent probability choices being represented by multidimensional integrals. This cost remains high even if Monte Carlo or quasi-Monte Carlo techniques are used to estimate those integrals. This paper describes a new algorithm that uses Monte Carlo approximations in the context of modern trust-region techniques, but also exploits accuracy and bias estimators to considerably increase its computational efficiency. Numerical experiments underline the importance of the choice of an appropriate optimisation technique and indicate that the proposed algorithm allows substantial gains in time while delivering more information to the practitioner. Copyright Springer-Verlag Berlin Heidelberg 2006", "Nonmonotone line search methods for unconstrained minimization with the objective functions in the form of mathematical expectation are considered. The objective function is approximated by the sample average approximation (SAA) with a large sample of fixed size. The nonmonotone line search framework is embedded with a variable sample size strategy such that different sample size at each iteration allow us to reduce the cost of the sample average approximation. The variable sample scheme we consider takes into account the decrease in the approximate objective function and the quality of the approximation of the objective function at each iteration and thus the sample size may increase or decrease at each iteration. Nonmonotonicity of the line search combines well with the variable sample size scheme as it allows more freedom in choosing the search direction and the step size while the sample size is not the maximal one and increases the chances of finding a global solution. Eventually the maximal sample size is used so the variable sample size strategy generates the solution of the same quality as the SAA method but with significantly smaller number of function evaluations. 
Various nonmonotone strategies are compared on a set of test problems." ] }
1504.04049
1511263746
We consider distributed optimization where @math nodes in a connected network minimize the sum of their local costs subject to a common constraint set. We propose a distributed projected gradient method where each node, at each iteration @math , performs an update (is active) with probability @math , and stays idle (is inactive) with probability @math . Whenever active, each node performs an update by weight-averaging its solution estimate with the estimates of its active neighbors, taking a negative gradient step with respect to its local cost, and performing a projection onto the constraint set; inactive nodes perform no updates. Assuming that nodes' local costs are strongly convex, with Lipschitz continuous gradients, we show that, as long as activation probability @math grows to one asymptotically, our algorithm converges in the mean square sense (MSS) to the same solution as the standard distributed gradient method, i.e., as if all the nodes were active at all iterations. Moreover, when @math grows to one linearly, with an appropriately set convergence factor, the algorithm has a linear MSS convergence, with practically the same factor as the standard distributed gradient method. Simulations on both synthetic and real world data sets demonstrate that, when compared with the standard distributed gradient method, the proposed algorithm significantly reduces the overall number of per-node communications and per-node gradient evaluations (computational cost) for the same required accuracy.
Reference @cite_23 is closest to our paper within this thread of works, and our work mainly draws inspiration from it. The authors consider a bounded sample size, as we do here. They consider both deterministic and stochastic sampling and determine how to increase the sample size along iterations such that the algorithm attains (almost) the same rate as if the full sample size were used at all iterations. A major difference of @cite_23 with respect to the current paper is that they are not concerned with the networked scenario, i.e., therein a central entity works with the variable (increasing) sample size. This setup is very different from ours, as it does not involve the problem of propagating information across networked nodes -- a dimension that is central to distributed multi-agent optimization.
{ "cite_N": [ "@cite_23" ], "mid": [ "1751687266" ], "abstract": [ "Many structured data-fitting applications require the solution of an optimization problem involving a sum over a potentially large number of measurements. Incremental gradient algorithms offer inexpensive iterations by sampling a subset of the terms in the sum; these methods can make great progress initially, but often slow as they approach a solution. In contrast, full-gradient methods achieve steady convergence at the expense of evaluating the full objective and gradient on each iteration. We explore hybrid methods that exhibit the benefits of both approaches. Rate-of-convergence analysis shows that by controlling the sample size in an incremental-gradient algorithm, it is possible to maintain the steady convergence rates of full-gradient methods. We detail a practical quasi-Newton implementation based on this approach. Numerical experiments illustrate its potential benefits." ] }
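The controlled sample-size idea in @cite_23 (start with a small subsample of the terms in the sum and grow it geometrically so the iteration eventually behaves like a full-gradient method) can be sketched as follows. The function names, rates, and quadratic test objective are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def variable_sample_gradient(a, x0=0.0, n0=4, growth=2.0, step=0.5, iters=60, rng=None):
    """Minimise f(x) = (1/N) * sum_i 0.5*(x - a_i)^2 using a sample size
    that starts small and grows geometrically until the full sample is used."""
    rng = np.random.default_rng(0) if rng is None else rng
    N = len(a)
    x, n = x0, n0
    for _ in range(iters):
        idx = rng.choice(N, size=min(int(n), N), replace=False)
        grad = np.mean(x - a[idx])        # subsampled gradient estimate
        x -= step * grad
        n = min(n * growth, N)            # geometric sample-size increase
    return x

a = np.arange(10.0)                       # minimiser of f is mean(a) = 4.5
x_star = variable_sample_gradient(a)
```

On this toy problem the iterate converges to mean(a), the minimiser, while the early iterations touch only a few terms of the sum, mirroring the cheap-then-steady behaviour described in the cited abstract.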
1504.03504
2952320381
Retrieving 3D models from 2D human sketches has received considerable attention in the areas of graphics, image retrieval, and computer vision. Almost always in state of the art approaches a large amount of "best views" are computed for 3D models, with the hope that the query sketch matches one of these 2D projections of 3D models using predefined features. We argue that this two stage approach (view selection -- matching) is pragmatic but also problematic because the "best views" are subjective and ambiguous, which makes the matching inputs obscure. This imprecise nature of matching further makes it challenging to choose features manually. Instead of relying on the elusive concept of "best views" and the hand-crafted features, we propose to define our views using a minimalism approach and learn features for both sketches and views. Specifically, we drastically reduce the number of views to only two predefined directions for the whole dataset. Then, we learn two Siamese Convolutional Neural Networks (CNNs), one for the views and one for the sketches. The loss function is defined on the within-domain as well as the cross-domain similarities. Our experiments on three benchmark datasets demonstrate that our method is significantly better than state of the art approaches, and outperforms them in all conventional metrics.
Sketch based shape retrieval has attracted considerable interest for years @cite_1 . In this section we review three key components of sketch based shape retrieval: publicly available datasets, features, and similarity learning.
{ "cite_N": [ "@cite_1" ], "mid": [ "2075597533" ], "abstract": [ "As the number of 3D models available on the Web grows, there is an increasing need for a search engine to help people find them. Unfortunately, traditional text-based search techniques are not always effective for 3D data. In this article, we investigate new shape-based search methods. The key challenges are to develop query methods simple enough for novice users and matching algorithms robust enough to work for arbitrary polygonal models. We present a Web-based search engine system that supports queries based on 3D sketches, 2D sketches, 3D models, and/or text keywords. For the shape-based queries, we have developed a new matching algorithm that uses spherical harmonics to compute discriminating similarity measures without requiring repair of model degeneracies or alignment of orientations. It provides 46 to 245% better performance than related shape-matching methods during precision--recall experiments, and it is fast enough to return query results from a repository of 20,000 models in under a second. The net result is a growing interactive index of 3D models available on the Web (i.e., a Google for 3D models)." ] }
1504.03504
2952320381
Retrieving 3D models from 2D human sketches has received considerable attention in the areas of graphics, image retrieval, and computer vision. Almost always in state of the art approaches a large amount of "best views" are computed for 3D models, with the hope that the query sketch matches one of these 2D projections of 3D models using predefined features. We argue that this two stage approach (view selection -- matching) is pragmatic but also problematic because the "best views" are subjective and ambiguous, which makes the matching inputs obscure. This imprecise nature of matching further makes it challenging to choose features manually. Instead of relying on the elusive concept of "best views" and the hand-crafted features, we propose to define our views using a minimalism approach and learn features for both sketches and views. Specifically, we drastically reduce the number of views to only two predefined directions for the whole dataset. Then, we learn two Siamese Convolutional Neural Networks (CNNs), one for the views and one for the sketches. The loss function is defined on the within-domain as well as the cross-domain similarities. Our experiments on three benchmark datasets demonstrate that our method is significantly better than state of the art approaches, and outperforms them in all conventional metrics.
2D sketches have been adopted as input in many systems @cite_29 . However, large-scale collections have become available only recently. Eitz @cite_16 collected sketches based on the PSB dataset. Li @cite_0 organized the sketches collected by @cite_4 in their SBSR challenge.
{ "cite_N": [ "@cite_0", "@cite_29", "@cite_16", "@cite_4" ], "mid": [ "2005681003", "2101025736", "2127934090", "1972420097" ], "abstract": [ "Sketch-based 3D shape retrieval has become an important research topic in content-based 3D object retrieval. To foster this research area, two Shape Retrieval Contest (SHREC) tracks on this topic have been organized by us in 2012 and 2013 based on a small-scale and large-scale benchmarks, respectively. Six and five (nine in total) distinct sketch-based 3D shape retrieval methods have competed each other in these two contests, respectively. To measure and compare the performance of the top participating and other existing promising sketch-based 3D shape retrieval methods and solicit the state-of-the-art approaches, we perform a more comprehensive comparison of fifteen best (four top participating algorithms and eleven additional state-of-the-art methods) retrieval methods by completing the evaluation of each method on both benchmarks. The benchmarks, results, and evaluation tools for the two tracks are publicly available on our websites [1,2].", "This paper presents a unified framework for 3D shape retrieval. The method supports multimodal queries (2D images, sketches, 3D objects) by introducing a novel view-based approach able to handle the different types of multimedia data. More specifically, a set of 2D images (multi-views) are automatically generated from a 3D object, by taking views from uniformly distributed viewpoints. For each image, a set of 2D rotation-invariant shape descriptors is produced. The global shape similarity between two 3D models is achieved by applying a novel matching scheme, which effectively combines the information extracted from the multi-view representation. The experimental results prove that the proposed method demonstrates superior performance over other well-known state-of-the-art approaches.", "We develop a system for 3D object retrieval based on sketched feature lines as input. 
For objective evaluation, we collect a large number of query sketches from human users that are related to an existing data base of objects. The sketches turn out to be generally quite abstract with large local and global deviations from the original shape. Based on this observation, we decide to use a bag-of-features approach over computer generated line drawings of the objects. We develop a targeted feature transform based on Gabor filters for this system. We can show objectively that this transform is better suited than other approaches from the literature developed for similar tasks. Moreover, we demonstrate how to optimize the parameters of our, as well as other approaches, based on the gathered sketches. In the resulting comparison, our approach is significantly better than any other system described so far.", "Humans have used sketching to depict our visual world since prehistoric times. Even today, sketching is possibly the only rendering technique readily available to all humans. This paper is the first large scale exploration of human sketches. We analyze the distribution of non-expert sketches of everyday objects such as 'teapot' or 'car'. We ask humans to sketch objects of a given category and gather 20,000 unique sketches evenly distributed over 250 object categories. With this dataset we perform a perceptual study and find that humans can correctly identify the object category of a sketch 73% of the time. We compare human performance against computational recognition methods. We develop a bag-of-features sketch representation and use multi-class support vector machines, trained on our sketch dataset, to classify sketches. The resulting recognition method is able to identify unknown sketches with 56% accuracy (chance is 0.4%). Based on the computational model, we demonstrate an interactive sketch recognition system. We release the complete crowd-sourced dataset of sketches to the community." ] }
1504.03504
2952320381
Retrieving 3D models from 2D human sketches has received considerable attention in the areas of graphics, image retrieval, and computer vision. Almost always in state of the art approaches a large amount of "best views" are computed for 3D models, with the hope that the query sketch matches one of these 2D projections of 3D models using predefined features. We argue that this two stage approach (view selection -- matching) is pragmatic but also problematic because the "best views" are subjective and ambiguous, which makes the matching inputs obscure. This imprecise nature of matching further makes it challenging to choose features manually. Instead of relying on the elusive concept of "best views" and the hand-crafted features, we propose to define our views using a minimalism approach and learn features for both sketches and views. Specifically, we drastically reduce the number of views to only two predefined directions for the whole dataset. Then, we learn two Siamese Convolutional Neural Networks (CNNs), one for the views and one for the sketches. The loss function is defined on the within-domain as well as the cross-domain similarities. Our experiments on three benchmark datasets demonstrate that our method is significantly better than state of the art approaches, and outperforms them in all conventional metrics.
Boundary information together with internal structures is used for matching sketches against 2D projections. Therefore, a good representation of line drawing images is a key component of sketch based shape retrieval. Sketch representations such as shape context @cite_11 were proposed for image based shape retrieval. Furuya proposed the BF-DSIFT feature, an extended SIFT feature combined with a bag-of-words method, to represent sketch images @cite_23 . One recent method is the Gabor local line based feature (GALIF) by Mathias et al., which builds on a bank of Gabor filters followed by a bag-of-words method @cite_16 .
{ "cite_N": [ "@cite_16", "@cite_23", "@cite_11" ], "mid": [ "2127934090", "2151245724", "2057175746" ], "abstract": [ "We develop a system for 3D object retrieval based on sketched feature lines as input. For objective evaluation, we collect a large number of query sketches from human users that are related to an existing data base of objects. The sketches turn out to be generally quite abstract with large local and global deviations from the original shape. Based on this observation, we decide to use a bag-of-features approach over computer generated line drawings of the objects. We develop a targeted feature transform based on Gabor filters for this system. We can show objectively that this transform is better suited than other approaches from the literature developed for similar tasks. Moreover, we demonstrate how to optimize the parameters of our, as well as other approaches, based on the gathered sketches. In the resulting comparison, our approach is significantly better than any other system described so far.", "Our previous shape-based 3D model retrieval algorithm compares 3D shapes by using thousands of local visual features per model. A 3D model is rendered into a set of depth images, and from each image, local visual features are extracted by using the Scale Invariant Feature Transform (SIFT) algorithm by Lowe. To efficiently compare among large sets of local features, the algorithm employs bag-of-features approach to integrate the local features into a feature vector per model. The algorithm outperformed other methods for a dataset containing highly articulated yet geometrically simple 3D models. For a dataset containing diverse and detailed models, the method did only as well as other methods. This paper proposes an improved algorithm that performs equal or better than our previous method for both articulated and rigid but geometrically detailed models. 
The proposed algorithm extracts much larger number of local visual features by sampling each depth image densely and randomly. To contain computational cost, the method utilizes GPU for SIFT feature extraction and an efficient randomized decision tree for encoding SIFT features into visual words. Empirical evaluation showed that the proposed method is very fast, yet significantly outperforms our previous method for rigid and geometrically detailed models. For the simple yet articulated models, the performance was virtually unchanged.", "We present a novel approach to measuring similarity between shapes and exploit it for object recognition. In our framework, the measurement of similarity is preceded by: (1) solving for correspondences between points on the two shapes; (2) using the correspondences to estimate an aligning transform. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts, enabling us to solve for correspondences as an optimal assignment problem. Given the point correspondences, we estimate the transformation that best aligns the two shapes; regularized thin-plate splines provide a flexible class of transformation maps for this purpose. The dissimilarity between the two shapes is computed as a sum of matching errors between corresponding points, together with a term measuring the magnitude of the aligning transform. We treat recognition in a nearest-neighbor classification framework as the problem of finding the stored prototype shape that is maximally similar to that in the image. Results are presented for silhouettes, trademarks, handwritten digits, and the COIL data set." ] }
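The GALIF-style pipeline summarised above (a bank of Gabor filters followed by a bag-of-words encoding) can be sketched minimally. The kernel parameterisation and the tiny codebook below are illustrative assumptions, not the settings of @cite_16.

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, wavelength=6.0, sigma=3.0):
    """Real part of a Gabor filter: a Gaussian-windowed cosine grating
    oriented at angle `theta` (all parameter values are illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def bag_of_words(features, codebook):
    """Histogram of nearest-codeword assignments over local features,
    as in a bag-of-features image representation."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)                          # nearest codeword per feature
    return np.bincount(words, minlength=len(codebook))
```

In a full system one would filter the line drawing with kernels at several orientations, extract local response patches, and feed them through `bag_of_words` against a learned codebook; the final descriptor is the normalised histogram.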
1504.03504
2952320381
Retrieving 3D models from 2D human sketches has received considerable attention in the areas of graphics, image retrieval, and computer vision. Almost always in state of the art approaches a large amount of "best views" are computed for 3D models, with the hope that the query sketch matches one of these 2D projections of 3D models using predefined features. We argue that this two stage approach (view selection -- matching) is pragmatic but also problematic because the "best views" are subjective and ambiguous, which makes the matching inputs obscure. This imprecise nature of matching further makes it challenging to choose features manually. Instead of relying on the elusive concept of "best views" and the hand-crafted features, we propose to define our views using a minimalism approach and learn features for both sketches and views. Specifically, we drastically reduce the number of views to only two predefined directions for the whole dataset. Then, we learn two Siamese Convolutional Neural Networks (CNNs), one for the views and one for the sketches. The loss function is defined on the within-domain as well as the cross-domain similarities. Our experiments on three benchmark datasets demonstrate that our method is significantly better than state of the art approaches, and outperforms them in all conventional metrics.
A Siamese network @cite_8 is a neural network architecture consisting of two identical convolutional sub-networks, used in a weakly supervised metric learning setting. The goal of the network is to make the output vectors similar if input pairs are labeled as similar, and dissimilar for input pairs labeled as dissimilar. Recently, Siamese networks have been applied to text classification @cite_25 and speech feature classification @cite_15 .
{ "cite_N": [ "@cite_15", "@cite_25", "@cite_8" ], "mid": [ "2099797668", "1610356397", "" ], "abstract": [ "Speech conveys different yet mixed information ranging from linguistic to speaker-specific components, and each of them should be exclusively used in a specific task. However, it is extremely difficult to extract a specific information component given the fact that nearly all existing acoustic representations carry all types of speech information. Thus, the use of the same representation in both speech and speaker recognition hinders a system from producing better performance due to interference of irrelevant information. In this paper, we present a deep neural architecture to extract speaker-specific information from MFCCs. As a result, a multi-objective loss function is proposed for learning speaker-specific characteristics and regularization via normalizing interference of non-speaker related information and avoiding information loss. With LDC benchmark corpora and a Chinese speech corpus, we demonstrate that a resultant speaker-specific representation is insensitive to text languages spoken and environmental mismatches and hence outperforms MFCCs and other state-of-the-art techniques in speaker recognition. We discuss relevant issues and relate our approach to previous work.", "Traditional text similarity measures consider each term similar only to itself and do not model semantic relatedness of terms. We propose a novel discriminative training method that projects the raw term vectors into a common, low-dimensional vector space. Our approach operates by finding the optimal matrix to minimize the loss of the pre-selected similarity function (e.g., cosine) of the projected vectors, and is able to efficiently handle a large number of training examples in the high-dimensional space. 
Evaluated on two very different tasks, cross-lingual document retrieval and ad relevance measure, our method not only outperforms existing state-of-the-art approaches, but also achieves high accuracy at low dimensions and is thus more efficient.", "" ] }
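The training objective behind such Siamese networks is typically a contrastive loss over paired embeddings. A minimal NumPy sketch of that objective (the margin value and function name are illustrative):

```python
import numpy as np

def contrastive_loss(f1, f2, same, margin=1.0):
    """Contrastive loss over paired embeddings: pull together pairs labelled
    similar (same=1), push dissimilar pairs (same=0) apart up to `margin`."""
    d = np.linalg.norm(f1 - f2, axis=1)                   # pairwise Euclidean distances
    pos = same * d**2                                     # similar pairs: distance penalised
    neg = (1 - same) * np.maximum(0.0, margin - d)**2     # dissimilar: hinge on the margin
    return 0.5 * np.mean(pos + neg)
```

During training, `f1` and `f2` would be the outputs of the two identical sub-networks; here they are plain arrays so the loss itself can be inspected in isolation.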
1504.03196
1519898077
We introduce and analyse a class of fragmentation-coalescence processes defined on finite systems of particles organised into clusters. Coalescent events merge multiple clusters simultaneously to form a single larger cluster, while fragmentation breaks up a cluster into a collection of singletons. Under mild conditions on the coalescence rates, we show that the distribution of cluster sizes becomes non-random in the thermodynamic limit. Moreover, we discover that in the limit of small fragmentation rate these processes exhibit self-organised criticality in the cluster size distribution, with universal exponent 3/2.
The critical behaviour of the stationary cluster size distribution, found in Theorem , has also been seen in a different form of fragmentation-coalescence process studied by Bressaud and Fournier @cite_2 . This was a mean-field approximation to a one-dimensional forest-fire model on @math . Edges (each of weight one) are glued together if the site between them is occupied, which happens at rate 1, and clusters of @math edges have all sites removed at rate @math . A mean-field approximation to this model, ignoring correlations, follows. Then, they found that the stationary cluster size distribution converges weakly to @math as @math (the fragmentation rate converges to zero), which matches the distribution in Theorem in the case where @math .
{ "cite_N": [ "@cite_2" ], "mid": [ "1585432259" ], "abstract": [ "We report a remarkable universality in the patterns of violence arising in three high-profile ongoing wars, and in global terrorism. Our results suggest that these quite different conflict arenas currently feature a common type of enemy, i.e. the various insurgent forces are beginning to operate in a similar way regardless of their underlying ideologies, motivations and the terrain in which they operate. We provide a microscopic theory to explain our main observations. This theory treats the insurgent force as a generic, self-organizing system which is dynamically evolving through the continual coalescence and fragmentation of its constituent groups." ] }
1504.03196
1519898077
We introduce and analyse a class of fragmentation-coalescence processes defined on finite systems of particles organised into clusters. Coalescent events merge multiple clusters simultaneously to form a single larger cluster, while fragmentation breaks up a cluster into a collection of singletons. Under mild conditions on the coalescence rates, we show that the distribution of cluster sizes becomes non-random in the thermodynamic limit. Moreover, we discover that in the limit of small fragmentation rate these processes exhibit self-organised criticality in the cluster size distribution, with universal exponent 3/2.
Finally we mention an important separate class of fragmentation-coalescence processes. The models discussed so far all concern large but finite particle systems, but it is possible to define (non-size-biased) fragmentation and coalescence processes on the partitions of @math , as introduced in @cite_3 . The class of processes we work with here can be seen as a natural finite counterpart to these models.
{ "cite_N": [ "@cite_3" ], "mid": [ "2147092133" ], "abstract": [ "We define and study a family of Markov processes with state space the compact set of all partitions of @math that we call exchangeable fragmentation-coalescence processes. They can be viewed as a combination of homogeneous fragmentation as defined by Bertoin and of homogenous coalescence as defined by Pitman and Schweinsberg or Mohle and Sagitov. We show that they admit a unique invariant probability measure and we study some properties of their paths and of their equilibrium measure." ] }
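A finite fragmentation-coalescence chain of the kind introduced above can be simulated directly. This discrete-event sketch uses pairwise merging and shattering into singletons as a simplification of the general multiple-merger dynamics; all rates and names are illustrative, not the paper's exact model.

```python
import random

def simulate(n=50, frag_rate=0.1, steps=500, seed=1):
    """Toy finite fragmentation-coalescence chain: pairs of clusters merge
    (rate 1 per pair) and whole clusters shatter into singletons
    (rate frag_rate per cluster). Returns the final sorted cluster sizes."""
    rng = random.Random(seed)
    clusters = [1] * n                          # start from all singletons
    for _ in range(steps):
        k = len(clusters)
        pair_rate = k * (k - 1) / 2             # total coalescence rate
        shatter_rate = frag_rate * k            # total fragmentation rate
        if pair_rate > 0 and rng.random() < pair_rate / (pair_rate + shatter_rate):
            i, j = rng.sample(range(k), 2)      # merge two clusters into one
            merged = clusters[i] + clusters[j]
            clusters = [c for t, c in enumerate(clusters) if t not in (i, j)]
            clusters.append(merged)
        else:
            i = rng.randrange(k)                # shatter one cluster into singletons
            clusters += [1] * clusters.pop(i)
    return sorted(clusters)
```

With small `frag_rate` the chain spends most of its time near a few large clusters punctuated by total shatterings, which is the regime in which the cited work finds the 3/2 critical exponent.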
1504.03287
2952886378
Although the security properties of 3G and 4G mobile networks have significantly improved by comparison with 2G (GSM), significant shortcomings remain with respect to user privacy. A number of possible modifications to 2G, 3G and 4G protocols have been proposed designed to provide greater user privacy; however, they all require significant modifications to existing deployed infrastructures, which are almost certainly impractical to achieve in practice. In this article we propose an approach which does not require any changes to the existing deployed network infrastructures or mobile devices, but offers improved user identity protection over the air interface. The proposed scheme makes use of multiple IMSIs for an individual USIM to offer a degree of pseudonymity for a user. The only changes required are to the operation of the authentication centre in the home network and to the USIM, and the scheme could be deployed immediately since it is completely transparent to the existing mobile telephony infrastructure. We present two different approaches to the use and management of multiple IMSIs.
To the best of the authors' knowledge, no user privacy enhancing scheme for mobile telephony has previously been proposed that does not require changes to the existing networks. While other authors observe that significant changes to widely deployed infrastructure are unlikely to be feasible @cite_6 @cite_12 , realistic and practical proposals have not been made. @cite_6 have proposed a scheme to improve user identity confidentiality in the LTE network. Their scheme involves significant changes to the air interface protocol. They propose the use of a frequently changing dynamic mobile subscriber identity (DMSI) instead of the IMSI across the air interface. The DMSIs are managed by the home network and the USIM. However, the use of the DMSI imposes changes in the protocol messages, mobile equipment, and the serving network. Køien @cite_12 has recently proposed a privacy enhanced mutual authentication scheme for LTE. Although the author claims to use existing signalling mechanisms, the author introduces identity based encryption to encrypt the IMSI when sent across the air interface.
{ "cite_N": [ "@cite_12", "@cite_6" ], "mid": [ "2001135236", "2112207969" ], "abstract": [ "In this paper we propose a way to enhance the identity privacy in LTE LTE-Advanced systems. This is achieved while minimizing the impact on the existing E-UTRAN system. This is important since proposals to modify a widely deployed infrastructure must be cost effective, both in terms of design changes and in terms of deployment cost. In our proposal, the user equipment (UE) identifies itself with a dummy identity, consisting only of the mobile nation code and the mobile network code. We use the existing signalling mechanisms in a novel way to request a special encrypted identity information element. This element is protected using identity-based encryption (IBE), with the home network (HPLMN) as the private key generator (PKG) and the visited network (VPLMN) and the private key owner. This allows the UE to protect the identity (IMSI) from external parties. To avoid tracking the “encrypted IMSI” also include a random element. We use this as an opportunity to let the UE include as subscriber-side random challenge to the network. The challenge will be bounded to the EPS authentication vector (EPS AV) and will allow use to construct an online 3-way security context. To complete our proposal we also strengthen the requirements on the use of the temporary identifier (M-TMSI).", "Identity privacy is a security issue that is crucial for the users of a cellular network. Knowledge of the permanent identity of a user may allow an adversary to track and amass comprehensive profiles about individuals. Such profiling may expose an individual to various kind of unanticipated risks, and above all may deprive an individual of his right to privacy. With the introduction of sensitive services like online banking, shopping, etc. through cellular phones, identity privacy has now become a bigger security issue. 
In GSM and UMTS, the problem of user identity privacy vulnerability is proven to exist. In both these systems, there are situations where the permanent identity of a subscriber may get compromised. Long Term Evolution (LTE), which evolved from GSM and UMTS, is proposed by 3GPP for inclusion into the fourth generation of cellular networks. Although security of LTE has evolved from the security of GSM and UMTS, due to different architectural and business requirements of fourth generation systems, LTE security is substantially different and improved compared to its predecessors. However, the issue of identity privacy vulnerability continue to exist in LTE. In this paper, we discuss how the security architecture of LTE deals with identity privacy. We also discuss a possible solution that may be utilised to overcome the problem of user identity privacy in LTE." ] }
1504.03287
2952886378
Although the security properties of 3G and 4G mobile networks have significantly improved by comparison with 2G (GSM), significant shortcomings remain with respect to user privacy. A number of possible modifications to 2G, 3G and 4G protocols have been proposed designed to provide greater user privacy; however, they all require significant modifications to existing deployed infrastructures, which are almost certainly impractical to achieve in practice. In this article we propose an approach which does not require any changes to the existing deployed network infrastructures or mobile devices, but offers improved user identity protection over the air interface. The proposed scheme makes use of multiple IMSIs for an individual USIM to offer a degree of pseudonymity for a user. The only changes required are to the operation of the authentication centre in the home network and to the USIM, and the scheme could be deployed immediately since it is completely transparent to the existing mobile telephony infrastructure. We present two different approaches to the use and management of multiple IMSIs.
Dupré @cite_25 presents a process to control a subscriber identity module (SIM) for mobile phone systems. He provides generic guidance regarding the transmission of control information from the network to the SIM. The schemes described in this paper extend Dupré's idea in a more concrete way.
{ "cite_N": [ "@cite_25" ], "mid": [ "344917603" ], "abstract": [ "A process to control a subscriber identity module (SIM) in mobile phone systems. The process consists of the mobile phone network sending one or more specific control values to the subscriber identity module that initiate specific actions or procedures within the subscriber identity module. Certain random values sent by the mobile phone network to the subscriber identity module for regular authentication purposes are used as control values (Control RANDs)." ] }
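The multiple-IMSI idea above reduces to two pieces of bookkeeping: the USIM rotates through a pool of IMSIs so no single long-term identifier is repeatedly exposed over the air, and the home network's authentication centre maps every allocated IMSI back to one subscriber record. A toy sketch (class and method names are invented for illustration and are not from any 3GPP specification):

```python
class MultiImsiUsim:
    """Toy USIM holding several IMSIs for one subscription and rotating
    through them, one per attach, as a simple pseudonymity policy."""
    def __init__(self, imsis):
        self.imsis = list(imsis)
        self.current = 0

    def attach_identity(self):
        imsi = self.imsis[self.current]
        self.current = (self.current + 1) % len(self.imsis)  # rotate after use
        return imsi

class HomeAuc:
    """Toy home-network authentication centre: resolves every allocated
    IMSI back to the single underlying subscriber record."""
    def __init__(self):
        self.imsi_to_subscriber = {}

    def allocate(self, subscriber, imsis):
        for imsi in imsis:
            self.imsi_to_subscriber[imsi] = subscriber

    def resolve(self, imsi):
        return self.imsi_to_subscriber[imsi]
```

The key property, matching the paper's claim of transparency, is that only these two endpoints change: a serving network sees each IMSI as an ordinary subscriber identity.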
1504.03213
2952463494
Mobile network operators are facing the difficult task of significantly increasing capacity to meet projected demand while keeping CAPEX and OPEX down. We argue that infrastructure sharing is a key consideration in operators' planning of the evolution of their networks, and that such planning can be viewed as a stage in the cognitive cycle. In this paper, we present a framework to model this planning process while taking into account both the ability to share resources and the constraints imposed by competition regulation (the latter quantified using the Herfindahl index). Using real-world demand and deployment data, we find that the ability to share infrastructure essentially moves capacity from rural, sparsely populated areas (where some of the current infrastructure can be decommissioned) to urban ones (where most of the next-generation base stations would be deployed), with significant increases in resource efficiency. Tight competition regulation somewhat limits the ability to share but does not entirely jeopardize those gains, while having the secondary effect of encouraging the wider deployment of next-generation technologies.
The classic network planning problem usually considers a single operator that aims at minimizing operational costs for a specific technology (e.g. 3G, LTE) while maintaining acceptable user satisfaction both in terms of coverage and capacity. The research following this line is vast and includes various aspects. For example, in @cite_28 @cite_25 @cite_14 @cite_13 @cite_26 @cite_2 @cite_1 the authors study the optimization of base station location in an area of interest. Some of these works @cite_28 @cite_25 @cite_14 @cite_13 deal with 3G systems and are based on meta-heuristics aiming at minimizing the number of base stations to be deployed. Other more recent works @cite_26 @cite_2 @cite_1 have focused on the same objective using similar approaches but on LTE networks. The work in @cite_2 in particular uses a model based on stochastic geometry, and the coverage probability as the metric to optimize. The problem addressed in our paper is fundamentally different from the ones addressed by the aforementioned works, since it includes sharing of already existing infrastructure by more than one operator and considers the impact of competition regulation, modeled similarly to that recently imposed by the Irish regulator @cite_11 , on network planning.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_28", "@cite_1", "@cite_2", "@cite_13", "@cite_25", "@cite_11" ], "mid": [ "2125725268", "", "2123489071", "2061351551", "2167353407", "", "2132228824", "" ], "abstract": [ "Universal mobile telecommunication system (UMTS) networks should be deployed according to cost-effective strategies that optimize a cost objective and satisfy target quality-of-service (QoS) requirements. In this paper, we propose novel algorithms for joint uplink downlink UMTS radio planning with the objective of minimizing total power consumption in the network. Specifically, we define two components of the radio planning problem: 1) continuous-based site placement and 2) integer-based site selection. In the site-placement problem, our goal is to find the optimal locations of UMTS base stations (BSs) in a certain geographic area with a given user distribution to minimize the total power expenditure such that a satisfactory level of downlink and uplink signal-to-interference ratio (SIR) is maintained with bounded outage constraints. We model the problem as a constrained optimization problem with SIR-based uplink and downlink power control scheme. An algorithm is proposed and implemented using pattern search techniques for derivative-free optimization with augmented Lagrange multiplier estimates to support general constraints. In the site-selection problem, we aim to select the minimum set of BSs from a fixed set of candidate sites that satisfies quality and outage constraints. We develop an efficient elimination algorithm by proposing a method for classifying BSs that are critical for network coverage and QoS. Finally, the problem is reformulated to take care of location constraints whereby the placement of BSs in a subset of the deployment area is not permitted due to, e.g., private property limitations or electromagnetic radiation constraints. Experimental results and optimal tradeoff curves are presented and analyzed for various scenarios.", "", "Classical coverage models, adopted for second-generation cellular systems, are not suited for planning Universal Mobile Telecommunication System (UMTS) base station (BS) location because they are only based on signal predictions and do not consider the traffic distribution, the signal quality requirements, and the power control (PC) mechanism. We propose discrete optimization models and algorithms aimed at supporting the decisions in the process of planning where to locate new BSs. These models consider the signal-to-interference ratio as quality measure and capture at different levels of detail the signal quality requirements and the specific PC mechanism of the wideband CDMA air interface. Given that these UMTS BS location models are nonpolynomial (NP)-hard, we propose two randomized greedy procedures and a tabu search algorithm for the uplink (mobile to BS) direction which is the most stringent one from the traffic point of view in the presence of balanced connections such as voice calls. The different models, which take into account installation costs, signal quality and traffic coverage, and the corresponding algorithms, are compared on families of small to large-size instances generated by using classical propagation models.", "Base station (BS) deployment in cellular networks is one of the fundamental problems in network design. This paper proposes a novel method for the cell planning problem for fourth-generation (4G) cellular networks using metaheuristic algorithms. In this approach, we aim to satisfy both cell coverage and capacity constraints simultaneously by formulating an optimization problem that captures practical planning aspects. The starting point of the planning process is defined through a dimensioning exercise that captures both coverage and capacity constraints. Afterward, we implement a metaheuristic algorithm based on swarm intelligence (e.g., particle swarm optimization or the recently proposed gray-wolf optimizer) to find suboptimal BS locations that satisfy both problem constraints in the area of interest, which can be divided into several subareas with different spatial user densities. Subsequently, an iterative approach is executed to eliminate eventual redundant BSs. We also perform Monte Carlo simulations to study the performance of the proposed scheme and compute the average number of users in outage. Next, the problems of green planning with regard to temporal traffic variation and planning with location constraints due to tight limits on electromagnetic radiations are addressed, using the proposed method. Finally, in our simulation results, we apply our proposed approach for different scenarios with different subareas and user distributions and show that the desired network quality-of-service (QoS) targets are always reached, even for large-scale problems.", "The spatial structure of base stations (BSs) in cellular networks plays a key role in evaluating the downlink performance. In this paper, different spatial stochastic models (the Poisson point process (PPP), the Poisson hard-core process (PHCP), the Strauss process (SP), and the perturbed triangular lattice) are used to model the structure by fitting them to the locations of BSs in real cellular networks obtained from a public database. We provide two general approaches for fitting. One is fitting by the method of maximum pseudolikelihood. As for the fitted models, it is not sufficient to distinguish them conclusively by some classical statistics. We propose the coverage probability as the criterion for the goodness-of-fit. In terms of coverage, the SP provides a better fit than the PPP and the PHCP. The other approach is fitting by the method of minimum contrast that minimizes the average squared error of the coverage probability. This way, fitted models are obtained whose coverage performance matches that of the given data set very accurately. Furthermore, we introduce a novel metric, the deployment gain, and we demonstrate how it can be used to estimate the coverage performance and average rate achieved by a data set.", "", "The cell planning problem with capacity expansion is examined in wireless communications. The problem decides the location and capacity of each new base station to cover expanded and increased traffic demand. The objective is to minimize the cost of new base stations. The coverage by the new and existing base stations is constrained to satisfy a proper portion of traffic demands. The received signal power at the base station also has to meet the receiver sensitivity. The cell planning is formulated as an integer linear programming problem and solved by a tabu search algorithm. In the tabu search intensification by add and drop move is implemented by short-term memory embodied by two tabu lists. Diversification is designed to investigate proper capacities of new base stations and to restart the tabu search from new base station locations. Computational results show that the proposed tabu search is highly effective. A 10 cost reduction is obtained by the diversification strategies. The gap from the optimal solutions is approximately 1 spl sim 5 in problems that can be handled in appropriate time limits. The proposed tabu search also outperforms the parallel genetic algorithm. The cost reduction by the tabu search approaches 10 spl sim 20 in problems: with 2500 traffic demand areas (TDAs) in code division multiple access (CDMA).", "" ] }
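The site-selection problem described in the abstracts above — choosing a minimum set of base stations from candidate sites subject to coverage constraints — is NP-hard, which is why the cited works resort to tabu search and swarm meta-heuristics. As a minimal, hypothetical sketch of the underlying combinatorics (not any of the cited algorithms, and omitting the SIR, power-control, and cost constraints they handle), a greedy set-cover heuristic looks like:

```python
# Greedy set-cover heuristic for base-station site selection:
# repeatedly pick the candidate site that covers the most
# still-uncovered demand points. Illustrative sketch only; the cited
# works use tabu search / swarm meta-heuristics with SIR and power
# constraints that this sketch does not model.

def greedy_site_selection(coverage):
    """coverage: dict mapping site id -> set of demand points it covers.
    Returns a list of selected sites covering every coverable point."""
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        # site with the largest marginal coverage gain
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        selected.append(best)
        uncovered -= coverage[best]
    return selected

# Toy instance: 4 candidate sites, 6 demand points
sites = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}}
print(greedy_site_selection(sites))  # → ['A', 'C']
```

The greedy choice gives the classic logarithmic approximation guarantee for set cover, which is why it is a common baseline before heavier meta-heuristics.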
1504.03213
2952463494
Mobile network operators are facing the difficult task of significantly increasing capacity to meet projected demand while keeping CAPEX and OPEX down. We argue that infrastructure sharing is a key consideration in operators' planning of the evolution of their networks, and that such planning can be viewed as a stage in the cognitive cycle. In this paper, we present a framework to model this planning process while taking into account both the ability to share resources and the constraints imposed by competition regulation (the latter quantified using the Herfindahl index). Using real-world demand and deployment data, we find that the ability to share infrastructure essentially moves capacity from rural, sparsely populated areas (where some of the current infrastructure can be decommissioned) to urban ones (where most of the next-generation base stations would be deployed), with significant increases in resource efficiency. Tight competition regulation somewhat limits the ability to share but does not entirely jeopardize those gains, while having the secondary effect of encouraging the wider deployment of next-generation technologies.
Resource sharing in inter-operator cellular networks is in fact another important aspect of our work, and it has been studied in @cite_22 @cite_19 @cite_30 @cite_23 . In @cite_22 the authors analyze sharing options that are feasible in the near term in LTE, using co-located and non-co-located base stations. The authors of @cite_19 assess the benefit of sharing both infrastructure and spectrum, using real base station deployment data. In @cite_30 the authors study sharing opportunities between two operators by looking at spatial variation in demand peaks. All the results obtained in these works suggest that resource sharing, whether of spectrum or infrastructure, increases network capacity and the ability to satisfy users' requirements. However, none of them addresses the problem of how to efficiently plan a shared network composed of resources already deployed by existing operators. In our previous work @cite_23 we investigated the gains obtained by combining existing cellular networks, considering the coverage redundancy in real deployments in Poland; unlike this paper, however, our earlier work addresses neither capacity requirements nor regulatory constraints.
{ "cite_N": [ "@cite_30", "@cite_19", "@cite_22", "@cite_23" ], "mid": [ "2038000954", "2949778314", "2055515382", "" ], "abstract": [ "Network sharing is often hailed as a promising and cost-effective way to tackle the ever-increasing load of cellular networks. However, its actual effectiveness strongly depends on the correlation between the networks being joined -- intuitively, there is no benefit in joining two networks with exactly the same load and exactly the same deployment. In this paper, we analyse the deployment and traffic traces of two Irish operators to (i) study their correlation in space and time, and (ii) assess the potential benefit brought by network sharing. Through our analysis, we are able to show that network sharing is remarkably effective in making the load more regular over space, improving the operations and performance of cellular networks.", "As cellular networks are turning into a platform for ubiquitous data access, cellular operators are facing a severe data capacity crisis due to the exponential growth of traffic generated by mobile users. In this work, we investigate the benefits of sharing infrastructure and spectrum among two cellular operators. Specifically, we provide a multi-cell analytical model using stochastic geometry to identify the performance gain under different sharing strategies, which gives tractable and accurate results. To validate the performance using a realistic setting, we conduct extensive simulations for a multi-cell OFDMA system using real base station locations. Both analytical and simulation results show that even a simple cooperation strategy between two similar operators, where they share spectrum and base stations, roughly quadruples capacity as compared to the capacity of a single operator. This is equivalent to doubling the capacity per customer, providing a strong incentive for operators to cooperate, if not actually merge.", "Resource sharing among mobile network operators is a promising way to tackle growing data demand by increasing capacity and reducing costs of network infrastructure deployment and operation. In this work, we evaluate sharing options that range from simple approaches that are feasible in the near-term on traditional infrastructure to complex methods that require specialized virtualized infrastructure. We build a simulation testbed supporting two geographically overlapped 4G LTE macro cellular networks and model the sharing architecture process between the network operators. We compare Capacity Sharing (CS) and Spectrum Sharing (SS) on traditional infrastructure and Virtualized Spectrum Sharing (VSS) and Virtualized PRB Sharing (VPS) on virtualized infrastructure under light, moderate and heavy user loading scenarios in collocated and noncollocated E-UTRAN deployment topologies. We also study these sharing options in conservative and aggressive sharing participation modes. Based on simulation results, we conclude that CS, a generalization of traditional roaming, is the best performing and simplest option, SS is least effective and that VSS and VPS perform better than spectrum sharing with added complexity.", "" ] }
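The abstract above quantifies competition regulation with the Herfindahl index, i.e. the sum of squared market shares. The computation itself is a one-liner; note that how the paper actually derives the shares (e.g. per-area infrastructure or capacity shares) is not reproduced here, so the input values below are purely illustrative:

```python
def herfindahl(shares):
    """Herfindahl index of a market, with shares given as fractions
    summing to 1. Ranges from 1/n (n equal-sized firms) up to 1
    (monopoly); regulators read higher values as more concentration."""
    return sum(s * s for s in shares)

# Four equally sized operators: 4 * 0.25^2
print(herfindahl([0.25, 0.25, 0.25, 0.25]))  # → 0.25
# A more concentrated market after consolidation or sharing
print(round(herfindahl([0.6, 0.2, 0.2]), 6))  # → 0.44
```

A regulatory cap on the index thus translates directly into a constraint on how much infrastructure any one operator (or sharing coalition) may control in a given area.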
1504.03213
2952463494
Mobile network operators are facing the difficult task of significantly increasing capacity to meet projected demand while keeping CAPEX and OPEX down. We argue that infrastructure sharing is a key consideration in operators' planning of the evolution of their networks, and that such planning can be viewed as a stage in the cognitive cycle. In this paper, we present a framework to model this planning process while taking into account both the ability to share resources and the constraints imposed by competition regulation (the latter quantified using the Herfindahl index). Using real-world demand and deployment data, we find that the ability to share infrastructure essentially moves capacity from rural, sparsely populated areas (where some of the current infrastructure can be decommissioned) to urban ones (where most of the next-generation base stations would be deployed), with significant increases in resource efficiency. Tight competition regulation somewhat limits the ability to share but does not entirely jeopardize those gains, while having the secondary effect of encouraging the wider deployment of next-generation technologies.
Works such as @cite_9 focus on the economic aspects of resource sharing from a network virtualization perspective, describing the incentives operators have to pool their resources. From an implementation perspective, the analysis of cooperative sharing arrangements presented in @cite_5 highlights the diversity of approaches currently used in existing networks, along with their successes and failures. It points to a process of learning within the industry as to which sharing modes allow both competition and cooperative sharing to thrive. A study of Pakistan's experience of network sharing indicates the varying economic gains made in a still-developing market by adopting different sharing strategies @cite_20 .
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_20" ], "mid": [ "2276510973", "", "2149391228" ], "abstract": [ "This paper address issues about cooperation among and competition between mobile network operators. The starting point is to examine why and how operators share infrastructure for mobile communication services, so called network sharing. The paper analyzes drivers, benefits and obstacles of network cooperation. We also analyze how roles and responsibilities are distributed for the network related functions while concurrently operators compete for customers and have separate functionality for service provisioning, marketing, customer relation management, charging and billing. Next, we analyze how network sharing as such and strategies for network sharing have changed in Sweden from the year 2000 when the 3G licenses were awarded and up to the year 2010. Moreover, network sharing in Sweden is compared with India where the market situation is different, as the number of operators is four times more and the cooperation is organized in another way, with separate tower companies, which provides base stations sites where operators are tenants. Finally, we compare the network sharing cases with how mobile operators organize cooperation for mobile payments services. From our empirical data we can identify four different types of co-opetition among mobile operators. 1. A co-operative spirit with focus on working practices and or principles that will facilitate the common use of resources or solutions. 2. Infrastructure cooperation through a third party, e.g. a tower company or a SMS aggregator with the main objective to reduce costs or to provide a common solution. The operators have agreements with a third party but not with each other. 3. Infrastructure cooperation through a joint venture that is responsible for network deployment and operation. The driver is to achieve cost-savings. The operators have their own service provisioning, billing, customer relations management and compete for end-users. 4. Service and infrastructure cooperation through a joint venture that is fully responsible for providing the end-user service, in our case mobile payments. The main driver is to offer a payment solution common for all operators in order to complement or compete with solutions provided by banks or payment service providers.", "", "The cellular mobile industry in Pakistan has shown an unprecedented growth since the promulgation of Pakistan Telecommunication (Reorganization) Act of 1996. Over 90 million cellular mobile users and penetration grew to 55.6 and 4.8 million landlines connections provide a teledensity of 58.8 to the nation. The mobile networks provide coverage to over 90 percent of the population. During 2007–08 mobiles traffic exceeded 42 billion minutes while ARPU decreased to US @math 3.12 billion while the share of telecommunication sector in GDP was 2.0 . Telecom companies invested over US @math 2 billion during 2007–08 for expansion of its CMPak networks. The mobile sector paid over a billion dollars in taxes to the National Exchequer during the year 2007–08. The telecom sector received above US$ 1.438 billion FDI, i.e., 28 of the total FDI and helped create over one million jobs since the deregulation of the telecom sector began. The competitive pressures and decline in ARPU has increased the need for improving technical as well as economic efficiencies. Our analyses indicate there are serious economic efficiencies embedded in infrastructure sharing paradigm for mobile operators. Only the passive sharing of additional sites can yield CAPEX savings of over 5000 million US dollars and OPEX savings of these sites can yield another billion US dollars every year. It is thus concluded that the growing business model of decoupling the revenues from that of mobile traffic warrant serious consideration." ] }
1504.03410
1939575207
Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. For most existing hashing methods, an image is first encoded as a vector of hand-engineering visual features, followed by another separate projection or quantization step that generates binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hashing codes. In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed deep architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers to produce the effective intermediate image features; 2) a divide-and-encode module to divide the intermediate image features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to characterize that one image is more similar to the second image than to the third one. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over other state-of-the-art supervised or unsupervised hashing methods.
Unsupervised methods use only unlabeled training data to learn hash functions that encode input data points into binary codes. Notable examples in this category include Kernelized Locality-Sensitive Hashing @cite_13 , Semantic Hashing @cite_22 , graph-based hashing methods @cite_20 @cite_7 , and Iterative Quantization @cite_17 .
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "205159212", "2251864938", "2171790913", "", "2084363474" ], "abstract": [ "A dental model trimmer having an easily replaceable abrasive surfaced member. The abrasive surfaced member is contained within a housing and is releasably coupled onto a back plate assembly which is driven by a drive motor. The housing includes a releasably coupled cover plate providing access to the abrasive surfaced member. An opening formed in the cover plate exposes a portion of the abrasive surface so that a dental model workpiece can be inserted into the opening against the abrasive surface to permit work on the dental model workpiece. A tilting work table beneath the opening supports the workpiece during the operation. A stream of water is directed through the front cover onto the abrasive surface and is redirected against this surface by means of baffles positioned inside the cover plate. The opening includes a beveled boundary and an inwardly directed lip permitting angular manipulation of the workpiece, better visibility of the workpiece and maximum safety.", "Hashing is becoming increasingly popular for efficient nearest neighbor search in massive databases. However, learning short codes that yield good search performance is still a challenge. Moreover, in many cases real-world data lives on a low-dimensional manifold, which should be taken into account to capture meaningful nearest neighbors. In this paper, we propose a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes. To make such an approach computationally feasible, we utilize Anchor Graphs to obtain tractable low-rank adjacency matrices. Our formulation allows constant time hashing of a new data point by extrapolating graph Laplacian eigenvectors to eigenfunctions. Finally, we describe a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy. Experimental comparison with the other state-of-the-art methods on two large datasets demonstrates the efficacy of the proposed method.", "Fast retrieval methods are critical for large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sub-linear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several large-scale datasets, and show that it enables accurate and fast performance for example-based object classification, feature matching, and content-based retrieval.", "", "This paper addresses the problem of learning similarity-preserving binary codes for efficient retrieval in large-scale image collections. We propose a simple and efficient alternating minimization scheme for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube. This method, dubbed iterative quantization (ITQ), has connections to multi-class spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). Our experiments show that the resulting binary coding schemes decisively outperform several other state-of-the-art methods." ] }
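The kernelized LSH work quoted above generalizes the classic random-hyperplane LSH scheme, in which each hash bit is the sign of the data point's projection onto a random direction, so that the Hamming distance between two codes approximates their angular distance. A toy sketch of that baseline scheme (not the kernelized variant, and with made-up data) follows:

```python
import numpy as np

# Sign-of-random-projection hashing: each bit records on which side of
# a random hyperplane the point falls. Sketch of the classic LSH
# baseline only; dimensions and inputs below are illustrative.

rng = np.random.default_rng(0)

def hash_codes(X, planes):
    """X: (n, dim) data matrix, planes: (n_bits, dim) hyperplane normals.
    Returns an (n, n_bits) array of 0/1 codes."""
    return (X @ planes.T > 0).astype(np.uint8)

dim, n_bits = 8, 16
planes = rng.standard_normal((n_bits, dim))

x = rng.standard_normal(dim)
near = x + 0.05 * rng.standard_normal(dim)  # slightly perturbed neighbour
far = rng.standard_normal(dim)              # unrelated point

codes = hash_codes(np.stack([x, near, far]), planes)
hamming = lambda a, b: int((a != b).sum())
# the near pair should usually differ in far fewer bits than the far pair
print(hamming(codes[0], codes[1]), hamming(codes[0], codes[2]))
```

Because the codes are short binary strings, nearest-neighbour candidates can then be retrieved in sub-linear time via hash-table lookups on code prefixes.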
1504.03410
1939575207
Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. For most existing hashing methods, an image is first encoded as a vector of hand-engineering visual features, followed by another separate projection or quantization step that generates binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hashing codes. In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed deep architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers to produce the effective intermediate image features; 2) a divide-and-encode module to divide the intermediate image features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to characterize that one image is more similar to the second image than to the third one. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over other state-of-the-art supervised or unsupervised hashing methods.
Supervised methods try to leverage supervised information (e.g., class labels, pairwise similarities, or relative similarities of data points) to learn compact bitwise representations. Some representative examples in this category are the following. Binary Reconstruction Embedding (BRE) @cite_15 learns hash functions by minimizing the reconstruction error between the distances of data points and those of the corresponding hash codes. Minimal Loss Hashing (MLH) @cite_26 and its extension @cite_14 learn hash codes by minimizing hinge-like loss functions based on similarities or relative similarities of data points. Supervised Hashing with Kernels (KSH) @cite_8 is a kernel-based method that seeks compact binary codes minimizing the Hamming distances between similar pairs while maximizing those between dissimilar pairs.
{ "cite_N": [ "@cite_14", "@cite_15", "@cite_26", "@cite_8" ], "mid": [ "2113307832", "2164338181", "2221852422", "1992371516" ], "abstract": [ "Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.", "Fast retrieval methods are increasingly critical for many large-scale analysis tasks, and there have been several recent methods that attempt to learn hash functions for fast and accurate nearest neighbor searches. In this paper, we develop an algorithm for learning hash functions based on explicitly minimizing the reconstruction error between the original distances and the Hamming distances of the corresponding binary embeddings. We develop a scalable coordinate-descent algorithm for our proposed hashing objective that is able to efficiently learn hash functions in a variety of settings. Unlike existing methods such as semantic hashing and spectral hashing, our method is easily kernelized and does not require restrictive assumptions about the underlying distribution of the data. We present results over several domains to demonstrate that our method outperforms existing state-of-the-art techniques.", "We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.", "Recent years have witnessed the growing popularity of hashing in large-scale vision problems. It has been shown that the hashing quality could be boosted by leveraging supervised information into hash function learning. However, the existing supervised methods either lack adequate performance or often incur cumbersome model training. In this paper, we propose a novel kernel-based supervised hashing model which requires a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost in achieving high quality hashing. The idea is to map the data to compact binary codes whose Hamming distances are minimized on similar pairs and simultaneously maximized on dissimilar pairs. Our approach is distinct from prior works by utilizing the equivalence between optimizing the code inner products and the Hamming distances. This enables us to sequentially and efficiently train the hash functions one bit at a time, yielding very short yet discriminative codes. We carry out extensive experiments on two image benchmarks with up to one million samples, demonstrating that our approach significantly outperforms the state-of-the-arts in searching both metric distance neighbors and semantically similar neighbors, with accuracy gains ranging from 13 to 46 ." ] }
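The triplet ranking losses mentioned above all belong to a simple family: penalize the query code for being closer (by less than a margin) to a dissimilar image's code than to a similar one's. A generic hinge-style sketch on real-valued (relaxed) codes is shown below; the cited papers' exact formulations (discrete codes, loss-augmented inference, etc.) differ in detail:

```python
# Generic triplet ranking hinge loss on relaxed (real-valued) hash
# codes: zero when the similar code is already closer than the
# dissimilar code by at least the margin, positive otherwise.
# Sketch only; the codes below are illustrative.

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_ranking_loss(query, similar, dissimilar, margin=1.0):
    return max(0.0, margin + sq_dist(query, similar) - sq_dist(query, dissimilar))

q   = [0.9, 0.1, 0.8]   # relaxed code of the query image
pos = [1.0, 0.0, 1.0]   # code of a similar image
neg = [0.0, 1.0, 0.0]   # code of a dissimilar image
print(triplet_ranking_loss(q, pos, neg))       # → 0.0 (ranking satisfied)
print(triplet_ranking_loss(q, neg, pos) > 0)   # → True (ranking violated)
```

In training, such a loss is summed over sampled triplets and minimized by gradient descent on the code-producing network, which is what makes the relaxation to real values necessary.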
1504.03410
1939575207
Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. For most existing hashing methods, an image is first encoded as a vector of hand-engineering visual features, followed by another separate projection or quantization step that generates binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hashing codes. In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed deep architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers to produce the effective intermediate image features; 2) a divide-and-encode module to divide the intermediate image features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to characterize that one image is more similar to the second image than to the third one. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over other state-of-the-art supervised or unsupervised hashing methods.
In most existing supervised hashing methods for images, input images are first represented by hand-crafted visual features (e.g., GIST @cite_19 ) before separate projection and quantization steps generate the hash codes.
{ "cite_N": [ "@cite_19" ], "mid": [ "1566135517" ], "abstract": [ "In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, that we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected closed together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category." ] }
1504.03410
1939575207
Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. For most existing hashing methods, an image is first encoded as a vector of hand-engineered visual features, followed by a separate projection or quantization step that generates binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hashing codes. In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed deep architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers to produce the effective intermediate image features; 2) a divide-and-encode module to divide the intermediate image features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to characterize that one image is more similar to the second image than to the third one. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over other state-of-the-art supervised or unsupervised hashing methods.
On the other hand, we have witnessed dramatic progress in deep convolutional networks over the last few years. Approaches based on deep networks have achieved state-of-the-art performance on image classification @cite_9 @cite_24 @cite_12 , object detection @cite_9 @cite_12 and other recognition tasks @cite_11 . The recent trend in convolutional networks has been to increase the depth of the networks @cite_2 @cite_24 @cite_12 and the layer size @cite_6 @cite_12 . The success of deep-network-based methods for images is mainly due to their power of automatically learning effective image representations. In this paper, we focus on a deep architecture tailored for learning-based hashing. Some parts of the proposed architecture are designed on the basis of @cite_2 , which uses additional @math convolution layers to increase the representational power of the networks.
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_24", "@cite_2", "@cite_12", "@cite_11" ], "mid": [ "2618530766", "1487583988", "2962835968", "", "2950179405", "2145287260" ], "abstract": [ "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 , respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. 
In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. 
One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance." ] }
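The triplet ranking loss used in the deep hashing pipeline above requires an anchor image to be closer to a similar image than to a dissimilar one by some margin. The following is a minimal NumPy sketch of a generic hinge-style triplet loss; the function name, the distance choice (squared Euclidean), and the margin value are illustrative, not the cited paper's exact formulation.

```python
import numpy as np

def triplet_ranking_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet ranking loss: the anchor should be closer
    (in squared Euclidean distance) to the positive than to the
    negative by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, margin + d_pos - d_neg)

# A triplet where the positive is already closer than the negative by
# more than the margin incurs zero loss.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # close to the anchor
n = np.array([3.0, 0.0])   # far from the anchor
loss = triplet_ranking_loss(a, p, n)
```

In training, this loss is minimized over many sampled triplets, pushing similar images toward nearby binary codes and dissimilar ones apart.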
1504.03410
1939575207
Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. For most existing hashing methods, an image is first encoded as a vector of hand-engineered visual features, followed by a separate projection or quantization step that generates binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hashing codes. In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed deep architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers to produce the effective intermediate image features; 2) a divide-and-encode module to divide the intermediate image features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to characterize that one image is more similar to the second image than to the third one. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over other state-of-the-art supervised or unsupervised hashing methods.
Without using hand-crafted image features, the recently proposed CNNH @cite_21 decomposes the hash learning process into a stage of learning approximate hash codes, followed by a deep-networks-based stage that simultaneously learns image features and hash functions from the raw image pixels. However, a limitation of CNNH is that the learned image representation (in Stage 2) cannot be used to improve the learning of the approximate hash codes, even though the learned approximate hash codes are used to guide the learning of the image representation. In the proposed method, we learn the image representation and the hash codes in a single stage, so that the two tasks interact and reinforce each other.
{ "cite_N": [ "@cite_21" ], "mid": [ "2293824885" ], "abstract": [ "Hashing is a popular approximate nearest neighbor search approach for large-scale image retrieval. Supervised hashing, which incorporates similarity dissimilarity information on entity pairs to improve the quality of hashing function learning, has recently received increasing attention. However, in the existing supervised hashing methods for images, an input image is usually encoded by a vector of handcrafted visual features. Such hand-crafted feature vectors do not necessarily preserve the accurate semantic similarities of images pairs, which may often degrade the performance of hashing function learning. In this paper, we propose a supervised hashing method for image retrieval, in which we automatically learn a good image representation tailored to hashing as well as a set of hash functions. The proposed method has two stages. In the first stage, given the pairwise similarity matrix S over training images, we propose a scalable coordinate descent method to decompose S into a product of HHT where H is a matrix with each of its rows being the approximate hash code associated to a training image. In the second stage, we propose to simultaneously learn a good feature representation for the input images as well as a set of hash functions, via a deep convolutional network tailored to the learned hash codes in H and optionally the discrete class labels of the images. Extensive empirical evaluations on three benchmark datasets with different kinds of images show that the proposed method has superior performance gains over several state-of-the-art supervised and unsupervised hashing methods." ] }
1504.03004
2949344860
The Locator ID Separation Protocol (LISP) limits the growth of the Default-Free Zone routing tables by creating a highly aggregatable and quasi-static Internet core. However, LISP pushes the forwarding state to edge routers whose timely operation relies on caching of location to identity bindings. In this paper we develop an analytical model to study the asymptotic scalability of the LISP cache. Under the assumptions that (i) long-term popularity can be modeled as a constant Generalized Zipf distribution and (ii) temporal locality is predominantly determined by long-term popularity, we find that the scalability of the LISP cache is O(1) with respect to the amount of prefixes (Internet growth) and users (growth of the LISP site). We validate the model and discuss the accuracy of our assumptions using several one-day-long packet traces.
Denning was the first to recognize the phenomenon of temporal locality in his definition of the working set @cite_6 and, together with Schwartz, established the fundamental properties that characterize it @cite_20 . Although initially developed for the analysis of page caching in operating systems, the ideas were later reused in other fields, including Web page and route caching.
{ "cite_N": [ "@cite_20", "@cite_6" ], "mid": [ "2052040062", "2142098074" ], "abstract": [ "A program's working set W ( t, T ) at time t is the set of distinct pages among the T most recently referenced pages. Relations between the average working-set size, the missing-page rate, and the interreference-interval distribution may be derived both from time-average definitions and from ensemble-average (statistical) definitions. An efficient algorithm for estimating these quantities is given. The relation to LRU (lease recently used) paging is characterized. The independent-reference model, in which page references are statistically independent, is used to assess the effects of interpage dependencies on working-set size observations. Under general assumptions, working-set size is shown to be normally distributed.", "Probably the most basic reason behind the absence of a general treatment of resource allocation in modern computer systems is an adequate model for program behavior. In this paper a new model, the “working set model,” is developed. The working set of pages associated with a process, defined to be the collection of its most recently used pages, provides knowledge vital to the dynamic management of paged memories. “Process” and “working set” are shown to be manifestations of the same ongoing computational activity; then “processor demand” and “memory demand” are defined; and resource allocation is formulated as the problem of balancing demands against available equipment." ] }
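Denning's working set W(t, T), the set of distinct pages among the T most recently referenced pages at time t, is straightforward to compute for a reference string. The following toy sketch uses our own function names and an illustrative reference string:

```python
def working_set(reference_string, t, T):
    """Denning's working set W(t, T): the set of distinct pages among
    the T most recently referenced pages at time t (1-indexed)."""
    window = reference_string[max(0, t - T):t]
    return set(window)

refs = ['a', 'b', 'a', 'c', 'b', 'a', 'a', 'd']
ws = working_set(refs, t=6, T=4)   # pages referenced at times 3..6
```

Averaging the size of this set over t yields the average working-set size curve that the cited model starts from.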
1504.03004
2949344860
The Locator ID Separation Protocol (LISP) limits the growth of the Default-Free Zone routing tables by creating a highly aggregatable and quasi-static Internet core. However, LISP pushes the forwarding state to edge routers whose timely operation relies on caching of location to identity bindings. In this paper we develop an analytical model to study the asymptotic scalability of the LISP cache. Under the assumptions that (i) long-term popularity can be modeled as a constant Generalized Zipf distribution and (ii) temporal locality is predominantly determined by long-term popularity, we find that the scalability of the LISP cache is O(1) with respect to the amount of prefixes (Internet growth) and users (growth of the LISP site). We validate the model and discuss the accuracy of our assumptions using several one-day-long packet traces.
@cite_9 argued that empirical evidence indicates that the popularity distribution of Web requests is Zipf-like with exponent @math . Using this finding, and the assumption that temporal locality is mainly induced by long-term popularity, they showed that the asymptotic miss rate of an LFU cache, as a function of the cache size, follows a power law of exponent @math . In this paper we argue that a GZipf distribution with exponents greater than @math is a closer fit to real popularity distributions, and we obtain a more general LRU cache model. We further use the model to determine the scaling properties of the cache.
{ "cite_N": [ "@cite_9" ], "mid": [ "2112053513" ], "abstract": [ "This paper addresses two unresolved issues about Web caching. The first issue is whether Web requests from a fixed user community are distributed according to Zipf's (1929) law. The second issue relates to a number of studies on the characteristics of Web proxy traces, which have shown that the hit-ratios and temporal locality of the traces exhibit certain asymptotic properties that are uniform across the different sets of the traces. In particular, the question is whether these properties are inherent to Web accesses or whether they are simply an artifact of the traces. An answer to these unresolved issues will facilitate both Web cache resource planning and cache hierarchy design. We show that the answers to the two questions are related. We first investigate the page request distribution seen by Web proxy caches using traces from a variety of sources. We find that the distribution does not follow Zipf's law precisely, but instead follows a Zipf-like distribution with the exponent varying from trace to trace. Furthermore, we find that there is only (i) a weak correlation between the access frequency of a Web page and its size and (ii) a weak correlation between access frequency and its rate of change. We then consider a simple model where the Web accesses are independent and the reference probability of the documents follows a Zipf-like distribution. We find that the model yields asymptotic behaviour that are consistent with the experimental observations, suggesting that the various observed properties of hit-ratios and temporal locality are indeed inherent to Web accesses observed by proxies. Finally, we revisit Web cache replacement algorithms and show that the algorithm that is suggested by this simple model performs best on real trace data. 
The results indicate that while page requests do indeed reveal short-term correlations and other structures, a simple model for an independent request stream following a Zipf-like distribution is sufficient to capture certain asymptotic properties observed at Web proxies." ] }
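The power-law miss rate of a frequency-based cache under a Zipf-like popularity distribution can be checked numerically. The sketch below assumes the independent reference model and an idealized "perfect LFU" that simply pins the most popular items; the parameter values are illustrative, and this is not the cited paper's exact derivation.

```python
import numpy as np

def zipf_probs(n, alpha):
    """Zipf-like popularity over n items: p_i proportional to 1 / i**alpha."""
    w = 1.0 / np.arange(1, n + 1) ** alpha
    return w / w.sum()

def lfu_miss_rate(n, alpha, cache_size):
    """Under the independent reference model, a perfect-LFU cache holds
    the cache_size most popular items, so the miss rate is the total
    probability mass of the tail."""
    p = zipf_probs(n, alpha)
    return p[cache_size:].sum()

# The miss rate decreases with cache size; for alpha > 1 it decays
# roughly as cache_size ** (1 - alpha) (here, about 1 / sqrt(C)).
m1 = lfu_miss_rate(n=100000, alpha=1.5, cache_size=100)
m2 = lfu_miss_rate(n=100000, alpha=1.5, cache_size=1000)
```

Growing the cache tenfold shrinks the miss rate by roughly sqrt(10), consistent with the power-law scaling discussed above.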
1504.03004
2949344860
The Locator ID Separation Protocol (LISP) limits the growth of the Default-Free Zone routing tables by creating a highly aggregatable and quasi-static Internet core. However, LISP pushes the forwarding state to edge routers whose timely operation relies on caching of location to identity bindings. In this paper we develop an analytical model to study the asymptotic scalability of the LISP cache. Under the assumptions that (i) long-term popularity can be modeled as a constant Generalized Zipf distribution and (ii) temporal locality is predominantly determined by long-term popularity, we find that the scalability of the LISP cache is O(1) with respect to the amount of prefixes (Internet growth) and users (growth of the LISP site). We validate the model and discuss the accuracy of our assumptions using several one-day-long packet traces.
Jin and Bestavros showed in @cite_24 that the inter-reference distribution is mainly determined by the long-term popularity and only marginally by short-term correlations. They also proved that the inter-reference distribution of a reference string with a Zipf-like popularity distribution is proportional to @math . We build upon their work but also extend their results, both by considering a GZipf popularity distribution and by using them to derive an LRU cache model.
{ "cite_N": [ "@cite_24" ], "mid": [ "2136719353" ], "abstract": [ "Temporal locality of reference in Web request streams emerges from two distinct phenomena: the long-term popularity of Web documents and the short-term temporal correlations of references. We show that the commonly-used distribution of inter-request times is predominantly determined by the power law governing the long-term popularity of documents. This inherent relationship tends to disguise the existence of short-term temporal correlations. We propose a new and robust metric that enables accurate characterization of that aspect of temporal locality. Using this metric, we characterize the locality of reference in a number of representative proxy cache traces. Our findings show that there are measurable differences between the degrees (and sources) of temporal locality across these traces." ] }
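The link between long-term popularity and the inter-reference distribution can be illustrated under the independent reference model: each item's inter-reference time is geometric in its popularity, and weighting by how often each item is referenced gives the aggregate distribution. This is a hedged numerical sketch with illustrative parameters, not the cited paper's analysis.

```python
import numpy as np

def zipf_probs(n, alpha):
    """Zipf-like popularity over n items: p_i proportional to 1 / i**alpha."""
    w = 1.0 / np.arange(1, n + 1) ** alpha
    return w / w.sum()

def interref_dist(p, kmax):
    """Aggregate inter-reference time distribution under the independent
    reference model: item i is referenced with probability p_i per slot,
    so its inter-reference time is geometric with parameter p_i; a
    fraction p_i of all references belong to item i, giving
    f(k) = sum_i p_i**2 * (1 - p_i)**(k - 1)."""
    k = np.arange(1, kmax + 1)[:, None]
    return (p[None, :] ** 2 * (1.0 - p[None, :]) ** (k - 1)).sum(axis=1)

p = zipf_probs(n=2000, alpha=1.0)
d = interref_dist(p, kmax=500)   # heavy-tailed, driven by popularity alone
```

Even with no short-term correlations in the model, the resulting inter-reference distribution is heavy-tailed, which is the point made in @cite_24 .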
1504.03004
2949344860
The Locator ID Separation Protocol (LISP) limits the growth of the Default-Free Zone routing tables by creating a highly aggregatable and quasi-static Internet core. However, LISP pushes the forwarding state to edge routers whose timely operation relies on caching of location to identity bindings. In this paper we develop an analytical model to study the asymptotic scalability of the LISP cache. Under the assumptions that (i) long-term popularity can be modeled as a constant Generalized Zipf distribution and (ii) temporal locality is predominantly determined by long-term popularity, we find that the scalability of the LISP cache is O(1) with respect to the amount of prefixes (Internet growth) and users (growth of the LISP site). We validate the model and discuss the accuracy of our assumptions using several one-day-long packet traces.
In the field of route caching, Feldmeier @cite_19 and Jain @cite_7 were among the first to evaluate the possibility of caching destination addresses by leveraging the locality of traffic in network environments. Feldmeier found that locality could be exploited to reduce routing-table lookup times on a gateway router, while Jain discovered that deterministic protocol behavior limits the benefits of locality for small caches. These works, though fundamental, bear little practical relevance today, as they were carried out two decades ago, when the Internet was still in its infancy.
{ "cite_N": [ "@cite_19", "@cite_7" ], "mid": [ "2106422605", "1984629408" ], "abstract": [ "A way to increase gateway throughput is to reduce the routing-table lookup time per packet. A routing-table cache can be used to reduce the average lookup time per packet and the purpose of this study is to determine the best management policies for this cache as well as its measured performance. The performance results of simulated caches for a gateway at MIT are presented. These results include the probability of reference versus previous access time, cache hit ratios, and the number of packets between cache misses. A simple, conservative analysis using the presented measurements shows that current gateway routing-table lookup time could be reduced by up to 65 . >", "Abstract The size of computer networks, along with their bandwidths, is growing exponentially. To support these large, high-speed networks, it is necessary to be able to forward packets in a few microseconds. One part of the forwarding operation consists of searching through a large address database. This problem is encountered in the design of adapters, bridges, routers, gateways, and name servers. Caching can reduce the lookup time if there is a locality in the address reference pattern. Using a destination reference trace measured on an extended local area network, we attempt to see if the destination references do have a significant locality. We compared the performance of MIN, LRU, FIFO, and random cache replacement algorithms. We found that the interactive (terminal) traffic in our sample had a quite different locality behavior than that of the noninteractive traffic. The interactive traffic did not follow the LRU stack model while the noninteractive traffic did. Examples are shown of the environments in which caching can help as well as those in which caching can hurt, unless the cache size is large." ] }
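The cache replacement policies compared in these studies (LRU, FIFO, random) are easy to contrast on a synthetic skewed stream. This is a toy illustration with an artificial reference string, not the traces or methodology of the cited measurements; on a stream where one hot destination recurs constantly amid a scan of cold ones, LRU keeps the hot entry pinned while FIFO periodically evicts it.

```python
from collections import OrderedDict, deque

def lru_hit_ratio(stream, capacity):
    cache, hits = OrderedDict(), 0
    for x in stream:
        if x in cache:
            hits += 1
            cache.move_to_end(x)           # refresh recency on a hit
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[x] = True
    return hits / len(stream)

def fifo_hit_ratio(stream, capacity):
    cache, order, hits = set(), deque(), 0
    for x in stream:
        if x in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.discard(order.popleft())  # evict oldest insertion
            cache.add(x)
            order.append(x)
    return hits / len(stream)

# A skewed stream: destination 0 recurs constantly between cold scans.
stream = []
for i in range(1, 1001):
    stream += [0, i % 7, i]
lru = lru_hit_ratio(stream, capacity=8)
fifo = fifo_hit_ratio(stream, capacity=8)
```

Even this toy example reproduces the qualitative finding that recency-based eviction exploits traffic locality better than insertion-order eviction.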
1504.03004
2949344860
The Locator ID Separation Protocol (LISP) limits the growth of the Default-Free Zone routing tables by creating a highly aggregatable and quasi-static Internet core. However, LISP pushes the forwarding state to edge routers whose timely operation relies on caching of location to identity bindings. In this paper we develop an analytical model to study the asymptotic scalability of the LISP cache. Under the assumptions that (i) long-term popularity can be modeled as a constant Generalized Zipf distribution and (ii) temporal locality is predominantly determined by long-term popularity, we find that the scalability of the LISP cache is O(1) with respect to the amount of prefixes (Internet growth) and users (growth of the LISP site). We validate the model and discuss the accuracy of our assumptions using several one-day-long packet traces.
Recently, @cite_13 performed a measurement study within the operational confines of an ISP's network and showed the feasibility of route caching. They show, by means of an experimental evaluation, that the LRU cache-eviction policy performs close to optimal and better than LFU. They also found that the prefix popularity distribution is very skewed and that the working-set size is generally stable over time. These findings are in line with our empirical observations and provide practical confirmation for our assumption that the popularity distribution can be described as a GZipf.
{ "cite_N": [ "@cite_13" ], "mid": [ "2141062513" ], "abstract": [ "Internet routers' forwarding tables (FIBs), which must be stored in expensive fast memory for high-speed packet forwarding, are growing quickly in size due to increased multihoming, finer-grained traffic engineering, and deployment of IPv6 and VPNs. To address this problem, several Internet architectures have been proposed to reduce FIB size by returning to the earlier approach of route caching : storing only the working set of popular routes in the FIB. This paper revisits route caching. We build upon previous work by studying flat, uni-class ( 24) prefix caching, with modern traffic traces from more than 60 routers in a tier-1 ISP. We first characterize routers' working sets and then evaluate route-caching performance under different cache replacement strategies and cache sizes. Surprisingly, despite the large number of deaggregated 24 subnets, caching uni-class prefixes can effectively curb the increase of FIB sizes. Moreover, uni-class prefixes substantially simplify a cache design by eliminating longest-prefix matching, enabling FIB design with slower memory technologies. Finally, by comparing our results with previous work, we show that the distribution of traffic across prefixes is becoming increasingly skewed, making route caching more appealing." ] }
1504.03004
2949344860
The Locator ID Separation Protocol (LISP) limits the growth of the Default-Free Zone routing tables by creating a highly aggregatable and quasi-static Internet core. However, LISP pushes the forwarding state to edge routers whose timely operation relies on caching of location to identity bindings. In this paper we develop an analytical model to study the asymptotic scalability of the LISP cache. Under the assumptions that (i) long-term popularity can be modeled as a constant Generalized Zipf distribution and (ii) temporal locality is predominantly determined by long-term popularity, we find that the scalability of the LISP cache is O(1) with respect to the amount of prefixes (Internet growth) and users (growth of the LISP site). We validate the model and discuss the accuracy of our assumptions using several one-day-long packet traces.
Several works have previously looked at cache performance in loc/ID split scenarios, considering LISP as a reference implementation. @cite_3 performed an initial trace-driven study of LISP map-cache performance, while @cite_17 both extended and confirmed the previous results with the help of a larger, ISP trace. @cite_22 performed a trace-based loc/ID mapping-cache performance analysis assuming an LRU eviction policy and using traffic captured at two egress links of the China Education and Research Network backbone. Although the methodologies differ between the papers, in all cases the observed LISP cache miss rates were found to be relatively small. This, again, indirectly confirms the skewness of the popularity distribution and its stability, at least over short time scales.
{ "cite_N": [ "@cite_22", "@cite_3", "@cite_17" ], "mid": [ "2104128824", "2150772711", "1558782701" ], "abstract": [ "Challenges of routing scalability has attracted many research efforts, represented by the works of splitting identifier and locator semantics of IP addresses. A group of identifier-locator (ID Loc) split approaches is commonly featured with a mapping query service system, independent of the routing infrastructure. This is a significant change of the Internet routing architecture that deserves comprehensive analysis and quantitative evaluations. Focusing on the mostly concerned performance issues, we present a canonical model as the typical case where the mapping query is executed for routers performing the ID Loc mappings, and then evaluate the behaviors of caching, query retrieval and queueing introduced by the query latency. According to the results, a well-defined mapping service is able to handle the traffic volume that a current big provider may experience. Furthermore, we also suggest an end system modification to get better performance in the age of ID Loc having been split.", "Very recent activities in the IETF and in the Routing Research Group (RRG) of the IRTG focus on defining a new Internet architecture, in order to solve scalability issues related to interdo-main routing. The approach that is being explored is based on the separation of the end-systems' addressing space (the identifiers) and the routing locators' space. This separation is meant to alleviate the routing burden of the Default Free Zone, but it implies the need of distributing and storing mappings between identifiers and locators on caches placed on routers. In this paper we evaluate the cost of maintaining these caches when the distribution mechanism is based on a pull model. Taking as a reference the LISP protocol, we base our evaluation on real Netflow traces collected on the border router of our campus network. 
We thoroughly analyze the impact of the locator ID separation, and related cost, showing that there is a trade-off between the dynamism of the mapping distribution protocol, the demand in terms of bandwidth, and the size of the caches.", "Due to scalability issues that the current Internet is facing, the research community has re-discovered the Locator ID Split paradigm. As the name suggests, this paradigm is based on the idea of separating the identity from the location of end-systems, in order to increase the scalability of the Internet architecture. One of the most successful proposals, currently under discussion at the IETF, is LISP (Locator ID Separation Protocol). A critical component of LISP, from a performance and resources consumption perspective, as well as from a security point of view, is the LISP Cache. The LISP Cache is meant to temporarily store mappings, i.e., the bindings between identifiers and locations, in order to provide routers with the knowledge of where to forward packets. This paper presents a thorough analysis of such a component, based on real packet-level traces. Furthermore, the implications of policies to increase the level of security of LISP are also analyzed. Our results prove that even a timeout as short as 60 seconds provides high hit ratio and that the impact of using security policies is small." ] }
1504.03004
2949344860
The Locator ID Separation Protocol (LISP) limits the growth of the Default-Free Zone routing tables by creating a highly aggregatable and quasi-static Internet core. However, LISP pushes the forwarding state to edge routers whose timely operation relies on caching of location to identity bindings. In this paper we develop an analytical model to study the asymptotic scalability of the LISP cache. Under the assumptions that (i) long-term popularity can be modeled as a constant Generalized Zipf distribution and (ii) temporal locality is predominantly determined by long-term popularity, we find that the scalability of the LISP cache is O(1) with respect to the amount of prefixes (Internet growth) and users (growth of the LISP site). We validate the model and discuss the accuracy of our assumptions using several one-day-long packet traces.
Finally, in @cite_21 we devised an analytical model for the LISP cache size starting from empirical average working-set curves, using working-set theory. There, our goal was to model the influence of locality on cache miss rates, whereas here we seek to understand how cache performance scales with respect to its defining parameters: the popularity distribution of network traffic, the size of the LISP site, and the size of the EID space.
{ "cite_N": [ "@cite_21" ], "mid": [ "82308599" ], "abstract": [ "Concerns regarding the scalability of the inter-domain routing have encouraged researchers to start elaborating a more robust Internet architecture. While consensus on the exact form of the solution is yet to be found, the need for a semantic decoupling of a node's location and identity is generally accepted as the only way forward. One of the most successful proposals to follow this guideline is LISP (Loc ID Separation Protocol). Design wise, its aim is to insulate the Internet's core routing state from the dynamics of edge networks. However, this requires the introduction of a mapping system, a distributed database, that should provide the binding of the two resulting namespaces. In order to avoid frequent lookups and not to penalize the speed of packet forwarding, map-caches that store temporal bindings are provisioned in routers. In this paper, we rely on the working-set theory to build a model that accurately predicts a map-cache's performance for traffic with time translation invariance of the working-set size. We validate our model empirically using four different packet traces collected in two different campus networks." ] }
1504.02609
1886703948
The advent of software defined networking enables flexible, reliable and feature-rich control planes for data center networks. However, the tight coupling of centralized control and complete visibility leads to a wide range of issues among which scalability has risen to prominence. To address this, we present LazyCtrl, a novel hybrid control plane design for data center networks where network control is carried out by distributed control mechanisms inside independent groups of switches while complemented with a global controller. Our design is motivated by the observation that data center traffic is usually highly skewed and thus edge switches can be grouped according to traffic locality. LazyCtrl aims at bringing laziness to the global controller by dynamically devolving most of the control tasks to independent switch groups to process frequent intra-group events near datapaths while handling rare inter-group or other specified events by the controller. We implement LazyCtrl and build a prototype based on Open vSwitch and Floodlight. Trace-driven experiments on our prototype show that an effective switch grouping is easy to maintain in multi-tenant clouds and the central controller can be significantly shielded by staying lazy, with its workload reduced by up to 82%.
There has been a large body of work falling in this category. SEATTLE @cite_32 simplifies network management by flat addressing while providing hash-based resolution of host information (using a one-hop DHT) to ensure scalability. VL2 @cite_15 implements a layer 2.5 stack on hosts and uses IP-in-IP encapsulation to deliver packets. PortLand @cite_22 assigns Pseudo MAC (PMAC) addresses to all end hosts to enable efficient, provably loop-free forwarding with small switch state while leveraging a central fabric manager to address IP to PMAC translation in multi-rooted tree networks. NetLord @cite_30 employs a light-weight agent in the end-host hypervisors to encapsulate and transmit packets over an underlying, multi-path L2 network, using an unusual combination of IP and Ethernet packet headers.
{ "cite_N": [ "@cite_30", "@cite_15", "@cite_32", "@cite_22" ], "mid": [ "2119246371", "", "2117235019", "2123016589" ], "abstract": [ "Providers of \"Infrastructure-as-a-Service\" need datacenter networks that support multi-tenancy, scale, and ease of operation, at low cost. Most existing network architectures cannot meet all of these needs simultaneously. In this paper we present NetLord, a novel multi-tenant network architecture. NetLord provides tenants with simple and flexible network abstractions, by fully and efficiently virtualizing the address space at both L2 and L3. NetLord can exploit inexpensive commodity equipment to scale the network to several thousands of tenants and millions of virtual machines. NetLord requires only a small amount of offline, one-time configuration. We implemented NetLord on a testbed, and demonstrated its scalability, while achieving order-of-magnitude goodput improvements over previous approaches.", "", "IP networks today require massive effort to configure and manage. Ethernet is vastly simpler to manage, but does not scale beyond small local area networks. This paper describes an alternative network architecture called SEATTLE that achieves the best of both worlds: The scalability of IP combined with the simplicity of Ethernet. SEATTLE provides plug-and-play functionality via flat addressing, while ensuring scalability and efficiency through shortest-path routing and hash-based resolution of host information. In contrast to previous work on identity-based routing, SEATTLE ensures path predictability and stability, and simplifies network management. We performed a simulation study driven by real-world traffic traces and network topologies, and used Emulab to evaluate a prototype of our design based on the Click and XORP open-source routing platforms. 
Our experiments show that SEATTLE efficiently handles network failures and host mobility, while reducing control overhead and state requirements by roughly two orders of magnitude compared with Ethernet bridging.", "This paper considers the requirements for a scalable, easily manageable, fault-tolerant, and efficient data center network fabric. Trends in multi-core processors, end-host virtualization, and commodities of scale are pointing to future single-site data centers with millions of virtual end points. Existing layer 2 and layer 3 network protocols face some combination of limitations in such a setting: lack of scalability, difficult management, inflexible communication, or limited support for virtual machine migration. To some extent, these limitations may be inherent for Ethernet IP style protocols when trying to support arbitrary topologies. We observe that data center networks are often managed as a single logical network fabric with a known baseline topology and growth model. We leverage this observation in the design and implementation of PortLand, a scalable, fault tolerant layer 2 routing and forwarding protocol for data center environments. Through our implementation and evaluation, we show that PortLand holds promise for supporting a plug-and-play\" large-scale, data center network." ] }
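The hash-based resolution of host information that SEATTLE performs over a one-hop DHT can be illustrated with a toy consistent-hashing ring: every switch knows the full ring, so the switch storing a host's location binding is found in a single hop. The class, switch names, and MAC address below are hypothetical; a real deployment would also replicate bindings and handle switch churn:

```python
import hashlib
from bisect import bisect_right

class OneHopDHT:
    """Toy one-hop DHT in the spirit of SEATTLE-style resolution:
    a host's binding is stored at the first switch clockwise from
    the host's hash on the ring."""

    def __init__(self, switches):
        self.ring = sorted((self._h(s), s) for s in switches)

    @staticmethod
    def _h(key):
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def resolver_for(self, host_mac):
        """Switch responsible for storing this host's location binding."""
        hashes = [h for h, _ in self.ring]
        i = bisect_right(hashes, self._h(host_mac)) % len(self.ring)
        return self.ring[i][1]

dht = OneHopDHT(["sw-a", "sw-b", "sw-c", "sw-d"])
print(dht.resolver_for("00:1b:44:11:3a:b7"))
```

The mapping is deterministic, so any switch can locate the resolver for any host without flooding.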
1504.02605
2949196242
For both the Lempel Ziv 77- and 78-factorization we propose algorithms generating the respective factorization using @math bits (for any positive constant @math ) working space (including the space for the output) for any text of size @math over an integer alphabet in @math time.
For LZ78, using a naive trie implementation, the factorization is computable with @math bits of space and @math overall running time, where @math is the size of the LZ78 factorization. More sophisticated trie implementations @cite_8 improve this to @math time using the same space.
{ "cite_N": [ "@cite_8" ], "mid": [ "1884472997" ], "abstract": [ "We consider finding a pattern of length (m ) in a compacted (linear-size) trie storing strings over an alphabet of size ( ). In static tries, we achieve (O(m+ ) ) deterministic time, whereas in dynamic tries we achieve (O(m+ ^ 2 ) ) deterministic time per query or update. One particular application of the above bounds (static and dynamic) are suffix trees, where we also show how to pre- or append letters in (O( n + ^ 2 ) ) time. Our main technical contribution is a weighted variant of exponential search trees, which might be of independent interest." ] }
1504.02935
1462541646
We develop a new method for large-scale frequentist multiple testing with Bayesian prior information. We find optimal @math -value weights that maximize the average power of the weighted Bonferroni method. Due to the nonconvexity of the optimization problem, previous methods that account for uncertain prior information are suitable for only a small number of tests. For a Gaussian prior on the effect sizes, we give an efficient algorithm that is guaranteed to find the optimal weights nearly exactly. Our method can discover new loci in genome-wide association studies and compares favourably to competitors. An open-source implementation is available.
A missing ingredient is taking uncertainty into account. @cite_1 considered a Gaussian model @math for hypothesis testing where prior distributions are known for the means. However, their optimization methods could handle only a small number ( @math ) of tests.
{ "cite_N": [ "@cite_1" ], "mid": [ "1997559526" ], "abstract": [ "We maximize power in a replicated clinical trial involving multiple endpoints by adjusting the individual significance levels for each hypothesis, using preliminary data to obtain the optimal adjustments. The levels are constrained to control the familywise error rate. Power is defined as the expected number of significances, where expectations are taken with respect to the posterior distributions of the non-centrality parameters under non-informative priors. Sample size requirements for the replicate study are given. Intuitive principles such as downweighting insignificant variables from a preliminary study and giving primary endpoints more emphasis are justifiable within the conceptual framework. © 1998 John Wiley & Sons, Ltd." ] }
1504.02687
1958293277
This paper presents a graph bundling algorithm that agglomerates edges taking into account both spatial proximity and user-defined criteria in order to reveal patterns that were not perceivable with previous bundling techniques. Each edge belongs to a group that may either be an input of the problem or be found by clustering one or more edge properties such as origin, destination, orientation, length or domain-specific properties. Bundling is driven by a stack of density maps, with each map capturing both the edge density of a given group as well as interactions with edges from other groups. Density maps are efficiently calculated by smoothing 2D histograms of edge occurrence using repeated averaging filters based on integral images. A CPU implementation of the algorithm is tested on several graphs, and different grouping criteria are used to illustrate how the proposed technique can render different visualizations of the same data. Bundling performance is much higher than with previous approaches, being particularly noticeable on large graphs, with millions of edges being bundled in seconds.
Holten coined the term edge bundling when he proposed Hierarchical Edge Bundling (HEB) for compound (hierarchy-and-association) graphs @cite_1 . HEB bundles edges using B-splines, following the control points defined by the hierarchy. Gansner and Koren bundle edges in circular node layouts by merging edges so that the resulting splines share some control points while minimizing the total amount of ink needed to draw the edges @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_1" ], "mid": [ "1577253155", "2145640629" ], "abstract": [ "Circular graph layout is a drawing scheme where all nodes are placed on the perimeter of a circle. An inherent issue with circular layouts is that the rigid restriction on node placement often gives rise to long edges and an overall dense drawing. We suggest here three independent, complementary techniques for lowering the density and improving the readability of circular layouts. First, a new algorithm is given for placing the nodes on the circle such that edge lengths are reduced. Second, we enhance the circular drawing style by allowing some of the edges to be routed around the exterior of the circle. This is accomplished with an algorithm for optimally selecting such a set of externally routed edges. The third technique reduces density by coupling groups of edges as bundled splines that share part of their route. Together, these techniques are able to reduce clutter, density and crossings compared with existing methods.", "A compound graph is a frequently encountered type of data set. Relations are given between items, and a hierarchy is defined on the items as well. We present a new method for visualizing such compound graphs. Our approach is based on visually bundling the adjacency edges, i.e., non-hierarchical edges, together. We realize this as follows. We assume that the hierarchy is shown via a standard tree visualization method. Next, we bend each adjacency edge, modeled as a B-spline curve, toward the polyline defined by the path via the inclusion edges from one node to another. This hierarchical bundling reduces visual clutter and also visualizes implicit adjacency edges between parent nodes that are the result of explicit adjacency edges between their respective child nodes. Furthermore, hierarchical edge bundling is a generic method which can be used in conjunction with existing tree visualization techniques. 
We illustrate our technique by providing example visualizations and discuss the results based on an informal evaluation provided by potential users of such visualizations" ] }
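The density-map machinery described in the abstract of this record (smoothing a 2D histogram of edge occurrence with repeated averaging filters, each computed in O(1) per pixel via an integral image) can be sketched minimally. The grid sizes and radius below are arbitrary illustrative choices:

```python
def integral_image(grid):
    """Summed-area table: I[y][x] = sum of grid[0..y)[0..x)."""
    h, w = len(grid), len(grid[0])
    I = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            I[y + 1][x + 1] = grid[y][x] + I[y][x + 1] + I[y + 1][x] - I[y][x]
    return I

def box_blur(grid, r):
    """One averaging pass with a (2r+1)x(2r+1) box, O(1) per pixel via the
    integral image; repeated passes approximate a Gaussian, which is how a
    density map can be smoothed cheaply."""
    h, w = len(grid), len(grid[0])
    I = integral_image(grid)
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        for x in range(w):
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            s = I[y1][x1] - I[y0][x1] - I[y1][x0] + I[y0][x0]
            out[y][x] = s / ((y1 - y0) * (x1 - x0))
    return out

# A one-edge "histogram": a single occupied cell, blurred into a density bump.
hist = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(box_blur(hist, 1)[1][1])  # center cell: 9/9 = 1.0
```

The key property is that the cost per pass is independent of the filter radius, so large smoothing kernels over million-edge histograms stay cheap.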
1504.02687
1958293277
This paper presents a graph bundling algorithm that agglomerates edges taking into account both spatial proximity as well as user-defined criteria in order to reveal patterns that were not perceivable with previous bundling techniques. Each edge belongs to a group that may either be an input of the problem or found by clustering one or more edge properties such as origin, destination, orientation, length or domain-specific properties. Bundling is driven by a stack of density maps, with each map capturing both the edge density of a given group as well as interactions with edges from other groups. Density maps are efficiently calculated by smoothing 2D histograms of edge occurrence using repeated averaging filters based on integral images. A CPU implementation of the algorithm is tested on several graphs, and different grouping criteria are used to illustrate how the proposed technique can render different visualizations of the same data. Bundling performance is much higher than on previous approaches, being particularly noticeable on large graphs, with millions of edges being bundled in seconds.
proposed Geometry Based Edge Bundling (GBEB) @cite_0 , one of the first methods suitable for bundling general undirected layouts. Bundling is performed by forcing edges to pass through the same control points of a control mesh. Holten and van Wijk used principles of physics to attract edges that are close to each other in their Force-Directed Edge Bundling (FDEB) algorithm @cite_28 . Bundled graphs were considered smoother and easier to read than those of previous approaches, but the computational complexity of the algorithm is high, making it slower than GBEB. FDEB was later extended to separate opposite-direction bundles in directed graphs in the Divided Edge Bundling (DEB) algorithm @cite_13 . Control meshes were revisited in the Winding Roads (WR) algorithm proposed by @cite_9 . In WR, graph edges are routed along mesh edges using a shortest-path algorithm. The computational performance of the algorithm was better than FDEB's and comparable to GBEB's. Later, extended the algorithm to 3D @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_28", "@cite_9", "@cite_0", "@cite_13" ], "mid": [ "2000689712", "2117088188", "2068261792", "2148056538", "2132907133" ], "abstract": [ "Visualization of graphs containing many nodes and edges efficiently is quite challenging since representations generally suffer from visual clutter induced by the large amount of edge crossings and node-edge overlaps. That problem becomes even more important when nodes positions are fixed, such as in geography were nodes positions are set according to geographical coordinates. Edge bundling techniques can help to solve this issue by visually merging edges along common routes but it can also help to reveal high-level edge patterns in the network and therefore to understand its overall organization. In this paper, we present a generalization of [18] to reduce the clutter in a 3D representation by routing edges into bundles as well as a GPU-based rendering method to emphasize bundles densities while preserving edge color. To visualize geographical networks in the context of the globe, we also provide a new technique allowing to bundle edges around and not across it.", "Graphs depicted as node-link diagrams are widely used to show relationships between entities. However, nodelink diagrams comprised of a large number of nodes and edges often suffer from visual clutter. The use of edge bundling remedies this and reveals high-level edge patterns. Previous methods require the graph to contain a hierarchy for this, or they construct a control mesh to guide the edge bundling process, which often results in bundles that show considerable variation in curvature along the overall bundle direction. We present a new edge bundling method that uses a self-organizing approach to bundling in which edges are modeled as flexible springs that can attract each other. In contrast to previous methods, no hierarchy is used and no control mesh. 
The resulting bundled graphs show significant clutter reduction and clearly visible high-level edge patterns. Curvature variation is furthermore minimized, resulting in smooth bundles that are easy to follow. Finally, we present a rendering technique that can be used to emphasize the bundling.", "Visualizing graphs containing many nodes and edges efficiently is quite challenging. Drawings of such graphs generally suffer from visual clutter induced by the large amount of edges and their crossings. Consequently, it is difficult to read the relationships between nodes and the high-level edge patterns that may exist in standard nodelink diagram representations. Edge bundling techniques have been proposed to help solve this issue, which rely on high quality edge rerouting. In this paper, we introduce an intuitive edge bundling technique which efficiently reduces edge clutter in graphs drawings. Our method is based on the use of a grid built using the original graph to compute the edge rerouting. In comparison with previously proposed edge bundling methods, our technique improves both the level of clutter reduction and the computation performance. The second contribution of this paper is a GPU-based rendering method which helps users perceive bundles densities while preserving edge color.", "Graphs have been widely used to model relationships among data. For large graphs, excessive edge crossings make the display visually cluttered and thus difficult to explore. In this paper, we propose a novel geometry-based edge-clustering framework that can group edges into bundles to reduce the overall edge crossings. Our method uses a control mesh to guide the edge-clustering process; edge bundles can be formed by forcing all edges to pass through some control points on the mesh. The control mesh can be generated at different levels of detail either manually or automatically based on underlying graph patterns. 
Users can further interact with the edge-clustering results through several advanced visualization techniques such as color and opacity enhancement. Compared with other edge-clustering methods, our approach is intuitive, flexible, and efficient. The experiments on some large graphs demonstrate the effectiveness of our method.", "The node-link diagram is an intuitive and venerable way to depict a graph. To reduce clutter and improve the readability of node-link views, Holten & van Wijk's force-directed edge bundling employs a physical simulation to spatially group graph edges. While both useful and aesthetic, this technique has shortcomings: it bundles spatially proximal edges regardless of direction, weight, or graph connectivity. As a result, high-level directional edge patterns are obscured. We present divided edge bundling to tackle these shortcomings. By modifying the forces in the physical simulation, directional lanes appear as an emergent property of edge direction. By considering graph topology, we only bundle edges related by graph structure. Finally, we aggregate edge weights in bundles to enable more accurate visualization of total bundle weights. We compare visualizations created using our technique to standard force-directed edge bundling, matrix diagrams, and clustered graphs; we find that divided edge bundling leads to visualizations that are easier to interpret and reveal both familiar and previously obscured patterns." ] }
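The spring-plus-attraction scheme behind FDEB can be sketched minimally: each edge is subdivided into points that feel a spring force toward their polyline neighbours and an attraction toward the corresponding points of other edges. The subdivision count, force constants, and iteration count below are arbitrary choices of mine, and the compatibility weighting of the real algorithm is omitted:

```python
def fdeb_step(edges, k=0.1, step=0.05):
    """One iteration of a force-directed bundling sketch (after Holten &
    van Wijk). Endpoints stay fixed; interior points move under spring
    and attraction forces. Compatibility measures are omitted for brevity."""
    new = []
    for ei, pts in enumerate(edges):
        moved = [pts[0]]
        for i in range(1, len(pts) - 1):
            x, y = pts[i]
            # spring force toward neighbours on the same polyline
            fx = k * (pts[i - 1][0] + pts[i + 1][0] - 2 * x)
            fy = k * (pts[i - 1][1] + pts[i + 1][1] - 2 * y)
            # attraction toward other edges' corresponding points
            for ej, other in enumerate(edges):
                if ej != ei:
                    fx += other[i][0] - x
                    fy += other[i][1] - y
            moved.append((x + step * fx, y + step * fy))
        moved.append(pts[-1])
        new.append(moved)
    return new

def subdivide(p, q, n):
    """Polyline from p to q with n interior subdivision points."""
    return [(p[0] + (q[0] - p[0]) * t / (n + 1), p[1] + (q[1] - p[1]) * t / (n + 1))
            for t in range(n + 2)]

e1 = subdivide((0.0, 0.0), (10.0, 0.0), 3)
e2 = subdivide((0.0, 1.0), (10.0, 1.0), 3)
for _ in range(50):
    e1, e2 = fdeb_step([e1, e2])
print(e1[2][1], e2[2][1])  # midpoint heights pulled toward each other
```

Even this toy version shows the characteristic behaviour: two parallel edges bow toward a common centerline while their endpoints stay pinned.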
1504.02687
1958293277
This paper presents a graph bundling algorithm that agglomerates edges taking into account both spatial proximity as well as user-defined criteria in order to reveal patterns that were not perceivable with previous bundling techniques. Each edge belongs to a group that may either be an input of the problem or found by clustering one or more edge properties such as origin, destination, orientation, length or domain-specific properties. Bundling is driven by a stack of density maps, with each map capturing both the edge density of a given group as well as interactions with edges from other groups. Density maps are efficiently calculated by smoothing 2D histograms of edge occurrence using repeated averaging filters based on integral images. A CPU implementation of the algorithm is tested on several graphs, and different grouping criteria are used to illustrate how the proposed technique can render different visualizations of the same data. Bundling performance is much higher than on previous approaches, being particularly noticeable on large graphs, with millions of edges being bundled in seconds.
In 2011, proposed a multilevel agglomerative edge bundling method (MINGLE) based on minimizing the ink needed to represent edges, with additional constraints on the curvature of the resulting splines @cite_16 . While its drawings are more cluttered and less smooth than those of some previous techniques, MINGLE remains the fastest algorithm and the only one demonstrated to cope with graphs comprising millions of edges, although it requires several minutes to process them. proposed Ordered Bundles (OB) @cite_8 , where edge routing is accomplished through a heuristic that tries to minimize the total length of the paths together with their ink. After bundling, the method separates edges belonging to the same bundle to allow detailed local views. Computing times are higher than those of several previous approaches but, on the other hand, this technique offers a level of detail not previously achieved.
{ "cite_N": [ "@cite_16", "@cite_8" ], "mid": [ "1982799219", "1793674997" ], "abstract": [ "Graphs are often used to encapsulate relationships between objects. Node-link diagrams, commonly used to visualize graphs, suffer from visual clutter on large graphs. Edge bundling is an effective technique for alleviating clutter and revealing high-level edge patterns. Previous methods for general graph layouts either require a control mesh to guide the bundling process, which can introduce high variation in curvature along the bundles, or all-to-all force and compatibility calculations, which is not scalable. We propose a multilevel agglomerative edge bundling method based on a principled approach of minimizing ink needed to represent edges, with additional constraints on the curvature of the resulting splines. The proposed method is much faster than previous ones, able to bundle hundreds of thousands of edges in seconds, and one million edges in a few minutes.", "We propose a new approach to edge bundling. At the first stage we route the edge paths so as to minimize a weighted sum of the total length of the paths together with their ink. As this problem is NP-hard, we provide an efficient heuristic that finds an approximate solution. The second stage then separates edges belonging to the same bundle. To achieve this, we provide a new and efficient algorithm that solves a variant of the metro-line crossing minimization problem. The method creates aesthetically pleasing edge routes that give an overview of the global graph structure, while still drawing each edge separately, without intersecting graph nodes, and with few crossings." ] }
1504.02824
2952812058
Co-occurrence Data is a common and important information source in many areas, such as the word co-occurrence in the sentences, friends co-occurrence in social networks and products co-occurrence in commercial transaction data, etc, which contains rich correlation and clustering information about the items. In this paper, we study co-occurrence data using a general energy-based probabilistic model, and we analyze three different categories of energy-based model, namely, the @math , @math and @math models, which are able to capture different levels of dependency in the co-occurrence data. We also discuss how several typical existing models are related to these three types of energy models, including the Fully Visible Boltzmann Machine (FVBM) ( @math ), Matrix Factorization ( @math ), Log-BiLinear (LBL) models ( @math ), and the Restricted Boltzmann Machine (RBM) model ( @math ). Then, we propose a Deep Embedding Model (DEM) (an @math model) from the energy model in a manner. Furthermore, motivated by the observation that the partition function in the energy model is intractable and the fact that the major objective of modeling the co-occurrence data is to predict using the conditional probability, we apply the method to learn DEM. In consequence, the developed model and its learning method naturally avoid the above difficulties and can be easily used to compute the conditional probability in prediction. Interestingly, our method is equivalent to learning a special structured deep neural network using back-propagation and a special sampling strategy, which makes it scalable on large-scale datasets. Finally, in the experiments, we show that the DEM can achieve comparable or better results than state-of-the-art methods on datasets across several application domains.
DEM is also closely related to autoencoder @cite_12 models, which contain two components: an encoder and a decoder. The encoder maps the input data to hidden states, while the decoder reconstructs the input data from the hidden states. There are also studies connecting denoising autoencoders to generative learning @cite_22 @cite_17 @cite_15 . Indeed, DEM can also be viewed as a special case of the denoising autoencoder. In the encoding phase, the input data is corrupted by randomly dropping one element and is then fed into the encoder function to generate the hierarchical latent embedding vectors. In the decoding phase, the missing items are then reconstructed from the latent vectors.
{ "cite_N": [ "@cite_15", "@cite_22", "@cite_12", "@cite_17" ], "mid": [ "2191540403", "2953267151", "2025768430", "2013035813" ], "abstract": [ "We consider estimation methods for the class of continuous-data energy based models (EBMs). Our main result shows that estimating the parameters of an EBM using score matching when the conditional distribution over the visible units is Gaussian corresponds to training a particular form of regularized autoencoder. We show how different Gaussian EBMs lead to different autoencoder architectures, providing deep links between these two families of models. We compare the score matching estimator for the mPoT model, a particular Gaussian EBM, to several other training methods on a variety of tasks including image denoising and unsupervised feature extraction. We show that the regularization function induced by score matching leads to superior classification performance relative to a standard autoencoder. We also show that score matching yields classification results that are indistinguishable from better-known stochastic approximation maximum likelihood estimators.", "Recent work has shown how denoising and contractive autoencoders implicitly capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction error is the squared error, and the data is continuous-valued. This has led to various proposals for sampling from this implicitly learned density function, using Langevin and Metropolis-Hastings MCMC. However, it remained unclear how to connect the training procedure of regularized auto-encoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or using other forms of corruption process and reconstruction errors. Another issue is the mathematical justification which is only valid in the limit of small corruption noise. 
We propose here a different attack on the problem, which deals with all these issues: arbitrary (but noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood), handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise (or non-infinitesimal contractive penalty).", "Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.", "Denoising autoencoders have been previously shown to be competitive alternatives to restricted Boltzmann machines for unsupervised pretraining of each layer of a deep architecture. We show that a simple denoising autoencoder training criterion is equivalent to matching the score (with respect to the data) of a specific energy-based model to that of a nonparametric Parzen density estimator of the data. This yields several useful insights. It defines a proper probabilistic model for the denoising autoencoder technique, which makes it in principle possible to sample from them or rank examples by their energy. It suggests a different way to apply score matching that is related to learning to denoise and does not require computing second derivatives. 
It justifies the use of tied weights between the encoder and decoder and suggests ways to extend the success of denoising autoencoders to a larger family of energy-based models." ] }
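The denoising-autoencoder reading of DEM described in this record (corrupt the input by dropping one observed item, then reconstruct it from latent vectors) can be sketched in pure Python. The tied-weight network, decoder-only gradient, and toy co-occurrence rows below are simplifications of mine, not the paper's model:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_denoising_ae(rows, hidden=2, epochs=200, lr=0.1, seed=0):
    """Tiny denoising autoencoder with tied weights. Each binary row is
    corrupted by dropping one active element, encoded by a sigmoid layer,
    decoded with the transposed weights, and trained by SGD on squared
    error against the *clean* row. Only the decoder-side gradient is
    applied -- a simplification kept for brevity."""
    rng = random.Random(seed)
    d = len(rows[0])
    W = [[rng.uniform(-0.5, 0.5) for _ in range(hidden)] for _ in range(d)]

    def encode(x):
        return [sigmoid(sum(x[i] * W[i][j] for i in range(d)))
                for j in range(hidden)]

    def decode(h):
        return [sigmoid(sum(h[j] * W[i][j] for j in range(hidden)))
                for i in range(d)]

    for _ in range(epochs):
        for x in rows:
            xc = list(x)
            active = [i for i, v in enumerate(x) if v]
            if active:                        # corruption: drop one item
                xc[rng.choice(active)] = 0
            h = encode(xc)
            y = decode(h)
            for i in range(d):                # decoder-side SGD step
                g = (y[i] - x[i]) * y[i] * (1 - y[i])
                for j in range(hidden):
                    W[i][j] -= lr * g * h[j]
    return encode, decode

# Hypothetical co-occurrence rows: items 0 and 1 always appear together.
encode, decode = train_denoising_ae([[1, 1, 0, 0]] * 8, epochs=400)
print(decode(encode([1, 0, 0, 0])))  # item 1 should now score high
```

After training, feeding a corrupted row (item 0 alone) yields a high reconstruction score for its habitual co-occurrer, which is exactly the "predict the dropped item" use of the conditional probability that the paper emphasizes.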
1504.02358
2017114866
We have designed and implemented an application running inside Second Life that supports user annotation of graphical objects and graphical visualization of concept ontologies, thus providing a formal, machine-accessible description of objects. As a result, we offer a platform that combines the graphical knowledge representation that is expected from a MUVE artifact with the semantic structure given by the Resource Description Framework (RDF) representation of information.
Since its appearance, Second Life has attracted a lot of attention from users, and several research efforts have aimed at coupling the virtual environment with Semantic Web applications running on web servers external to SL. Karakatsiotis et al. @cite_18 @cite_0 developed a robotic avatar (perhaps better termed a bot) which follows a virtual visitor on her tour of the virtual museum. Whenever the visitor touches an exhibit, the bot produces a textual description generated with NaturalOWL, a Natural Language Generation engine developed in Java. A more scalable version of this work is described in @cite_10 .
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_10" ], "mid": [ "1997278316", "2883369594", "2128335999" ], "abstract": [ "We demonstrate an open-source natural language generation engine that produces descriptions of entities and classes in English and Greek from OWL ontologies that have been annotated with linguistic and user modeling information expressed in RDF. We also demonstrate an accompanying plug-in for the Protege ontology editor, which can be used to create the ontology's annotations and generate previews of the resulting texts by invoking the generation engine. The engine has been embedded in robots acting as museum tour guides in the physical world and in Second Life; here we demonstrate the latter application.", "", "We describe initial work on building a virtual gallery, within Second Life, which can automatically tailor itself to an individual visitor, responding to their abilities, interests, preferences or history of interaction. The description of an object in the virtual world can be personalised to suit the beginner or the expert, varying how it is said—via the choice of language (such as English or Greek), the words, or the complexity of sentences—as well as what is said—by taking into account what else has been seen or described already. The guide delivering the descriptions can remain disembodied, or be embodied as a robotic avatar." ] }
1504.02358
2017114866
We have designed and implemented an application running inside Second Life that supports user annotation of graphical objects and graphical visualization of concept ontologies, thus providing a formal, machine-accessible description of objects. As a result, we offer a platform that combines the graphical knowledge representation that is expected from a MUVE artifact with the semantic structure given by the Resource Description Framework (RDF) representation of information.
A somewhat different perspective was pursued by @cite_5 , who designed a tool, based on Ajax3D and mainly intended for X3D applications, for the semantic annotation and creation of navigation paths in virtual environments. Specific to the Second Life virtual environment is the work of @cite_7 . Their approach allows users to tag their own virtual objects. It is also possible to associate a piece of software, a so-called , to a virtual object, thus allowing other users to freely tag it.
{ "cite_N": [ "@cite_5", "@cite_7" ], "mid": [ "1605908173", "30127466" ], "abstract": [ "Nowadays, more Virtual Environments (VEs) are becoming available on the Web. This means that VEs are becoming accessible to a larger and more diverse audience. It also means that it is more likely that the use of these VEs (i.e. how to interact with the virtual environment and the meanings of the associated virtual objects) may be different for different groups of persons. In order for a VE to be a success on the Web, end-users should easily get familiar with the VE and understand the meanings of its virtual objects. Otherwise, the end-user may be tempted to quit the VE. Therefore, annotations and the creation of navigation paths for virtual tour guides become important to ease the use of VEs. Most of the time, this is done by VR-experts and the annotations are very poor and often only text based. This paper describes an approach and associated tool that allows a layman to add or update annotations to existing VEs. In addition, annotations are not limited to text but may also be multimedia elements, i.e. images, videos, sounds. Furthermore, the approach (and the tool) also allows easy creation of navigation paths and tour guides, which can be used to adapt a VE to the needs of a user. The paper illustrates the results by means of a real case, which is a reconstruction of a coalmine site for a museum.", "We present semSL, an approach to bring Semantic Web technologies into Second Life. Second Life is a virtual 3D world, in which users can communicate, build objects, and explore the land of other users. There are different kinds of entities in Second Life, which can be locations, objects, or events. Many of these entities are of potential interest to users. However, searching for entities is difficult in Second Life, since there is only a very limited way to describe entities. With semSL it becomes possible for every user to add arbitrary tags or key value pair based descriptions to entities in Second Life, or to create typed links between entities. Such typed links can even be established between entities in Second Life and resources from the Semantic Web. The description data for all such entities is centrally stored at a server external to Second Life. The data is encoded in RDF, and is publicly accessible via a SPARQL endpoint. This should not only lead to significant improvements for searching operations, but will also allow for flexible data integration between data from semSL and data from other sources on the Semantic Web." ] }
1504.02358
2017114866
We have designed and implemented an application running inside Second Life that supports user annotation of graphical objects and graphical visualization of concept ontologies, thus providing a formal, machine-accessible description of objects. As a result, we offer a platform that combines the graphical knowledge representation that is expected from a MUVE artifact with the semantic structure given by the Resource Description Framework (RDF) representation of information.
Specific to the SL virtual environment is the work by @cite_7 : their approach allows users to tag their own virtual objects; it is also possible to associate a piece of software, a so-called , to a virtual object, so as to allow other users to tag it.
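The tagging model described above — free-form tags and key-value descriptions attached to entities, stored as triples and retrieved by pattern — can be sketched in a few lines of plain Python. The entity names and predicates below are purely illustrative; a real deployment would, as semSL does, keep the triples in an RDF store behind a SPARQL endpoint.

```python
def add_triple(store, subject, predicate, obj):
    """Record one (entity, property, value) annotation, RDF-triple style."""
    store.append((subject, predicate, obj))

def match(store, s=None, p=None, o=None):
    """SPARQL-like basic graph pattern matching: None acts as a wildcard."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

store = []
# Hypothetical annotations a visitor might attach to Second Life objects.
add_triple(store, "sl:Exhibit42", "tag", "sculpture")
add_triple(store, "sl:Exhibit42", "creator", "avatar:Alice")
add_triple(store, "sl:Exhibit07", "tag", "sculpture")

# "Find every entity tagged 'sculpture'."
hits = sorted({s for (s, p, o) in match(store, p="tag", o="sculpture")})
assert hits == ["sl:Exhibit07", "sl:Exhibit42"]
```

The point of the triple encoding is exactly the one semSL makes: because annotations from different users share one schema-free representation, queries and data integration work uniformly across all annotated entities.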
{ "cite_N": [ "@cite_7" ], "mid": [ "30127466" ], "abstract": [ "We present semSL, an approach to bring Semantic Web technologies into Second Life. Second Life is a virtual 3D world, in which users can communicate, build objects, and explore the land of other users. There are different kinds of entities in Second Life, which can be locations, objects, or events. Many of these entities are of potential interest to users. However, searching for entities is difficult in Second Life, since there is only a very limited way to describe entities. With semSL it becomes possible for every user to add arbitrary tags or key value pair based descriptions to entities in Second Life, or to create typed links between entities. Such typed links can even be established between entities in Second Life and resources from the Semantic Web. The description data for all such entities is centrally stored at a server external to Second Life. The data is encoded in RDF, and is publicly accessible via a SPARQL endpoint. This should not only lead to significant improvements for searching operations, but will also allow for flexible data integration between data from semSL and data from other sources on the Semantic Web." ] }
1504.02044
2201340609
The Lovasz Local Lemma is a seminal result in probabilistic combinatorics. It gives a sufficient condition on a probability space and a collection of events for the existence of an outcome that simultaneously avoids all of those events. Finding such an outcome by an efficient algorithm has been an active research topic for decades. Breakthrough work of Moser and Tardos (2009) presented an efficient algorithm for a general setting primarily characterized by a product structure on the probability space. In this work we present an efficient algorithm for a much more general setting. Our main assumption is that there exist certain functions, called resampling oracles, that can be invoked to address the undesired occurrence of the events. We show that, in all scenarios to which the original Lovasz Local Lemma applies, there exist resampling oracles, although they are not necessarily efficient. Nevertheless, for essentially all known applications of the Lovasz Local Lemma and its generalizations, we have designed efficient resampling oracles. As applications of these techniques, we present new results for packings of Latin transversals, rainbow matchings and rainbow spanning trees.
The breakthrough work of Moser and Tardos @cite_28 @cite_29 stimulated a string of results on algorithms for the LLL. This section reviews the results that are most relevant to our work. Several interesting techniques play a role in the analyses of these previous algorithms. These can be roughly categorized as the @cite_19 @cite_5 , or @cite_29 @cite_3 @cite_9 and @cite_24 .
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_9", "@cite_3", "@cite_24", "@cite_19", "@cite_5" ], "mid": [ "", "2109693504", "2066969286", "2400716891", "1810595280", "2230347088", "2078116611" ], "abstract": [ "", "The Lovasz Local Lemma discovered by Erdős and Lovasz in 1975 is a powerful tool to non-constructively prove the existence of combinatorial objects meeting a prescribed collection of criteria. In 1991, Jozsef Beck was the first to demonstrate that a constructive variant can be given under certain more restrictive conditions, starting a whole line of research aimed at improving his algorithm's performance and relaxing its restrictions. In the present article, we improve upon recent findings so as to provide a method for making almost all known applications of the general Local Lemma algorithmic.", "Beck's early work [3] gave an efficient version of the Lovasz Local Lemma(LLL) with significant compromise in the parameters. Following several improvements [1,7,4,13], Moser [8], and Moser and Tardos [9] obtained asymptotically optimal results in terms of the maximal degree. For a fixed dependency graph G the exact criterion under which LLL applies is given by Shearer in [12]. For a dependency structure G, let LO(G) be the set of those probability assignments to the nodes of G for which the Lovasz Local Lemma holds. We show that: Both the sequential and parallel ersions of the Moser-Tardos algorithm are efficient up to the Shearer's bound, by giving a tighter analysis. We also prove that, whenever p ∈ LO(G) (1+e), the expected running times of the sequential and parallel versions are at most n e and O(1 e log n e), the later when e", "While there has been significant progress on algorithmic aspects of the Lovasz Local Lemma (LLL) in recent years, a noteworthy exception is when the LLL is used in the context of random permutations: the \"lopsided\" version of the LLL is usually at play here, and we do not yet have subexponential-time algorithms. We resolve this by developing a randomized polynomial-time algorithm for such applications. A noteworthy application is for Latin Transversals: the best-known general result here (, improving on Erdos and Spencer), states that any n x n matrix in which each entry appears at most (27 256)n times, has a Latin transversal. We present the first polynomial-time algorithm to construct such a transversal. Our approach also yields RNC algorithms: for Latin transversals, as well as the first efficient ones for the strong chromatic number and (special cases of) acyclic edge-coloring.", "The algorithm for Lovasz Local Lemma by Moser and Tardos gives a constructive way to prove the existence of combinatorial objects that satisfy a system of constraints. We present an alternative probabilistic analysis of the algorithm that does not involve reconstructing the history of the algorithm from the witness tree. We apply our technique to improve the best known upper bound to acyclic chromatic index. Specifically we show that a graph with maximum degree Δ has an acyclic proper edge coloring with at most ⌈3.74(Δ − 1)⌉ + 1 colors, whereas the previously known best bound was 4(Δ − 1). The same technique is also applied to improve corresponding bounds for graphs with bounded girth. An interesting aspect of this application is that the probability of the \"undesirable\" events do not have a uniform upper bound, i.e. it constitutes a case of the asymmetric Lovasz Local Lemma.", "", "A folklore result uses the Lovasz local lemma to analyze the discrepancy of hypergraphs with bounded degree and edge size. We generalize this result to the context of real matrices with bounded row and column sums." ] }
1504.02044
2201340609
The Lovasz Local Lemma is a seminal result in probabilistic combinatorics. It gives a sufficient condition on a probability space and a collection of events for the existence of an outcome that simultaneously avoids all of those events. Finding such an outcome by an efficient algorithm has been an active research topic for decades. Breakthrough work of Moser and Tardos (2009) presented an efficient algorithm for a general setting primarily characterized by a product structure on the probability space. In this work we present an efficient algorithm for a much more general setting. Our main assumption is that there exist certain functions, called resampling oracles, that can be invoked to address the undesired occurrence of the events. We show that, in all scenarios to which the original Lovasz Local Lemma applies, there exist resampling oracles, although they are not necessarily efficient. Nevertheless, for essentially all known applications of the Lovasz Local Lemma and its generalizations, we have designed efficient resampling oracles. As applications of these techniques, we present new results for packings of Latin transversals, rainbow matchings and rainbow spanning trees.
Following this, Moser and Tardos @cite_29 showed that a similar algorithm will produce a state in @math , assuming the independent variable model and the criterion. This paper is primarily responsible for the development of witness trees, and proved the ``witness tree lemma'', which yields an extremely elegant analysis in the variable model. The witness tree lemma has further implications. For example, it allows one to analyze separately for each event its expected number of resamplings. Moser and Tardos also extended the variable model to incorporate a limited form of lopsidependency, and showed that their analysis still holds in that setting.
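A minimal sketch of the Moser-Tardos loop in the variable model may help fix ideas: sample all variables independently, and while some bad event occurs, resample only the variables that event depends on. The toy constraint system below is illustrative, not from the paper.

```python
import random

def moser_tardos(n_vars, events, seed=0):
    """Moser-Tardos resampling in the variable model.

    `events` is a list of (variables, predicate) pairs: each bad event
    depends only on the variables in its set, and predicate(x) is True
    exactly when the event occurs on assignment x.
    """
    rng = random.Random(seed)
    x = [rng.random() < 0.5 for _ in range(n_vars)]   # independent fair bits
    while True:
        bad = next((e for e in events if e[1](x)), None)
        if bad is None:
            return x                                  # no bad event occurs
        for v in sorted(bad[0]):                      # resample only its variables
            x[v] = rng.random() < 0.5

# Toy system: each bad event is a violated clause x_i OR x_j.
clauses = [(0, 1), (1, 2), (2, 0)]
events = [(set(c), (lambda c: lambda x: not (x[c[0]] or x[c[1]]))(c))
          for c in clauses]
sol = moser_tardos(3, events)
assert all(sol[i] or sol[j] for (i, j) in clauses)
```

The witness tree lemma is what guarantees, under the LLL criterion, that the expected number of resamplings in this loop is small; the loop itself is this simple.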
{ "cite_N": [ "@cite_29" ], "mid": [ "2109693504" ], "abstract": [ "The Lovasz Local Lemma discovered by Erdős and Lovasz in 1975 is a powerful tool to non-constructively prove the existence of combinatorial objects meeting a prescribed collection of criteria. In 1991, Jozsef Beck was the first to demonstrate that a constructive variant can be given under certain more restrictive conditions, starting a whole line of research aimed at improving his algorithm's performance and relaxing its restrictions. In the present article, we improve upon recent findings so as to provide a method for making almost all known applications of the general Local Lemma algorithmic." ] }
1504.02044
2201340609
The Lovasz Local Lemma is a seminal result in probabilistic combinatorics. It gives a sufficient condition on a probability space and a collection of events for the existence of an outcome that simultaneously avoids all of those events. Finding such an outcome by an efficient algorithm has been an active research topic for decades. Breakthrough work of Moser and Tardos (2009) presented an efficient algorithm for a general setting primarily characterized by a product structure on the probability space. In this work we present an efficient algorithm for a much more general setting. Our main assumption is that there exist certain functions, called resampling oracles, that can be invoked to address the undesired occurrence of the events. We show that, in all scenarios to which the original Lovasz Local Lemma applies, there exist resampling oracles, although they are not necessarily efficient. Nevertheless, for essentially all known applications of the Lovasz Local Lemma and its generalizations, we have designed efficient resampling oracles. As applications of these techniques, we present new results for packings of Latin transversals, rainbow matchings and rainbow spanning trees.
The Moser-Tardos algorithm is known to terminate under criteria more general than , while still assuming the variable model. Pegden @cite_33 showed that the cluster expansion criterion suffices, whereas Kolipaka and Szegedy @cite_9 showed more generally that Shearer's criterion suffices. We also extend our analysis to the cluster expansion criterion as well as Shearer's criterion, in the more general context of . Our bounds on the number of resampling operations are somewhat weaker than those of @cite_33 @cite_9 , but the increase is at most quadratic.
{ "cite_N": [ "@cite_9", "@cite_33" ], "mid": [ "2066969286", "2963097814" ], "abstract": [ "Beck's early work [3] gave an efficient version of the Lovasz Local Lemma(LLL) with significant compromise in the parameters. Following several improvements [1,7,4,13], Moser [8], and Moser and Tardos [9] obtained asymptotically optimal results in terms of the maximal degree. For a fixed dependency graph G the exact criterion under which LLL applies is given by Shearer in [12]. For a dependency structure G, let LO(G) be the set of those probability assignments to the nodes of G for which the Lovasz Local Lemma holds. We show that: Both the sequential and parallel ersions of the Moser-Tardos algorithm are efficient up to the Shearer's bound, by giving a tighter analysis. We also prove that, whenever p ∈ LO(G) (1+e), the expected running times of the sequential and parallel versions are at most n e and O(1 e log n e), the later when e", "A recent theorem of [arXiv:0910.1824v2, 2010], proved using results about the cluster expansion in statistical mechanics, extends the Lovasz local lemma by weakening the conditions under which its conclusion holds. In this note, we prove an algorithmic analogue of this result, extending Moser and Tardos's recent algorithmic local lemma [J. ACM, 57 (2010), 11], and providing an alternative proof of the theorem of applicable in the Moser--Tardos algorithmic framework." ] }
1504.02044
2201340609
The Lovasz Local Lemma is a seminal result in probabilistic combinatorics. It gives a sufficient condition on a probability space and a collection of events for the existence of an outcome that simultaneously avoids all of those events. Finding such an outcome by an efficient algorithm has been an active research topic for decades. Breakthrough work of Moser and Tardos (2009) presented an efficient algorithm for a general setting primarily characterized by a product structure on the probability space. In this work we present an efficient algorithm for a much more general setting. Our main assumption is that there exist certain functions, called resampling oracles, that can be invoked to address the undesired occurrence of the events. We show that, in all scenarios to which the original Lovasz Local Lemma applies, there exist resampling oracles, although they are not necessarily efficient. Nevertheless, for essentially all known applications of the Lovasz Local Lemma and its generalizations, we have designed efficient resampling oracles. As applications of these techniques, we present new results for packings of Latin transversals, rainbow matchings and rainbow spanning trees.
Kolipaka and Szegedy @cite_9 present another algorithm, called GeneralizedResample, whose analysis proves the LLL under Shearer's condition for arbitrary probability spaces. GeneralizedResample is similar to in that they both work with abstract distributions and that they repeatedly choose a maximal independent set @math of undesired events to resample. However, the way that the bad events are resampled is different: GeneralizedResample needs to sample from @math , which is a complicated operation that seems difficult to implement efficiently. Thus can be viewed as a variant of GeneralizedResample that can be made efficient in all known scenarios.
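The shared skeleton of these algorithms — in each round, resample a maximal independent set of currently-violated events — can be illustrated by the greedy selection step alone; how the chosen events are then resampled is exactly where GeneralizedResample and the oracle-based variant differ. The event indices and dependency graph below are hypothetical.

```python
def maximal_independent_violated(violated, depends):
    """Greedily build a maximal set of pairwise non-dependent bad events;
    such a set is resampled jointly in every round of these algorithms."""
    chosen = []
    for e in sorted(violated):
        if all(f not in depends[e] for f in chosen):
            chosen.append(e)
    return chosen

# Hypothetical path-shaped dependency graph on five events; 0, 1, 3, 4 occur.
depends = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert maximal_independent_violated({0, 1, 3, 4}, depends) == [0, 3]
```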
{ "cite_N": [ "@cite_9" ], "mid": [ "2066969286" ], "abstract": [ "Beck's early work [3] gave an efficient version of the Lovasz Local Lemma(LLL) with significant compromise in the parameters. Following several improvements [1,7,4,13], Moser [8], and Moser and Tardos [9] obtained asymptotically optimal results in terms of the maximal degree. For a fixed dependency graph G the exact criterion under which LLL applies is given by Shearer in [12]. For a dependency structure G, let LO(G) be the set of those probability assignments to the nodes of G for which the Lovasz Local Lemma holds. We show that: Both the sequential and parallel ersions of the Moser-Tardos algorithm are efficient up to the Shearer's bound, by giving a tighter analysis. We also prove that, whenever p ∈ LO(G) (1+e), the expected running times of the sequential and parallel versions are at most n e and O(1 e log n e), the later when e" ] }
1504.02044
2201340609
The Lovasz Local Lemma is a seminal result in probabilistic combinatorics. It gives a sufficient condition on a probability space and a collection of events for the existence of an outcome that simultaneously avoids all of those events. Finding such an outcome by an efficient algorithm has been an active research topic for decades. Breakthrough work of Moser and Tardos (2009) presented an efficient algorithm for a general setting primarily characterized by a product structure on the probability space. In this work we present an efficient algorithm for a much more general setting. Our main assumption is that there exist certain functions, called resampling oracles, that can be invoked to address the undesired occurrence of the events. We show that, in all scenarios to which the original Lovasz Local Lemma applies, there exist resampling oracles, although they are not necessarily efficient. Nevertheless, for essentially all known applications of the Lovasz Local Lemma and its generalizations, we have designed efficient resampling oracles. As applications of these techniques, we present new results for packings of Latin transversals, rainbow matchings and rainbow spanning trees.
Harris and Srinivasan @cite_3 show that the Moser-Tardos algorithm can be adapted to handle certain events in a probability space involving random permutations. Their method for resampling an event is based on the Fisher-Yates shuffle. This scenario can also be handled by our framework; their resampling method perfectly satisfies the criteria of a . Harris and Srinivasan's result is stronger than ours in that they do prove an analog of the witness tree lemma. Consequently their algorithm requires fewer resamplings than ours, and they are able to derive parallel variants of their algorithm. The work of Harris and Srinivasan is technically challenging, and generalizing it to a more abstract setting seems daunting.
{ "cite_N": [ "@cite_3" ], "mid": [ "2400716891" ], "abstract": [ "While there has been significant progress on algorithmic aspects of the Lovasz Local Lemma (LLL) in recent years, a noteworthy exception is when the LLL is used in the context of random permutations: the \"lopsided\" version of the LLL is usually at play here, and we do not yet have subexponential-time algorithms. We resolve this by developing a randomized polynomial-time algorithm for such applications. A noteworthy application is for Latin Transversals: the best-known general result here (, improving on Erdos and Spencer), states that any n x n matrix in which each entry appears at most (27 256)n times, has a Latin transversal. We present the first polynomial-time algorithm to construct such a transversal. Our approach also yields RNC algorithms: for Latin transversals, as well as the first efficient ones for the strong chromatic number and (special cases of) acyclic edge-coloring." ] }
1504.02044
2201340609
The Lovasz Local Lemma is a seminal result in probabilistic combinatorics. It gives a sufficient condition on a probability space and a collection of events for the existence of an outcome that simultaneously avoids all of those events. Finding such an outcome by an efficient algorithm has been an active research topic for decades. Breakthrough work of Moser and Tardos (2009) presented an efficient algorithm for a general setting primarily characterized by a product structure on the probability space. In this work we present an efficient algorithm for a much more general setting. Our main assumption is that there exist certain functions, called resampling oracles, that can be invoked to address the undesired occurrence of the events. We show that, in all scenarios to which the original Lovasz Local Lemma applies, there exist resampling oracles, although they are not necessarily efficient. Nevertheless, for essentially all known applications of the Lovasz Local Lemma and its generalizations, we have designed efficient resampling oracles. As applications of these techniques, we present new results for packings of Latin transversals, rainbow matchings and rainbow spanning trees.
@cite_24 show that a variant of Moser's algorithm gives an algorithmic proof in the variable model of the symmetric LLL. While this result is relatively limited when compared to the results above, their analysis is a clear example of forward-looking combinatorial analysis. Whereas Moser and Tardos use a argument to find witness trees in the algorithm's ``log'', analyze a structure: the tree of resampled events and their dependencies, looking forward in time. This viewpoint seems more natural and suitable for extensions.
{ "cite_N": [ "@cite_24" ], "mid": [ "1810595280" ], "abstract": [ "The algorithm for Lovasz Local Lemma by Moser and Tardos gives a constructive way to prove the existence of combinatorial objects that satisfy a system of constraints. We present an alternative probabilistic analysis of the algorithm that does not involve reconstructing the history of the algorithm from the witness tree. We apply our technique to improve the best known upper bound to acyclic chromatic index. Specifically we show that a graph with maximum degree Δ has an acyclic proper edge coloring with at most ⌈3.74(Δ − 1)⌉ + 1 colors, whereas the previously known best bound was 4(Δ − 1). The same technique is also applied to improve corresponding bounds for graphs with bounded girth. An interesting aspect of this application is that the probability of the \"undesirable\" events do not have a uniform upper bound, i.e. it constitutes a case of the asymmetric Lovasz Local Lemma." ] }
1504.02044
2201340609
The Lovasz Local Lemma is a seminal result in probabilistic combinatorics. It gives a sufficient condition on a probability space and a collection of events for the existence of an outcome that simultaneously avoids all of those events. Finding such an outcome by an efficient algorithm has been an active research topic for decades. Breakthrough work of Moser and Tardos (2009) presented an efficient algorithm for a general setting primarily characterized by a product structure on the probability space. In this work we present an efficient algorithm for a much more general setting. Our main assumption is that there exist certain functions, called resampling oracles, that can be invoked to address the undesired occurrence of the events. We show that, in all scenarios to which the original Lovasz Local Lemma applies, there exist resampling oracles, although they are not necessarily efficient. Nevertheless, for essentially all known applications of the Lovasz Local Lemma and its generalizations, we have designed efficient resampling oracles. As applications of these techniques, we present new results for packings of Latin transversals, rainbow matchings and rainbow spanning trees.
Our approach can be roughly described as forward-looking analysis with a careful modification of the Moser-Tardos algorithm, formulated in the framework of . Our main conceptual contribution is the simple definition of the , which allows the resamplings to be readily incorporated into the forward-looking analysis. Our modification of the Moser-Tardos algorithm is designed to combine this analysis with the technology of ``stable set sequences'' @cite_9 , defined in stable-set-sequences , which allows us to accommodate various LLL criteria, including Shearer's criterion. This plays a fundamental role in the full proof of LLL-tight-result .
{ "cite_N": [ "@cite_9" ], "mid": [ "2066969286" ], "abstract": [ "Beck's early work [3] gave an efficient version of the Lovasz Local Lemma(LLL) with significant compromise in the parameters. Following several improvements [1,7,4,13], Moser [8], and Moser and Tardos [9] obtained asymptotically optimal results in terms of the maximal degree. For a fixed dependency graph G the exact criterion under which LLL applies is given by Shearer in [12]. For a dependency structure G, let LO(G) be the set of those probability assignments to the nodes of G for which the Lovasz Local Lemma holds. We show that: Both the sequential and parallel ersions of the Moser-Tardos algorithm are efficient up to the Shearer's bound, by giving a tighter analysis. We also prove that, whenever p ∈ LO(G) (1+e), the expected running times of the sequential and parallel versions are at most n e and O(1 e log n e), the later when e" ] }
1504.02044
2201340609
The Lovasz Local Lemma is a seminal result in probabilistic combinatorics. It gives a sufficient condition on a probability space and a collection of events for the existence of an outcome that simultaneously avoids all of those events. Finding such an outcome by an efficient algorithm has been an active research topic for decades. Breakthrough work of Moser and Tardos (2009) presented an efficient algorithm for a general setting primarily characterized by a product structure on the probability space. In this work we present an efficient algorithm for a much more general setting. Our main assumption is that there exist certain functions, called resampling oracles, that can be invoked to address the undesired occurrence of the events. We show that, in all scenarios to which the original Lovasz Local Lemma applies, there exist resampling oracles, although they are not necessarily efficient. Nevertheless, for essentially all known applications of the Lovasz Local Lemma and its generalizations, we have designed efficient resampling oracles. As applications of these techniques, we present new results for packings of Latin transversals, rainbow matchings and rainbow spanning trees.
Our second contribution is a technical idea concerning slack in the LLL criteria. This idea is a perfectly valid statement regarding the existential LLL as well, although we will exploit it algorithmically. One drawback of the forward-looking analysis is that it naturally leads to an exponential bound on the number of resamplings, unless there is some slack in the LLL criterion; this same issue arises in @cite_5 @cite_24 . Our idea eliminates the need for slack in the and criteria. We prove that, even if or are tight, we can instead perform our analysis using Shearer's criterion, which is never tight because it defines an open set. For example, consider the familiar case of LLL , and suppose that holds with equality, i.e., @math for all @math . We show that the conclusion of the LLL remains true even if each event @math actually had the larger probability @math . The proof of this fact crucially uses Shearer's criterion and it does not seem to follow from more elementary tools @cite_2 @cite_18 .
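For concreteness, the symmetric and asymmetric LLL criteria discussed here can be checked numerically. The helper names below are ours; the symmetric example sits just below and just above the threshold p = 1/(e(d+1)) ≈ 0.092 for d = 3, the tight point at which the slack issue arises.

```python
import math

def symmetric_lll_holds(p, d):
    """Symmetric LLL: events of probability <= p, each depending on at
    most d others, can all be avoided whenever e * p * (d + 1) <= 1."""
    return math.e * p * (d + 1) <= 1.0

def asymmetric_lll_holds(probs, neighbors, x):
    """General LLL: probs[A] <= x[A] * prod over B in Gamma(A) of (1 - x[B])."""
    return all(
        probs[a] <= x[a] * math.prod(1.0 - x[b] for b in neighbors[a])
        for a in range(len(probs))
    )

# Threshold for d = 3 is 1 / (4e) ~= 0.0920: 0.09 satisfies the criterion,
# 0.10 does not.
assert symmetric_lll_holds(0.09, 3)
assert not symmetric_lll_holds(0.10, 3)

# A tiny asymmetric instance: two mutually dependent events.
# 0.1 <= 0.2 * (1 - 0.2) = 0.16, so the criterion holds for both.
assert asymmetric_lll_holds([0.1, 0.1], {0: [1], 1: [0]}, [0.2, 0.2])
```

The technical point of this section is precisely about what happens when these inequalities hold with equality: Shearer's criterion, being an open condition, still leaves room to run the analysis.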
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_18", "@cite_2" ], "mid": [ "1810595280", "2078116611", "2051336598", "131018274" ], "abstract": [ "The algorithm for Lovasz Local Lemma by Moser and Tardos gives a constructive way to prove the existence of combinatorial objects that satisfy a system of constraints. We present an alternative probabilistic analysis of the algorithm that does not involve reconstructing the history of the algorithm from the witness tree. We apply our technique to improve the best known upper bound to acyclic chromatic index. Specifically we show that a graph with maximum degree Δ has an acyclic proper edge coloring with at most ⌈3.74(Δ − 1)⌉ + 1 colors, whereas the previously known best bound was 4(Δ − 1). The same technique is also applied to improve corresponding bounds for graphs with bounded girth. An interesting aspect of this application is that the probability of the \"undesirable\" events do not have a uniform upper bound, i.e. it constitutes a case of the asymmetric Lovasz Local Lemma.", "A folklore result uses the Lovasz local lemma to analyze the discrepancy of hypergraphs with bounded degree and edge size. We generalize this result to the context of real matrices with bounded row and column sums.", "Abstract A probability theorem, due to Lovasz, is used to derive lower bounds for various Ramsey functions. A short proof of the known result R(3, t) ⩾ ct 2 ( ln t) 2 is given.", "" ] }
1504.02429
1621220598
A finite ergodic Markov chain is said to exhibit cutoff if its distance to stationarity remains close to 1 over a certain number of iterations and then abruptly drops to near 0 on a much shorter time scale. Discovered in the context of card shuffling (Aldous-Diaconis, 1986), this phenomenon is now believed to be rather typical among fast mixing Markov chains. Yet, establishing it rigorously often requires a challengingly detailed understanding of the underlying chain. Here we consider non-backtracking random walks on random graphs with a given degree sequence. Under a general sparsity condition, we establish the cutoff phenomenon, determine its precise window, and prove that the (suitably rescaled) cutoff profile approaches a remarkably simple, universal shape.
In the non-regular case, however, the tight correspondence between the and the breaks down, and there seems to be no direct way of transferring our main result to the . We note that the latter should exhibit cutoff since the product condition holds, as can be seen from the fact that the of sparse random graphs with a given degree sequence remains bounded away from @math (see ). Confirming this constitutes a challenging open problem. In particular, it would be interesting to see whether the still mixes faster than the . To the best of our knowledge, no precise conjectural expression for the mixing time of the has been put forward. (During the finalization of the manuscript, a solution to this problem was announced @cite_0 .)
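The non-backtracking walk itself is easy to study exactly on small instances: evolve the walk's distribution over directed edges and track the total-variation distance to the uniform distribution, which is stationary on a regular graph. The complete graph K4 below is only a toy — cutoff proper emerges on large sparse graphs — but the decay of the distance profile is already visible.

```python
def nbrw_tv_profile(adj, start, steps):
    """Exactly evolve the distribution of a non-backtracking random walk
    on the directed edges of `adj`, returning the total-variation distance
    to the uniform distribution after each step."""
    edges = [(u, v) for u in adj for v in adj[u]]
    idx = {e: i for i, e in enumerate(edges)}
    dist = [0.0] * len(edges)
    dist[idx[start]] = 1.0
    uniform = 1.0 / len(edges)
    profile = []
    for _ in range(steps):
        nxt = [0.0] * len(edges)
        for (u, v), i in idx.items():
            outs = [w for w in adj[v] if w != u]      # never step back to u
            for w in outs:
                nxt[idx[(v, w)]] += dist[i] / len(outs)
        dist = nxt
        profile.append(0.5 * sum(abs(q - uniform) for q in dist))
    return profile

# K4: 3-regular, 12 directed edges; start the walk on the edge (0, 1).
adj = {u: [v for v in range(4) if v != u] for u in range(4)}
profile = nbrw_tv_profile(adj, (0, 1), 8)
assert profile[-1] < profile[0]                       # distance decays
```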
{ "cite_N": [ "@cite_0" ], "mid": [ "1922672875" ], "abstract": [ "We prove a conjecture raised by the work of Diaconis and Shahshahani (1981) about the mixing time of random walks on the permutation group induced by a given conjugacy class. To do this we exploit a connection with coalescence and fragmentation processes and control the Kantorovitch distance by using a variant of a coupling due to Oded Schramm. Recasting our proof in the language of Ricci curvature, our proof establishes the occurrence of a phase transition, which takes the following form in the case of random transpositions: at time @math , the curvature is asymptotically zero for @math and is strictly positive for @math ." ] }