| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1209.2185
|
2949774333
|
We present a fast algorithm for approximate Canonical Correlation Analysis (CCA). Given a pair of tall-and-thin matrices, the proposed algorithm first employs a randomized dimensionality reduction transform to reduce the size of the input matrices, and then applies any CCA algorithm to the new pair of matrices. The algorithm computes an approximate CCA to the original pair of matrices with provable guarantees, while requiring asymptotically fewer operations than the state-of-the-art exact algorithms.
|
The authors of @cite_26 suggest a two-stage approach that involves first solving a least-squares problem, and then using the solution to reduce the problem size. However, their technique involves explicitly factoring one of the two matrices, which takes cubic time. Their method is therefore especially effective when one of the two matrices has significantly fewer columns than the other. When the two matrices have about the same number of columns, there is no asymptotic performance gain. In contrast, our method is sub-cubic in every case.
|
{
"cite_N": [
"@cite_26"
],
"mid": [
"2020111956"
],
"abstract": [
"Dimensionality reduction plays an important role in many data mining applications involving high-dimensional data. Many existing dimensionality reduction techniques can be formulated as a generalized eigenvalue problem, which does not scale to large-size problems. Prior work transforms the generalized eigenvalue problem into an equivalent least squares formulation, which can then be solved efficiently. However, the equivalence relationship only holds under certain assumptions without regularization, which severely limits their applicability in practice. In this paper, an efficient two-stage approach is proposed to solve a class of dimensionality reduction techniques, including Canonical Correlation Analysis, Orthonormal Partial Least Squares, linear Discriminant Analysis, and Hypergraph Spectral Learning. The proposed two-stage approach scales linearly in terms of both the sample size and data dimensionality. The main contributions of this paper include (1) we rigorously establish the equivalence relationship between the proposed two-stage approach and the original formulation without any assumption; and (2) we show that the equivalence relationship still holds in the regularization setting. We have conducted extensive experiments using both synthetic and real-world data sets. Our experimental results confirm the equivalence relationship established in this paper. Results also demonstrate the scalability of the proposed two-stage approach."
]
}
|
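The sketch-and-solve recipe described in the row above can be illustrated in a few lines. This is a minimal sketch, assuming a plain Gaussian sketching matrix in place of the paper's fast randomized transform, with a QR/SVD routine standing in for "any CCA algorithm"; the matrix sizes and sketch dimension `k` are invented for the example.

```python
import numpy as np

def cca_correlations(A, B):
    """Canonical correlations via thin QR and the SVD of Q_A^T Q_B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False)

def sketched_cca(A, B, k, rng):
    """Sketch-and-solve: compress n rows down to k with a random sketch,
    then run an exact CCA routine on the smaller pair of matrices."""
    n = A.shape[0]
    S = rng.standard_normal((k, n)) / np.sqrt(k)  # Gaussian sketching matrix
    return cca_correlations(S @ A, S @ B)

rng = np.random.default_rng(0)
n, d1, d2 = 5000, 10, 8
A = rng.standard_normal((n, d1))
B = np.hstack([A[:, :4], rng.standard_normal((n, d2 - 4))])  # shared subspace
exact = cca_correlations(A, B)
approx = sketched_cca(A, B, k=400, rng=rng)
print(np.round(exact[:4], 3), np.round(approx[:4], 3))
```

Because the two matrices here share an exact four-dimensional subspace, the top four canonical correlations equal 1 for both the exact and the sketched computation; in general the sketched values only approximate the exact ones.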
1209.2400
|
37757906
|
This paper defines a method for bilingual lexicon extraction in the biomedical domain from comparable corpora. The method is based on compositional translation and exploits morpheme-level translation equivalences. It can generate translations for a large variety of morphologically constructed words and can also generate 'fertile' translations. We show that fertile translations increase the overall quality of the extracted lexicon for English-to-French translation.
|
We chose to work in the framework of compositionality-based translation because: (i) compositional terms form more than 60% of the terms in technical domains; (ii) compositionality-based methods have been shown to clearly outperform context-based ones for the translation of terms with compositional meaning @cite_13 ; and (iii) we believe that compositionality-based methods offer the opportunity to generate fertile translations if combined with a morphology-based approach.
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"2054890041"
],
"abstract": [
"The automatic compilation of bilingual lists of terms from specialized comparable corpora using lexical alignment has been successful for single-word terms (SWTs), but remains disappointing for multi-word terms (MWTs). The low frequency and the variability of the syntactic structures of MWTs in the source and the target languages are the main reported problems. This paper defines a general framework dedicated to the lexical alignment of MWTs from comparable corpora that includes a compositional translation process and the standard lexical context analysis. The compositional method which is based on the translation of lexical items being restrictive, we introduce an extended compositional method that bridges the gap between MWTs of different syntactic structures through morphological links. We experimented with the two compositional methods for the French–Japanese alignment task. The results show a significant improvement for the translation of MWTs and advocate further morphological analysis in lexical alignment."
]
}
|
1209.2400
|
37757906
|
This paper defines a method for bilingual lexicon extraction in the biomedical domain from comparable corpora. The method is based on compositional translation and exploits morpheme-level translation equivalences. It can generate translations for a large variety of morphologically constructed words and can also generate 'fertile' translations. We show that fertile translations increase the overall quality of the extracted lexicon for English-to-French translation.
|
Lexical compositional translation @cite_2 @cite_9 @cite_11 @cite_13 deals with multi-word-term to multi-word-term alignment and uses lexical words, as opposed to grammatical words (prepositions, determiners, etc.), as atomic components: a multi-word term is translated by translating each of its lexical components through dictionary lookup. Recomposition may be done by permuting the translated components @cite_13 or with translation patterns @cite_9 .
|
{
"cite_N": [
"@cite_9",
"@cite_11",
"@cite_13",
"@cite_2"
],
"mid": [
"1972421427",
"204594949",
"2054890041",
"1578564752"
],
"abstract": [
"",
"We propose a method for compiling bilingual terminologies of multi-word terms (MWTs) for given translation pairs of seed terms. Traditional methods for bilingual terminology compilation exploit parallel texts, while the more recent ones have focused on comparable corpora. We use bilingual corpora collected from the web and tailor made for the seed terms. For each language, we extract from the corpus a set of MWTs pertaining to the seed’s semantic domain, and use a compositional method to align MWTs from both sets. We increase the coverage of our system by using thesauri and by applying a bootstrap method. Experimental results show high precision and indicate promising prospects for future developments.",
"The automatic compilation of bilingual lists of terms from specialized comparable corpora using lexical alignment has been successful for single-word terms (SWTs), but remains disappointing for multi-word terms (MWTs). The low frequency and the variability of the syntactic structures of MWTs in the source and the target languages are the main reported problems. This paper defines a general framework dedicated to the lexical alignment of MWTs from comparable corpora that includes a compositional translation process and the standard lexical context analysis. The compositional method which is based on the translation of lexical items being restrictive, we introduce an extended compositional method that bridges the gap between MWTs of different syntactic structures through morphological links. We experimented with the two compositional methods for the French–Japanese alignment task. The results show a significant improvement for the translation of MWTs and advocate further morphological analysis in lexical alignment.",
"The WWW is two orders of magnitude larger than the largest corpora. Although noisy, web text presents language as it is used, and statistics derived from the Web can have practical uses in many NLP applications. For this reason, the WWW should be seen and studied as any other computationally available linguistic resource. In this article, we illustrate this by showing that an Example-Based approach to lexical choice for machine translation can use the Web as an adequate and free resource."
]
}
|
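The translate-then-recompose scheme described in this row's related-work text can be shown on a toy example. The mini-dictionary and the target-language term list below are invented for the illustration; a real system would use a bilingual dictionary and terms extracted from a comparable corpus.

```python
from itertools import permutations, product

# Toy bilingual dictionary and target-language term list (invented).
dictionary = {"heart": ["coeur"], "rate": ["frequence", "taux"]}
target_terms = {"frequence cardiaque", "frequence coeur"}

def compositional_translations(components, dictionary):
    """Translate each lexical component by dictionary lookup, then
    recompose candidates by permuting the translated components."""
    candidates = set()
    for choice in product(*(dictionary.get(c, []) for c in components)):
        for perm in permutations(choice):
            candidates.add(" ".join(perm))
    return candidates

cands = compositional_translations(["heart", "rate"], dictionary)
attested = cands & target_terms   # keep candidates attested in the target corpus
print(sorted(cands), sorted(attested))
```

Permutation handles the word-order change between languages, as in @cite_13 ; pattern-based recomposition ( @cite_9 ) would replace the permutation step with language-pair-specific templates.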
1209.1873
|
2952594493
|
Stochastic Gradient Descent (SGD) has become popular for solving large-scale supervised machine learning optimization problems such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked a good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoys strong theoretical guarantees that are comparable to or better than those of SGD. This analysis justifies the effectiveness of SDCA for practical applications.
|
DCA methods are related to decomposition methods @cite_4 @cite_17 . While several experiments have shown that decomposition methods are inferior to SGD for large-scale SVM @cite_1 @cite_9 , @cite_11 recently argued that SDCA outperforms the SGD approach in some regimes. For example, this occurs when we need relatively high solution accuracy, so that either SGD or SDCA has to be run for more than a few passes over the data.
|
{
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_1",
"@cite_17",
"@cite_11"
],
"mid": [
"1512098439",
"2113651538",
"2142623206",
"1574862351",
"2165966284"
],
"abstract": [
"This chapter describes a new algorithm for training Support Vector Machines: Sequential Minimal Optimization, or SMO. Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because large matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while a standard projected conjugate gradient (PCG) chunking algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. For the MNIST database, SMO is as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be more than 1000 times faster than the PCG chunking algorithm.",
"This contribution develops a theoretical framework that takes into account the effect of approximate optimization on learning algorithms. The analysis shows distinct tradeoffs for the case of small-scale and large-scale learning problems. Small-scale learning problems are subject to the usual approximation-estimation tradeoff. Large-scale learning problems are subject to a qualitatively different tradeoff involving the computational complexity of the underlying optimization algorithms in non-trivial ways.",
"We describe and analyze a simple and effective iterative algorithm for solving the optimization problem cast by Support Vector Machines (SVM). Our method alternates between stochastic gradient descent steps and projection steps. We prove that the number of iterations required to obtain a solution of accuracy e is O(1 e). In contrast, previous analyses of stochastic gradient descent methods require Ω (1 e2) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1 λ, where λ is the regularization parameter of SVM. For a linear kernel, the total run-time of our method is O (d (λe)), where d is a bound on the number of non-zero features in each example. Since the run-time does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach can seamlessly be adapted to employ non-linear kernels while working solely on the primal objective function. We demonstrate the efficiency and applicability of our approach by conducting experiments on large text classification problems, comparing our solver to existing state-of-the-art SVM solvers. For example, it takes less than 5 seconds for our solver to converge when solving a text classification problem from Reuters Corpus Volume 1 (RCV1) with 800,000 training examples.",
"",
"In many applications, data appear with a huge number of instances as well as features. Linear Support Vector Machines (SVM) is one of the most popular tools to deal with such large-scale sparse data. This paper presents a novel dual coordinate descent method for linear SVM with L1-and L2-loss functions. The proposed method is simple and reaches an e-accurate solution in O(log(1 e)) iterations. Experiments indicate that our method is much faster than state of the art solvers such as Pegasos, TRON, SVMperf, and a recent primal coordinate descent implementation."
]
}
|
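For concreteness, the SDCA procedure discussed in this row can be sketched for the L2-regularized hinge-loss SVM. The closed-form single-coordinate update below is the standard one for this loss; the data and hyperparameters are invented, and this is an illustrative sketch rather than the paper's implementation.

```python
import numpy as np

def sdca_hinge(X, y, lam=0.01, epochs=20, seed=0):
    """Stochastic Dual Coordinate Ascent for the hinge-loss SVM.
    Maintains alpha_i in [0, 1] and w = (1 / (lam * n)) * sum_i alpha_i y_i x_i."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha, w = np.zeros(n), np.zeros(d)
    sqnorms = (X ** 2).sum(axis=1)
    for _ in range(epochs):
        for i in rng.permutation(n):   # random order: the "stochastic" part
            # Exact maximization of the dual over the single coordinate alpha_i.
            grad = 1.0 - y[i] * (X[i] @ w)
            delta = np.clip(grad * lam * n / sqnorms[i], -alpha[i], 1.0 - alpha[i])
            alpha[i] += delta
            w += delta * y[i] * X[i] / (lam * n)
    return w

# Toy separable data (invented).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.array([1.0] * 50 + [-1.0] * 50)
w = sdca_hinge(X, y)
acc = float(np.mean(np.sign(X @ w) == y))
print("train accuracy:", acc)
```

Each pass touches every example once, so the per-pass cost matches SGD; the difference is that each SDCA step solves its one-dimensional dual subproblem exactly.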
1209.1873
|
2952594493
|
Stochastic Gradient Descent (SGD) has become popular for solving large-scale supervised machine learning optimization problems such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked a good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoys strong theoretical guarantees that are comparable to or better than those of SGD. This analysis justifies the effectiveness of SDCA for practical applications.
|
However, our theoretical understanding of SDCA is not satisfactory. Several authors (e.g. @cite_5 @cite_11 ) proved a linear convergence rate for solving SVM with DCA (not necessarily stochastic). The basic technique is to adapt the linear convergence of coordinate ascent that was established by @cite_2 . Linear convergence means that the method achieves a rate of @math after @math passes over the data, where @math . This convergence result tells us that after an unspecified number of iterations, the algorithm converges faster to the optimal solution than SGD.
|
{
"cite_N": [
"@cite_5",
"@cite_2",
"@cite_11"
],
"mid": [
"2100038678",
"2013850411",
"2165966284"
],
"abstract": [
"Successive overrelaxation (SOR) for symmetric linear complementarity problems and quadratic programs is used to train a support vector machine (SVM) for discriminating between the elements of two massive datasets, each with millions of points. Because SOR handles one point at a time, similar to Platt's sequential minimal optimization (SMO) algorithm (1999) which handles two constraints at a time and Joachims' SVM sup light (1998) which handles a small number of points at a time, SOR can process very large datasets that need not reside in memory. The algorithm converges linearly to a solution. Encouraging numerical results are presented on datasets with up to 10 000 000 points. Such massive discrimination problems cannot be processed by conventional linear or quadratic programming methods, and to our knowledge have not been solved by other methods. On smaller problems, SOR was faster than SVM sup light and comparable or faster than SMO.",
"The coordinate descent method enjoys a long history in convex differentiable minimization. Surprisingly, very little is known about the convergence of the iterates generated by this method. Convergence typically requires restrictive assumptions such as that the cost function has bounded level sets and is in some sense strictly convex. In a recent work, Luo and Tseng showed that the iterates are convergent for the symmetric monotone linear complementarity problem, for which the cost function is convex quadratic, but not necessarily strictly convex, and does not necessarily have bounded level sets. In this paper, we extend these results to problems for which the cost function is the composition of an affine mapping with a strictly convex function which is twice differentiable in its effective domain. In addition, we show that the convergence is at least linear. As a consequence of this result, we obtain, for the first time, that the dual iterates generated by a number of existing methods for matrix balancing and entropy optimization are linearly convergent.",
"In many applications, data appear with a huge number of instances as well as features. Linear Support Vector Machines (SVM) is one of the most popular tools to deal with such large-scale sparse data. This paper presents a novel dual coordinate descent method for linear SVM with L1-and L2-loss functions. The proposed method is simple and reaches an e-accurate solution in O(log(1 e)) iterations. Experiments indicate that our method is much faster than state of the art solvers such as Pegasos, TRON, SVMperf, and a recent primal coordinate descent implementation."
]
}
|
1209.1873
|
2952594493
|
Stochastic Gradient Descent (SGD) has become popular for solving large-scale supervised machine learning optimization problems such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked a good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoys strong theoretical guarantees that are comparable to or better than those of SGD. This analysis justifies the effectiveness of SDCA for practical applications.
|
However, there are two problems with this analysis. First, the linear convergence parameter, @math , may be very close to zero, and the initial unspecified number of iterations might be very large. In fact, while the result of @cite_2 does not explicitly specify @math , an examination of their proof shows that @math is proportional to the smallest nonzero eigenvalue of @math , where @math is the @math data matrix whose @math -th row is the @math -th data point @math . For example, if two data points @math become closer and closer, then @math . This dependency is problematic in the data-laden regime, and we note that such a dependency does not occur in the analysis of SGD.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2013850411"
],
"abstract": [
"The coordinate descent method enjoys a long history in convex differentiable minimization. Surprisingly, very little is known about the convergence of the iterates generated by this method. Convergence typically requires restrictive assumptions such as that the cost function has bounded level sets and is in some sense strictly convex. In a recent work, Luo and Tseng showed that the iterates are convergent for the symmetric monotone linear complementarity problem, for which the cost function is convex quadratic, but not necessarily strictly convex, and does not necessarily have bounded level sets. In this paper, we extend these results to problems for which the cost function is the composition of an affine mapping with a strictly convex function which is twice differentiable in its effective domain. In addition, we show that the convergence is at least linear. As a consequence of this result, we obtain, for the first time, that the dual iterates generated by a number of existing methods for matrix balancing and entropy optimization are linearly convergent."
]
}
|
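The eigenvalue degeneration described in this row's text is easy to verify numerically: as two data points move toward each other, the smallest nonzero eigenvalue of the Gram matrix shrinks like the squared distance between them. The toy 2-D data below is invented for the illustration.

```python
import numpy as np

def smallest_nonzero_eig(X, tol=1e-12):
    """Smallest nonzero eigenvalue of X X^T, i.e. the smallest
    nonzero squared singular value of the data matrix X."""
    s2 = np.linalg.svd(X, compute_uv=False) ** 2
    return float(s2[s2 > tol].min())

for eps in (0.5, 1e-2, 1e-3):
    X = np.array([[1.0, 0.0], [1.0, eps]])   # two points a distance eps apart
    print(eps, smallest_nonzero_eig(X))      # shrinks roughly like eps**2 / 2
```

A linear-convergence constant proportional to this eigenvalue therefore becomes arbitrarily small on data sets containing near-duplicate points.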
1209.1873
|
2952594493
|
Stochastic Gradient Descent (SGD) has become popular for solving large-scale supervised machine learning optimization problems such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked a good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoys strong theoretical guarantees that are comparable to or better than those of SGD. This analysis justifies the effectiveness of SDCA for practical applications.
|
In addition, @cite_0 , and later @cite_7 , analyzed randomized versions of coordinate descent for unconstrained and constrained minimization of smooth convex functions. HsiehChLiKeSu08 [Theorem 4] applied these results to the dual SVM formulation. However, the resulting convergence rate is @math , which is, as mentioned before, inferior to the results we obtain here. Furthermore, neither of these analyses can be applied to logistic regression due to their reliance on the smoothness of the dual objective function, which is not satisfied for the dual formulation of logistic regression. We shall also point out again that all of these bounds are for the dual sub-optimality, while, as mentioned before, we are interested in the primal sub-optimality.
|
{
"cite_N": [
"@cite_0",
"@cite_7"
],
"mid": [
"2154685722",
"2095984592"
],
"abstract": [
"We describe and analyze two stochastic methods for l1 regularized loss minimization problems, such as the Lasso. The first method updates the weight of a single feature at each iteration while the second method updates the entire weight vector but only uses a single training example at each iteration. In both methods, the choice of feature or example is uniformly at random. Our theoretical runtime analysis suggests that the stochastic methods should outperform state-of-the-art deterministic approaches, including their deterministic counterparts, when the size of the problem is large. We demonstrate the advantage of stochastic methods by experimenting with synthetic and natural data sets.",
"In this paper we propose new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we propose to apply an optimization technique based on random partial update of decision variables. For these methods, we prove the global estimates for the rate of convergence. Surprisingly enough, for certain classes of objective functions, our results are better than the standard worst-case bounds for deterministic algorithms. We present constrained and unconstrained versions of the method, and its accelerated variant. Our numerical test confirms a high efficiency of this technique on problems of very big size."
]
}
|
1209.1873
|
2952594493
|
Stochastic Gradient Descent (SGD) has become popular for solving large-scale supervised machine learning optimization problems such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked a good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoys strong theoretical guarantees that are comparable to or better than those of SGD. This analysis justifies the effectiveness of SDCA for practical applications.
|
In this paper we derive new bounds on the duality gap (hence, they also imply bounds on the primal sub-optimality) of SDCA. These bounds are superior to earlier results, though our analysis holds only for randomized (stochastic) dual coordinate ascent. As we will see from our experiments, randomization is important in practice. In fact, the practical convergence behavior of (non-stochastic) cyclic dual coordinate ascent (even with a random ordering of the data) can be slower than our theoretical bounds for SDCA, and thus cyclic DCA is inferior to SDCA. In this regard, we note that some of the earlier analyses, such as @cite_2 , can be applied both to stochastic and to cyclic dual coordinate ascent methods with similar results. This means that their analysis, which can be no better than the behavior of cyclic dual coordinate ascent, is inferior to our analysis.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2013850411"
],
"abstract": [
"The coordinate descent method enjoys a long history in convex differentiable minimization. Surprisingly, very little is known about the convergence of the iterates generated by this method. Convergence typically requires restrictive assumptions such as that the cost function has bounded level sets and is in some sense strictly convex. In a recent work, Luo and Tseng showed that the iterates are convergent for the symmetric monotone linear complementarity problem, for which the cost function is convex quadratic, but not necessarily strictly convex, and does not necessarily have bounded level sets. In this paper, we extend these results to problems for which the cost function is the composition of an affine mapping with a strictly convex function which is twice differentiable in its effective domain. In addition, we show that the convergence is at least linear. As a consequence of this result, we obtain, for the first time, that the dual iterates generated by a number of existing methods for matrix balancing and entropy optimization are linearly convergent."
]
}
|
1209.1873
|
2952594493
|
Stochastic Gradient Descent (SGD) has become popular for solving large-scale supervised machine learning optimization problems such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked a good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoys strong theoretical guarantees that are comparable to or better than those of SGD. This analysis justifies the effectiveness of SDCA for practical applications.
|
Recently, @cite_12 derived a stochastic coordinate ascent algorithm for structural SVM based on the Frank-Wolfe algorithm. Specializing one variant of their algorithm to binary classification with the hinge loss yields the SDCA algorithm for the hinge loss. The rate of convergence @cite_12 derived for their algorithm is the same as the rate we derive for SDCA with a Lipschitz loss function.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"1877183374"
],
"abstract": [
"We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full Frank-Wolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. However, unlike stochastic subgradient methods, the block-coordinate Frank-Wolfe algorithm allows us to compute the optimal step-size and yields a computable duality gap guarantee. Our experiments indicate that this simple algorithm outperforms competing structural SVM solvers."
]
}
|
1209.1873
|
2952594493
|
Stochastic Gradient Descent (SGD) has become popular for solving large-scale supervised machine learning optimization problems such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked a good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoys strong theoretical guarantees that are comparable to or better than those of SGD. This analysis justifies the effectiveness of SDCA for practical applications.
|
Another relevant approach is the Stochastic Average Gradient (SAG), which has recently been analyzed in @cite_13 . There, a convergence rate of @math is shown for the case of smooth losses, assuming that @math . This matches our guarantee in the regime @math .
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"2120350480"
],
"abstract": [
"We propose a new stochastic gradient method for optimizing the sum of a nite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training objective and reducing the testing objective quickly."
]
}
|
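The gradient-memory idea behind SAG can be sketched in a few lines: keep the most recently seen gradient of every example and step along the average of the stored gradients. The least-squares objective, step size, and data below are invented for the illustration.

```python
import numpy as np

def sag_least_squares(X, y, lr=0.01, epochs=100, seed=0):
    """Stochastic Average Gradient on (1/n) * sum_i 0.5 * (x_i . w - y_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    grads = np.zeros((n, d))   # memory: last gradient seen for each example
    gsum = np.zeros(d)         # running sum of the stored gradients
    for _ in range(epochs * n):
        i = rng.integers(n)
        g = (X[i] @ w - y[i]) * X[i]   # fresh gradient for example i only
        gsum += g - grads[i]           # swap it into the running sum
        grads[i] = g
        w -= lr * gsum / n             # step along the average stored gradient
    return w

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                 # noiseless targets, so the optimum is w_true
w = sag_least_squares(X, y)
print(np.round(w, 3))
```

The per-example gradient memory (O(n) vectors here) is what buys the linear rate for smooth strongly convex sums; SDCA, discussed in this row, achieves comparable rates while storing only one dual scalar per example.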
1209.1797
|
2166101296
|
XML transactions are used in many information systems to store data and interact with other systems. Abnormal transactions, the result of either an ongoing cyber attack or the actions of a benign user, can potentially harm the interacting systems and are therefore regarded as a threat. In this paper we address the problem of anomaly detection and localization in XML transactions using machine learning techniques. We present a new XML anomaly detection framework, XML-AD. Within this framework, an automatic method for extracting features from XML transactions was developed, as well as a practical method for transforming XML features into vectors of fixed dimensionality. With these two methods in place, the XML-AD framework makes it possible to utilize general learning algorithms for anomaly detection. Central to the functioning of the framework is a novel multi-univariate anomaly detection algorithm, ADIFA. The framework was evaluated on four XML transactions datasets, captured from real information systems, in which it achieved over an 89% true-positive detection rate with less than a 0.2% false-positive rate.
|
Premalatha and Natarajan @cite_6 mine negative association rules @cite_23 , which describe relationships between item sets in which the occurrence of some item sets is indicated by the absence of others. The chi-square test is used to identify independent attributes, and anomalies are identified as negative association rules whose confidence value is greater than a minimum confidence threshold. Unfortunately, domain knowledge of the data sets must be incorporated into the filter rules, a step that does not contribute to the detection process.
|
{
"cite_N": [
"@cite_23",
"@cite_6"
],
"mid": [
"2026562765",
"2045006855"
],
"abstract": [
"This paper presents an efficient method for mining both positive and negative association rules in databases. The method extends traditional associations to include association rules of forms A ⇒ ¬ B, ¬ A ⇒ B, and ¬ A ⇒ ¬ B, which indicate negative associations between itemsets. With a pruning strategy and an interestingness measure, our method scales to large databases. The method has been evaluated using both synthetic and real-world databases, and our experimental results demonstrate its effectiveness and efficiency.",
"Anomaly detection is the double purpose of discovering interesting exceptions and identifying incorrect data in huge amounts of data. Since anomalies are rare events, which violate the frequent relationships among data. Normally anomaly detection builds models of normal behavior and automatically detects significant deviations from it. The proposed system detects the anomalies in nested XML documents by independency between data. The negative association rules and the chi-square test for independency are applied on the data and a model of abnormal behavior is built as a signature profile. This signature profile can be used to identify the anomalies in the system. The proposed system limits the unnecessary rules for detecting anomalies."
]
}
|
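The negative-rule criterion described above reduces to comparing rule confidences against a threshold, where a negative rule such as not-A => B is scored on the transactions that do not contain A. A minimal sketch with an invented transaction log:

```python
def confidence(transactions, antecedent, consequent, neg_antecedent=False):
    """Confidence of A => B, or of (not A) => B when neg_antecedent is True."""
    covered = [t for t in transactions
               if (antecedent in t) != neg_antecedent]  # transactions matching A (or not-A)
    if not covered:
        return 0.0
    return sum(consequent in t for t in covered) / len(covered)

# Toy transaction log (invented).
T = [{"a", "b"}, {"a", "b"}, {"b", "c"}, {"c"}, {"b", "c"}]
print(confidence(T, "a", "b"))         # a => b
print(confidence(T, "a", "c", True))   # not-a => c, a negative rule
```

A transaction would then be flagged as anomalous when it fires a negative rule whose confidence exceeds the minimum confidence threshold.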
1209.1797
|
2166101296
|
XML transactions are used in many information systems to store data and interact with other systems. Abnormal transactions, the result of either an on-going cyber attack or the actions of a benign user, can potentially harm the interacting systems and therefore they are regarded as a threat. In this paper we address the problem of anomaly detection and localization in XML transactions using machine learning techniques. We present a new XML anomaly detection framework, XML-AD. Within this framework, an automatic method for extracting features from XML transactions was developed as well as a practical method for transforming XML features into vectors of fixed dimensionality. With these two methods in place, the XML-AD framework makes it possible to utilize general learning algorithms for anomaly detection. Central to the functioning of the framework is a novel multi-univariate anomaly detection algorithm, ADIFA. The framework was evaluated on four XML transactions datasets, captured from real information systems, in which it achieved over 89 true positive detection rate with less than a 0.2 false positive rate.
|
The authors of @cite_24 use probabilistic inference for classification and anomaly detection of structured documents, which they test on XML documents. Specifically, they extract a feature vector from every XML document according to the number of attributes each tag can have. The features are learned and represented in a factorized form as a product of pairwise joint probability distribution functions, according to a method introduced by Chow and Liu @cite_17 . Anomalies are detected by applying an acceptance threshold to the probability values. The authors indicate that this threshold should be trained and adapted for databases that are subject to frequent changes.
|
{
"cite_N": [
"@cite_24",
"@cite_17"
],
"mid": [
"2045825058",
"2163166770"
],
"abstract": [
"In this paper, we present a probabilistic method that can improve the efficiency of document classification when applied to structured documents. The analysis of the structure of a document is the starting point of document classification. Our method is designed to augment other classification schemes and complement pre-filtering information extraction procedures to reduce uncertainties. To this end, a probabilistic distribution on the structure of XML documents is introduced. We show how to parameterise existing learning methods to describe the structure distribution efficiently. The learned distribution is then used to predict the classes of unseen documents. Novelty detection making use of the structure-based distribution function is also discussed. Demonstrations on model documents and on Internet XML documents are presented.",
"A method is presented to approximate optimally an n -dimensional discrete probability distribution by a product of second-order distributions, or the distribution of the first-order tree dependence. The problem is to find an optimum set of n - 1 first order dependence relationship among the n variables. It is shown that the procedure derived in this paper yields an approximation of a minimum difference in information. It is further shown that when this procedure is applied to empirical observations from an unknown distribution of tree dependence, the procedure is the maximum-likelihood estimate of the distribution."
]
}
|
1209.1797
|
2166101296
|
XML transactions are used in many information systems to store data and interact with other systems. Abnormal transactions, the result of either an on-going cyber attack or the actions of a benign user, can potentially harm the interacting systems and therefore they are regarded as a threat. In this paper we address the problem of anomaly detection and localization in XML transactions using machine learning techniques. We present a new XML anomaly detection framework, XML-AD. Within this framework, an automatic method for extracting features from XML transactions was developed as well as a practical method for transforming XML features into vectors of fixed dimensionality. With these two methods in place, the XML-AD framework makes it possible to utilize general learning algorithms for anomaly detection. Central to the functioning of the framework is a novel multi-univariate anomaly detection algorithm, ADIFA. The framework was evaluated on four XML transactions datasets, captured from real information systems, in which it achieved over 89% true positive detection rate with less than a 0.2% false positive rate.
|
@cite_3 detect anomalies in dynamic data feeds. Specifically, invariants such as value intervals and arithmetic expressions are extracted and used as proxies to detect anomalies. The detection method is demonstrated for semantic anomalies, i.e., values that are syntactically correct but unreasonable. Two types of invariants are extracted, namely mean statistics and invariants produced by an adjusted version of a tool for detecting invariants in computer programs.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2146908974"
],
"abstract": [
"Much of the software we use for everyday purposes incorporates elements developed and maintained by someone other than the developer. These elements include not only code and databases but also dynamic data feeds from online data sources. Although everyday software is not mission critical, it must be dependable enough for practical use. This is limited by the dependability of the incorporated elements. It is particularly difficult to evaluate the dependability of dynamic data feeds, because they may be changed by their proprietors as they are used. Further, the specifications of these data feeds are often even sketchier than the specifications of software components. We demonstrate a method of inferring invariants about the normal behavior of dynamic data feeds. We use these invariants as proxies for specifications to perform on-going detection of anomalies in the data feed. We show the feasibility of our approach and demonstrate its usefulness for semantic anomaly detection: identifying occasions when a dynamic data feed is delivering unreasonable values, even though its behavior may be superficially acceptable (i.e., it is delivering parsable results in a timely fashion)."
]
}
|
1209.0833
|
2951675424
|
We propose a multiresolution Gaussian process to capture long-range, non-Markovian dependencies while allowing for abrupt changes. The multiresolution GP hierarchically couples a collection of smooth GPs, each defined over an element of a random nested partition. Long-range dependencies are captured by the top-level GP while the partition points define the abrupt changes. Due to the inherent conjugacy of the GPs, one can analytically marginalize the GPs and compute the conditional likelihood of the observations given the partition tree. This property allows for efficient inference of the partition itself, for which we employ graph-theoretic techniques. We apply the multiresolution GP to the analysis of Magnetoencephalography (MEG) recordings of brain activity.
|
One can formulate an mGP as an additive GP where each GP in the sum decomposes independently over the level-specific partition of the input space @math . The additive GPs of @cite_11 instead focus on coping with multivariate inputs, in a similar vein to hierarchical kernel learning @cite_13 . Thus, additive GPs address an inherently different task. Another formulation, related in title but fundamentally different, is the hierarchical GP latent variable model of @cite_31 . That formulation places latent variables at the nodes of a fixed tree and relates them via GP mappings.
|
{
"cite_N": [
"@cite_31",
"@cite_13",
"@cite_11"
],
"mid": [
"2066350599",
"1599445879",
""
],
"abstract": [
"The Gaussian process latent variable model (GP-LVM) is a powerful approach for probabilistic modelling of high dimensional data through dimensional reduction. In this paper we extend the GP-LVM through hierarchies. A hierarchical model (such as a tree) allows us to express conditional independencies in the data as well as the manifold structure. We first introduce Gaussian process hierarchies through a simple dynamical model, we then extend the approach to a more complex hierarchy which is applied to the visualisation of human motion data sets.",
"We consider the problem of high-dimensional non-linear variable selection for supervised learning. Our approach is based on performing linear selection among exponentially many appropriately defined positive definite kernels that characterize non-linear interactions between the original variables. To select efficiently from these many kernels, we use the natural hierarchical structure of the problem to extend the multiple kernel learning framework to kernels that can be embedded in a directed acyclic graph; we show that it is then possible to perform kernel selection through a graph-adapted sparsity-inducing norm, in polynomial time in the number of selected kernels. Moreover, we study the consistency of variable selection in high-dimensional settings, showing that under certain assumptions, our regularization framework allows a number of irrelevant variables which is exponential in the number of observations. Our simulations on synthetic datasets and datasets from the UCI repository show state-of-the-art predictive performance for non-linear regression problems.",
""
]
}
|
1209.0835
|
2952076207
|
Understanding social network structure and evolution has important implications for many aspects of network and system design including provisioning, bootstrapping trust and reputation systems via social networks, and defenses against Sybil attacks. Several recent results suggest that augmenting the social network structure with user attributes (e.g., location, employer, communities of interest) can provide a more fine-grained understanding of social networks. However, there have been few studies to provide a systematic understanding of these effects at scale. We bridge this gap using a unique dataset collected as the Google+ social network grew over time since its release in late June 2011. We observe novel phenomena with respect to both standard social network metrics and new attribute-related metrics (that we define). We also observe interesting evolutionary patterns as Google+ went from a bootstrap phase to a steady invitation-only stage before a public release. Based on our empirical observations, we develop a new generative model to jointly reproduce the social structure and the node attributes. Using theoretical analysis and empirical evaluations, we show that our model can accurately reproduce the social and attribute structure of real social networks. We also demonstrate that our model provides more accurate predictions for practical application contexts.
|
Modeling social networks There are two broad classes of models for generating social networks: static and dynamic. Static models try to reproduce a single static network snapshot @cite_13 @cite_44 @cite_54 @cite_53 . Dynamic models can provide insights on how nodes arrive and create links; these include models such as preferential attachment @cite_84 , copying @cite_39 , nearest neighbor @cite_49 , and forest fire @cite_71 . @cite_27 evaluated such models using both network metrics and application benchmarks and showed that the nearest neighbor model outperforms the others. The dynamic generative model by Leskovec et al. mimics the nearest neighbor model in a dynamic setting @cite_21 , and thus we use it as our starting point. However, these models are known to generate networks with power-law degree distributions. Many social networks, including Google+, instead exhibit lognormal degree distributions @cite_3 @cite_20 @cite_86 . Our dynamic model extends this prior work to provably generate a lognormal distribution for social outdegree. Our model also provides a more general framework by capturing both social and attribute structure.
|
{
"cite_N": [
"@cite_54",
"@cite_53",
"@cite_21",
"@cite_84",
"@cite_39",
"@cite_44",
"@cite_3",
"@cite_27",
"@cite_49",
"@cite_71",
"@cite_86",
"@cite_13",
"@cite_20"
],
"mid": [
"",
"",
"2151078464",
"2008620264",
"2129620481",
"",
"2124301083",
"2070265204",
"2007541434",
"2111708605",
"",
"210736762",
""
],
"abstract": [
"",
"",
"We present a detailed study of network evolution by analyzing four large online social networks with full temporal information about node and edge arrivals. For the first time at such a large scale, we study individual node arrival and edge creation processes that collectively lead to macroscopic properties of networks. Using a methodology based on the maximum-likelihood principle, we investigate a wide variety of network formation strategies, and show that edge locality plays a critical role in evolution of networks. Our findings supplement earlier network models based on the inherently non-local preferential attachment. Based on our observations, we develop a complete model of network evolution, where nodes arrive at a prespecified rate and select their lifetimes. Each node then independently initiates edges according to a \"gap\" process, selecting a destination for each edge according to a simple triangle-closing model free of any parameters. We show analytically that the combination of the gap distribution with the node lifetime leads to a power law out-degree distribution that accurately reflects the true network in all four cases. Finally, we give model parameter settings that allow automatic evolution and generation of realistic synthetic networks of arbitrary scale.",
"Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.",
"The pages and hyperlinks of the World-Wide Web may be viewed as nodes and edges in a directed graph. This graph is a fascinating object of study: it has several hundred million nodes today, over a billion links, and appears to grow exponentially with time. There are many reasons -- mathematical, sociological, and commercial -- for studying the evolution of this graph. In this paper we begin by describing two algorithms that operate on the Web graph, addressing problems from Web search and automatic community discovery. We then report a number of measurements and properties of this graph that manifested themselves as we ran these algorithms on the Web. Finally, we observe that traditional random graph models do not explain these observations, and we propose a new family of random graph models. These models point to a rich new sub-field of the study of random graphs, and raise questions about the analysis of graph algorithms on the Web.",
"",
"We analyze the social network emerging from the user comment activity on the website Slashdot. The network presents common features of traditional social networks such as a giant component, small average path length and high clustering, but differs from them showing moderate reciprocity and neutral assortativity by degree. Using Kolmogorov-Smirnov statistical tests, we show that the degree distributions are better explained by log-normal instead of power-law distributions. We also study the structure of discussion threads using an intuitive radial tree representation. Threads show strong heterogeneity and self-similarity throughout the different nesting levels of a conversation. We use these results to propose a simple measure to evaluate the degree of controversy provoked by a post.",
"Access to realistic, complex graph datasets is critical to research on social networking systems and applications. Simulations on graph data provide critical evaluation of new systems and applications ranging from community detection to spam filtering and social web search. Due to the high time and resource costs of gathering real graph datasets through direct measurements, researchers are anonymizing and sharing a small number of valuable datasets with the community. However, performing experiments using shared real datasets faces three key disadvantages: concerns that graphs can be de-anonymized to reveal private information, increasing costs of distributing large datasets, and that a small number of available social graphs limits the statistical confidence in the results. The use of measurement-calibrated graph models is an attractive alternative to sharing datasets. Researchers can \"fit\" a graph model to a real social graph, extract a set of model parameters, and use them to generate multiple synthetic graphs statistically similar to the original graph. While numerous graph models have been proposed, it is unclear if they can produce synthetic graphs that accurately match the properties of the original graphs. In this paper, we explore the feasibility of measurement-calibrated synthetic graphs using six popular graph models and a variety of real social graphs gathered from the Facebook social network ranging from 30,000 to 3 million edges. We find that two models consistently produce synthetic graphs with common graph metric values similar to those of the original graphs. However, only one produces high fidelity results in our application-level benchmarks. While this shows that graph models can produce realistic synthetic graphs, it also highlights the fact that current graph metrics remain incomplete, and some applications expose graph properties that do not map to existing metrics.",
"The linear preferential attachment hypothesis has been shown to be quite successful in explaining the existence of networks with power-law degree distributions. It is then quite important to determine if this mechanism is the consequence of a general principle based on local rules. In this work it is claimed that an effective linear preferential attachment is the natural outcome of growing network models based on local rules. It is also shown that the local models offer an explanation for other properties like the clustering hierarchy and degree correlations recently observed in complex networks. These conclusions are based on both analytical and numerical results for different local rules, including some models already proposed in the literature.",
"How do real graphs evolve over time? What are \"normal\" growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time.Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing super-linearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log(log n)).Existing graph generation models do not exhibit these types of behavior, even at a qualitative level. We provide a new graph generator, based on a \"forest fire\" spreading process, that has a simple, intuitive justification, requires very few parameters (like the \"flammability\" of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study.",
"",
"",
""
]
}
|
1209.0835
|
2952076207
|
Understanding social network structure and evolution has important implications for many aspects of network and system design including provisioning, bootstrapping trust and reputation systems via social networks, and defenses against Sybil attacks. Several recent results suggest that augmenting the social network structure with user attributes (e.g., location, employer, communities of interest) can provide a more fine-grained understanding of social networks. However, there have been few studies to provide a systematic understanding of these effects at scale. We bridge this gap using a unique dataset collected as the Google+ social network grew over time since its release in late June 2011. We observe novel phenomena with respect to both standard social network metrics and new attribute-related metrics (that we define). We also observe interesting evolutionary patterns as Google+ went from a bootstrap phase to a steady invitation-only stage before a public release. Based on our empirical observations, we develop a new generative model to jointly reproduce the social structure and the node attributes. Using theoretical analysis and empirical evaluations, we show that our model can accurately reproduce the social and attribute structure of real social networks. We also demonstrate that our model provides more accurate predictions for practical application contexts.
|
Modeling social-attribute networks There has been relatively little work on generating social-attribute networks, though a few recent works that jointly generate both social structure and node attributes can be viewed as such models; the most relevant work is from @cite_57 and Kim and Leskovec @cite_2 . @cite_57 focus on dynamic attributes; their model generates undirected networks with a power-law distribution for social degree and a non-lognormal distribution for attribute degree (see Figure ). Kim and Leskovec model the social and attribute structure simultaneously @cite_2 . Here, both the social degrees of attribute nodes and the attribute degrees of social nodes follow a binomial distribution, which differs from empirically observed social-attribute networks. Our model can generate social-attribute networks that we confirm, through both analysis and simulations, to be consistent with real social-attribute networks.
|
{
"cite_N": [
"@cite_57",
"@cite_2"
],
"mid": [
"2047443612",
"2015326995"
],
"abstract": [
"Social networks are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone. This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs.",
"Large scale real-world network data such as social and information networks are ubiquitous. The study of such networks seeks to find patterns and explain their emergence through tractable models. In most networks, and especially in social networks, nodes have a rich set of attributes associated with them. We present the Multiplicative Attribute Graphs (MAG) model, which naturally captures the interactions between the network structure and the node attributes. We consider a model where each node has a vector of categorical latent attributes associated with it. The probability of an edge between a pair of nodes depends on the product of individual attribute-attribute similarities. The model yields itself to mathematical analysis. We derive thresholds for the connectivity and the emergence of the giant connected component, and show that the model gives rise to networks with a constant diameter. We also show that MAG model can produce networks with either log-normal or power-law degree distributions."
]
}
|
1209.0654
|
1975752644
|
Optical Deflectometric Tomography (ODT) provides an accurate characterization of transparent materials whose complex surfaces present a real challenge for manufacture and control. In ODT, the refractive index map (RIM) of a transparent object is reconstructed by measuring light deflection under multiple orientations. We show that this imaging modality can be made "compressive", i.e., a correct RIM reconstruction is achievable with far less observations than required by traditional Filtered Back Projection (FBP) methods. Assuming a cartoon-shape RIM model, this reconstruction is driven by minimizing the map Total-Variation under a fidelity constraint with the available observations. Moreover, two other realistic assumptions are added to improve the stability of our approach: the map positivity and a frontier condition. Numerically, our method relies on an accurate ODT sensing model and on a primal-dual minimization scheme, including easily the sensing operator and the proposed RIM constraints. We conclude this paper by demonstrating the power of our method on synthetic and experimental data under various compressive scenarios. In particular, the compressiveness of the stabilized ODT problem is demonstrated by observing a typical gain of 20 dB compared to FBP at only 5 of 360 incident light angles for moderately noisy sensing.
|
In differential phase-contrast tomography, the refractive index distribution is recovered from phase-shift measurements. These consist of the derivative of the refractive index map, inducing the appearance of the frequency @math when using the FST, as happens in the ODT sensing model (Sec. ). In this application, @cite_16 have used the FBP algorithm to reconstruct the refractive index map from a fully covered set of projections. @cite_38 @cite_13 have used different iterative schemes based on the minimization of the TV norm to reconstruct the refractive index distribution over a region of interest. These methods are accurate and provide similar results, but the iterative scheme based on the TV norm has proved to be better than FBP when the number of acquisitions decreases.
|
{
"cite_N": [
"@cite_38",
"@cite_16",
"@cite_13"
],
"mid": [
"2033413209",
"1974177413",
"2126849957"
],
"abstract": [
"Differential phase-contrast interior tomography allows reconstruction of a refractive index distribution over a region of interest (ROI) for visualization and analysis of structures inside a large biological specimen. In the imaging mode, x-ray scanning only targets an ROI in an object and a narrow beam passes through the object, allowing a significant reduction of both radiation dose and system cost. Inspired by recently developed compressive sensing theory, in a numerical analysis framework we show that accurate interior reconstruction can be achieved on an ROI from truncated differential projection data through the ROI via the total variation minimization, assuming a piecewise constant distribution of the refractive indices in the ROI. Then, we develop a practical iterative algorithm for such an interior reconstruction and perform numerical experiments to demonstrate the feasibility of the proposed approach.",
"We report on a method for tomographic phase contrast imaging of centimeter sized objects. As opposed to existing techniques, our approach can be used with low-brilliance, lab based x-ray sources and thus is of interest for a wide range of applications in medicine, biology, and nondestructive testing. The work is based on the recent development of a hard x-ray grating interferometer, which has been demonstrated to yield differential phase contrast projection images. Here we particularly focus on how this method can be used for tomographic reconstructions using filtered back projection algorithms to yield quantitative volumetric information of both the real and imaginary part of the sample's refractive index.",
"A novel quantitative method, based on the moiré effect, for mapping ray deflections of a collimated light beam is described and demonstrated. This method, which does not require coherent light, can replace interferometric techniques in many cases. The proposed setup is simple, and the interpretation is straightforward. We demonstrate deflection mapping of a candle’s flame with a resolution of 5 × 10−5 rad and lens mapping with a resolution of 10−2 rad. An analysis of the ray deflection for index-of-refraction mapping is provided."
]
}
|
1209.0654
|
1975752644
|
Optical Deflectometric Tomography (ODT) provides an accurate characterization of transparent materials whose complex surfaces present a real challenge for manufacture and control. In ODT, the refractive index map (RIM) of a transparent object is reconstructed by measuring light deflection under multiple orientations. We show that this imaging modality can be made "compressive", i.e., a correct RIM reconstruction is achievable with far less observations than required by traditional Filtered Back Projection (FBP) methods. Assuming a cartoon-shape RIM model, this reconstruction is driven by minimizing the map Total-Variation under a fidelity constraint with the available observations. Moreover, two other realistic assumptions are added to improve the stability of our approach: the map positivity and a frontier condition. Numerically, our method relies on an accurate ODT sensing model and on a primal-dual minimization scheme, including easily the sensing operator and the proposed RIM constraints. We conclude this paper by demonstrating the power of our method on synthetic and experimental data under various compressive scenarios. In particular, the compressiveness of the stabilized ODT problem is demonstrated by observing a typical gain of 20 dB compared to FBP at only 5 of 360 incident light angles for moderately noisy sensing.
|
In common Absorption Tomography (AT) we deal with the reconstruction of the absorption index distribution from intensity measurements. As these measurements are directly related to the absorption index, the AT sensing model does not include the frequency @math . In this domain, several works have exploited sparsity-based methods. Most recent works in AT have focused on promoting a small TV norm @cite_27 @cite_33 . @cite_8 use a Lagrangian formulation for the tomographic reconstruction problem, promoting a small TV norm under a Kullback-Leibler data divergence and a positivity constraint. They aim at reconstructing a breast phantom from 60 projections with Poisson distributed noise. For this, they use the primal-dual optimization algorithm proposed by @cite_2 . The method results in high quality reconstruction compared to FBP, but with a convergence result that is highly dependent on the chosen Lagrangian parameter.
|
{
"cite_N": [
"@cite_27",
"@cite_2",
"@cite_33",
"@cite_8"
],
"mid": [
"2071099763",
"2092663520",
"",
"2074567512"
],
"abstract": [
"In computed tomography there are different situations where reconstruction has to be performed with limited raw data. In the past few years it has been shown that algorithms which are based on compressed sensing theory are able to handle incomplete datasets quite well. As a cost function these algorithms use the l1-norm of the image after it has been transformed by a sparsifying transformation. This yields an inequality-constrained convex optimization problem. Due to the large size of the optimization problem some heuristic optimization algorithms have been proposed in the past few years. The most popular way is optimizing the raw data and sparsity cost functions separately in an alternating manner. In this paper we will follow this strategy and present a new method to adapt these optimization steps. Compared to existing methods which perform similarly, the proposed method needs no a priori knowledge about the raw data consistency. It is ensured that the algorithm converges to the lowest possible value of the raw data cost function, while holding the sparsity constraint at a low value. This is achieved by transferring the step-size determination of both optimization procedures into the raw data domain, where they are adapted to each other. To evaluate the algorithm, we process measured clinical datasets. To cover a wide field of possible applications, we focus on the problems of angular undersampling, data lost due to metal implants, limited view angle tomography and interior tomography. In all cases the presented method reaches convergence within less than 25 iteration steps, while using a constant set of algorithm control parameters. The image artifacts caused by incomplete raw data are mostly removed without introducing new effects like staircasing. All scenarios are compared to an existing implementation of the ASD-POCS algorithm, which realizes the step-size adaption in a different way. Additional prior information as proposed by the PICCS algorithm can be incorporated easily into the optimization process.",
"In this paper we study a first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure. We prove convergence to a saddle-point with rate O(1/N) in finite dimensions for the complete class of problems. We further show accelerations of the proposed algorithm to yield improved rates on problems with some degree of smoothness. In particular we show that we can achieve O(1/N^2) convergence on problems, where the primal or the dual objective is uniformly convex, and we can show linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems. The wide applicability of the proposed algorithm is demonstrated on several imaging problems such as image denoising, image deconvolution, image inpainting, motion estimation and multi-label image segmentation.",
"",
"The primal–dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1–26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal–dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity x-ray illumination is presented."
]
}
|
1209.0654
|
1975752644
|
Optical Deflectometric Tomography (ODT) provides an accurate characterization of transparent materials whose complex surfaces present a real challenge for manufacture and control. In ODT, the refractive index map (RIM) of a transparent object is reconstructed by measuring light deflection under multiple orientations. We show that this imaging modality can be made "compressive", i.e., a correct RIM reconstruction is achievable with far fewer observations than required by traditional Filtered Back Projection (FBP) methods. Assuming a cartoon-shape RIM model, this reconstruction is driven by minimizing the map Total-Variation under a fidelity constraint with the available observations. Moreover, two other realistic assumptions are added to improve the stability of our approach: the map positivity and a frontier condition. Numerically, our method relies on an accurate ODT sensing model and on a primal-dual minimization scheme, including easily the sensing operator and the proposed RIM constraints. We conclude this paper by demonstrating the power of our method on synthetic and experimental data under various compressive scenarios. In particular, the compressiveness of the stabilized ODT problem is demonstrated by observing a typical gain of 20 dB compared to FBP at only 5 of 360 incident light angles for moderately noisy sensing.
|
@cite_27 use a constrained optimization formulation to reconstruct the absorption index from a low amount of clinical data in the presence of metal implants and Gaussian noise. This problem is solved by means of an alternating method that allows optimizing the raw data consistency function and the sparsity cost function separately, without the need for prior information on the observations. The fast convergence of the method is based on the adaptation of the optimization step sizes. The gradient descent method is used to minimize the TV norm, and the consistency term is minimized via an algebraic reconstruction technique. The method is proven to give better results than FBP.
|
{
"cite_N": [
"@cite_27"
],
"mid": [
"2071099763"
],
"abstract": [
"In computed tomography there are different situations where reconstruction has to be performed with limited raw data. In the past few years it has been shown that algorithms which are based on compressed sensing theory are able to handle incomplete datasets quite well. As a cost function these algorithms use the l1-norm of the image after it has been transformed by a sparsifying transformation. This yields an inequality-constrained convex optimization problem. Due to the large size of the optimization problem some heuristic optimization algorithms have been proposed in the past few years. The most popular way is optimizing the raw data and sparsity cost functions separately in an alternating manner. In this paper we will follow this strategy and present a new method to adapt these optimization steps. Compared to existing methods which perform similarly, the proposed method needs no a priori knowledge about the raw data consistency. It is ensured that the algorithm converges to the lowest possible value of the raw data cost function, while holding the sparsity constraint at a low value. This is achieved by transferring the step-size determination of both optimization procedures into the raw data domain, where they are adapted to each other. To evaluate the algorithm, we process measured clinical datasets. To cover a wide field of possible applications, we focus on the problems of angular undersampling, data lost due to metal implants, limited view angle tomography and interior tomography. In all cases the presented method reaches convergence within less than 25 iteration steps, while using a constant set of algorithm control parameters. The image artifacts caused by incomplete raw data are mostly removed without introducing new effects like staircasing. All scenarios are compared to an existing implementation of the ASD-POCS algorithm, which realizes the step-size adaption in a different way. Additional prior information as proposed by the PICCS algorithm can be incorporated easily into the optimization process."
]
}
|
1209.0715
|
2137259347
|
Stochastic switching circuits are relay circuits that consist of stochastic switches called pswitches. The study of stochastic switching circuits has widespread applications in many fields of computer science, neuroscience, and biochemistry. In this paper, we discuss several properties of stochastic switching circuits, including robustness, expressibility, and probability approximation. First, we study the robustness, namely, the effect caused by introducing an error of size ε to each pswitch in a stochastic circuit. We analyze two constructions and prove that simple series-parallel circuits are robust to small error perturbations, while general series-parallel circuits are not. Specifically, the total error introduced by perturbations of size less than ε is bounded by a constant multiple of ε in a simple series-parallel circuit, independent of the size of the circuit. Next, we study the expressibility of stochastic switching circuits: Given an integer q and a pswitch set S = {1/q, 2/q, ..., (q-1)/q}, can we synthesize any rational probability with denominator q^n (for arbitrary n) with a simple series-parallel stochastic switching circuit? We generalize previous results and prove that when q is a multiple of 2 or 3, the answer is yes. We also show that when q is a prime number larger than 3, the answer is no. Probability approximation is studied for a general case of an arbitrary pswitch set S = {s_1, s_2, ..., s_{|S|}}. In this case, we propose an algorithm based on local optimization to approximate any desired probability. The analysis reveals that the approximation error of a switching circuit decreases exponentially with an increasing circuit size.
|
There are a number of studies focusing on synthesizing a simple physical device to generate desired probabilities. Gill @cite_3 @cite_14 discussed the problem of generating rational probabilities using a sequential state machine. Motivated by neural computation, @cite_8 provided an algorithm to generate binary sequences with probability @math from a set of stochastic binary sequences with probabilities in @math . Their method can be implemented using the concept of linear feedback shift registers. Recently, inspired by PCMOS technology @cite_13 , @cite_1 considered the synthesis of decimal probabilities using combinational logic. They have considered three different scenarios, depending on whether the given probabilities can be duplicated, and whether there is freedom to choose the probabilities. In contrast to the foregoing contributions, we consider the properties and probability synthesis of stochastic switching circuits. Our approach is orthogonal and complementary to that of Qian and Riedel, which is based on combinational logic. Generally, each switching circuit can be equivalently expressed by a combinational logic circuit. All the constructive methods of stochastic switching circuits in this paper can be directly applied to probabilistic combinational logic circuits.
|
{
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_13"
],
"mid": [
"1987683681",
"2116676915",
"2013098447",
"2000617148",
"2030020312"
],
"abstract": [
"",
"The paper describes techniques for constructing statistically independent binary sequences with prescribed ratios of zeros and ones. The first construction is a general recursive construction, which forms the sequences from a class of \"elementary\" sequences. The second construction is a special construction which can be used when the ratio of ones to zeros is expressed in binary notation. The second construction is shown to be optimal in terms of the numbers of input sequences required to construct the desired sequence. The paper concludes with a discussion of how to generate independent \"elementary\" sequences using simple digital techniques.",
"Schemes for probabilistic computation can exploit physical sources to generate random values in the form of bit streams. Generally, each source has a fixed bias and so provides bits with a specific probability of being one. If many different probability values are required, it can be expensive to generate all of these directly from physical sources. This paper demonstrates novel techniques for synthesizing combinational logic that transforms source probabilities into different target probabilities. We consider three scenarios in terms of whether the source probabilities are specified and whether they can be duplicated. In the case that the source probabilities are not specified and can be duplicated, we provide a specific choice, the set 0.4, 0.5 ; we show how to synthesize logic that transforms probabilities from this set into arbitrary decimal probabilities. Further, we show that for any integer n ≥ 2, there exists a single probability that can be transformed into arbitrary base-n fractional probabilities. In the case that the source probabilities are specified and cannot be duplicated, we provide two methods for synthesizing logic to transform them into target probabilities. In the case that the source probabilities are not specified, but once chosen cannot be duplicated, we provide an optimal choice.",
"Abstract The purpose of this paper is to show how one random-symbol generator (such as used for implementing Monte Carlo programs or for simulating probabilistic discrete-state systems) can be converted into a different random-symbol generator. Specifically, given the characteristics of an available source and of a desired source, a device is sought which would transform the available source into the desired one, within a prescribed accuracy level. It is shown, constructively, that such a “transformer” can always be realized in the form of a deterministic sequential machine. As a possible realization, a machine is chosen which can be constructed by means of conventional computer components and circuits; it also features adaptability to a wide variety of input, output and accuracy specifications.The characteristics of the machine are described through equations and graphs which facilitate practical design by relating the source specifications to the accuracy and the rate at which the output of the transformer is to be sampled. Also, a method is presented for minimizing the number of states in the machine—a vital step in simplifying the structure and increasing the output rate of the synthesized transformer.",
"Parameter variations, noise susceptibility, and increasing energy dissipation of CMOS devices have been recognized as major challenges in circuit and microarchitecture design in the nanometer regime. Among these, parameter variations and noise susceptibility are increasingly causing CMOS devices to behave in an “unreliable” or “probabilistic” manner. To address these challenges, a shift in design paradigm from current-day deterministic designs to “statistical” or “probabilistic” designs is deemed inevitable. To respond to this need, in this article, we introduce and study an entirely novel family of probabilistic architectures: the probabilistic system-on-a-chip (PSOC). PSOC architectures are based on CMOS devices rendered probabilistic due to noise, referred to as probabilistic CMOS or PCMOS devices. We demonstrate that in addition to harnessing the probabilistic behavior of PCMOS devices, PSOC architectures yield significant improvements, both in energy consumed as well as performance in the context of probabilistic or randomized applications with broad utility. All of our application and architectural savings are quantified using the product of the energy and performance, denoted (energy × performance): The PCMOS-based gains are as high as a substantial multiplicative factor of over 560 when compared to a competing energy-efficient CMOS-based realization. Our architectural design is application specific and involves navigating design space spanning the algorithm (application), its architecture (PSOC), and the probabilistic technology (PCMOS)."
]
}
|
1209.0715
|
2137259347
|
Stochastic switching circuits are relay circuits that consist of stochastic switches called pswitches. The study of stochastic switching circuits has widespread applications in many fields of computer science, neuroscience, and biochemistry. In this paper, we discuss several properties of stochastic switching circuits, including robustness, expressibility, and probability approximation. First, we study the robustness, namely, the effect caused by introducing an error of size ε to each pswitch in a stochastic circuit. We analyze two constructions and prove that simple series-parallel circuits are robust to small error perturbations, while general series-parallel circuits are not. Specifically, the total error introduced by perturbations of size less than ε is bounded by a constant multiple of ε in a simple series-parallel circuit, independent of the size of the circuit. Next, we study the expressibility of stochastic switching circuits: Given an integer q and a pswitch set S = {1/q, 2/q, ..., (q-1)/q}, can we synthesize any rational probability with denominator q^n (for arbitrary n) with a simple series-parallel stochastic switching circuit? We generalize previous results and prove that when q is a multiple of 2 or 3, the answer is yes. We also show that when q is a prime number larger than 3, the answer is no. Probability approximation is studied for a general case of an arbitrary pswitch set S = {s_1, s_2, ..., s_{|S|}}. In this case, we propose an algorithm based on local optimization to approximate any desired probability. The analysis reveals that the approximation error of a switching circuit decreases exponentially with an increasing circuit size.
|
In the rest of this section, we introduce the original work that started the study on stochastic switching circuits (Wilhelm and Bruck @cite_4 ). Similar to resistor circuits @cite_9 , connecting one terminal of a switching circuit @math (where @math ) to one terminal of a circuit @math (where @math ) places them in series. The resulting circuit is closed if and only if both of @math and @math are closed, so the probability of the resulting circuit is @math . Connecting both terminals of @math and @math together places the circuits in parallel. The resulting circuit is closed if and only if either @math or @math is closed, so the probability of the resulting circuit is @math . Based on these rules, we can calculate the probability of any given ssp or sp circuit. For example, the probability of the circuit in Fig. (a) is @math and the probability of the circuit in Fig. (b) is @math .
|
{
"cite_N": [
"@cite_9",
"@cite_4"
],
"mid": [
"1983575666",
"2019906979"
],
"abstract": [
"Resistances may be placed in circuit either in series or in parallel, or in various combinations of these methods. There are also networks such as the Wheatstone net, which are neither series nor parallel arrangements, and these are excluded from what follows. Every combination considered here consists of other combinations either in series or in parallel. Each must either consist of parallel combinations arranged in series, or of series combinations arranged in parallel. For the study of the subject it is convenient to make a few definitions. Two linear conductors may be placed either in series or in parallel. These arrangements may be defined as being conjugate the one of the other. It will be observed that the second of the arrangements merely differs in description from the first by the substitution of the word “parallel” for the word “series”. To generalise the notion we may define two combinations to be conjugate when their descriptions merely differ by the interchange of the words “series” and “parallel”. A single linear conductor should, for the sake of unity, be regarded as being either in series or in parallel, and, further, as being a combination which is self-conjugate. A combination consisting of parallel combinations in series will be called a series combination, and one consisting of series combinations in parallel a parallel combination. Thus, in the usual diagrammatic representation,",
"Shannon in his 1938 Master's Thesis demonstrated that any Boolean function can be realized by a switching relay circuit, leading to the development of deterministic digital logic. Here, we replace each classical switch with a probabilistic switch (pswitch). We present algorithms for synthesizing circuits closed with a desired probability, including an algorithm that generates optimal size circuits for any binary fraction. We also introduce a new duality property for series-parallel stochastic switching circuits. Finally, we construct a universal probability generator which maps deterministic inputs to arbitrary probabilistic outputs. Potential applications exist in the analysis and design of stochastic networks in biology and engineering."
]
}
|
1209.0715
|
2137259347
|
Stochastic switching circuits are relay circuits that consist of stochastic switches called pswitches. The study of stochastic switching circuits has widespread applications in many fields of computer science, neuroscience, and biochemistry. In this paper, we discuss several properties of stochastic switching circuits, including robustness, expressibility, and probability approximation. First, we study the robustness, namely, the effect caused by introducing an error of size ε to each pswitch in a stochastic circuit. We analyze two constructions and prove that simple series-parallel circuits are robust to small error perturbations, while general series-parallel circuits are not. Specifically, the total error introduced by perturbations of size less than ε is bounded by a constant multiple of ε in a simple series-parallel circuit, independent of the size of the circuit. Next, we study the expressibility of stochastic switching circuits: Given an integer q and a pswitch set S = {1/q, 2/q, ..., (q-1)/q}, can we synthesize any rational probability with denominator q^n (for arbitrary n) with a simple series-parallel stochastic switching circuit? We generalize previous results and prove that when q is a multiple of 2 or 3, the answer is yes. We also show that when q is a prime number larger than 3, the answer is no. Probability approximation is studied for a general case of an arbitrary pswitch set S = {s_1, s_2, ..., s_{|S|}}. In this case, we propose an algorithm based on local optimization to approximate any desired probability. The analysis reveals that the approximation error of a switching circuit decreases exponentially with an increasing circuit size.
|
An important and interesting question is that if @math is uniform, i.e., @math for some @math , what kind of probabilities can be realized using stochastic switching circuits? In @cite_4 , Wilhelm and Bruck proposed an optimal algorithm (called B-Algorithm) to realize all rational probabilities of the form @math with @math , using an ssp circuit when @math . In their algorithm, at most @math pswitches are used, which is optimal. They also proved that given the pswitch set @math , all rational probabilities @math with @math can be realized by an ssp circuit with at most @math pswitches; given the pswitch set @math , all rational probabilities @math with @math can be realized by an ssp circuit with at most @math pswitches.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2019906979"
],
"abstract": [
"Shannon in his 1938 Master's Thesis demonstrated that any Boolean function can be realized by a switching relay circuit, leading to the development of deterministic digital logic. Here, we replace each classical switch with a probabilistic switch (pswitch). We present algorithms for synthesizing circuits closed with a desired probability, including an algorithm that generates optimal size circuits for any binary fraction. We also introduce a new duality property for series-parallel stochastic switching circuits. Finally, we construct a universal probability generator which maps deterministic inputs to arbitrary probabilistic outputs. Potential applications exist in the analysis and design of stochastic networks in biology and engineering."
]
}
|
1209.0684
|
96948785
|
We propose to use Vehicular ad hoc networks (VANET) as the infrastructure for an urban cyber-physical system for gathering up-to-date data about a city, like traffic conditions or environmental parameters. In this context, it is critical to design a data collection protocol that enables retrieving the data from the vehicles in almost real-time in an efficient way for urban scenarios. We propose Backoff-based Per-hop Forwarding (BPF), a broadcast-based receiver-oriented protocol that uses the destination location information to select the forwarding order among the nodes receiving the packet. BPF does not require nodes to exchange periodic messages with their neighbors communicating their locations to keep a low management message overhead. It uses geographic information about the final destination node in the header of each data packet to route it on a hop-by-hop basis. It takes advantage of redundant forwarding to increase packet delivery to a destination, which is more critical in an urban scenario than in a highway, where the road topology does not represent a challenge for forwarding. We evaluate the performance of the BPF protocol using ns-3 and a Manhattan grid topology and compare it with well-known broadcast suppression techniques. Our results show that BPF achieves significantly higher packet delivery rates at a reduced redundancy cost.
|
Ad hoc On-Demand Distance Vector (AODV) @cite_26 and Dynamic Source Routing (DSR) @cite_18 are reactive protocols originally designed for MANETs. A number of studies have simulated and compared the performance of these protocols for VANETs @cite_10 @cite_1 . In @cite_10 , the authors introduce prediction-based AODV protocols, Predicted AODV (PRAODV) and Predicted AODV with Maximum lifetime (PRAODVM), which use the speed and location information of nodes to predict the link lifetime. However, these methods depend on the accuracy of the prediction method, which can be low in volatile networks. Another approach is to use cluster-based protocols to improve network scalability, which create a virtual network infrastructure by clustering the nodes. Many cluster-based routing protocols @cite_13 - @cite_2 have been studied in MANETs, but these techniques are very unstable in VANETs, and the clusters they create are too short-lived.
|
{
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_26",
"@cite_1",
"@cite_2",
"@cite_10"
],
"mid": [
"2163533836",
"2030824584",
"2102258543",
"2101773441",
"1567009542",
"2057489410"
],
"abstract": [
"This paper describes a self-organizing, multihop, mobile radio network which relies on a code-division access scheme for multimedia support. In the proposed network architecture, nodes are organized into nonoverlapping clusters. The clusters are independently controlled, and are dynamically reconfigured as the nodes move. This network architecture has three main advantages. First, it provides spatial reuse of the bandwidth due to node clustering. Second, bandwidth can be shared or reserved in a controlled fashion in each cluster. Finally, the cluster algorithm is robust in the face of topological changes caused by node motion, node failure, and node insertion removal. Simulation shows that this architecture provides an efficient, stable infrastructure for the integration of different types of traffic in a dynamic radio network.",
"",
"The Ad hoc On-Demand Distance Vector (AODV) routing protocol is intended for use by mobile nodes in an ad hoc network. It offers quick adaptation to dynamic link conditions, low processing and memory overhead, low network utilization, and determines unicast routes to destinations within the ad hoc network. It uses destination sequence numbers to ensure loop freedom at all times (even in the face of anomalous delivery of routing control messages), avoiding problems (such as \"counting to infinity\") associated with classical distance vector protocols.",
"An IVC (Inter-vehicle communication) network is a type of mobile ad hoc networks (MANET) in which high-speed vehicles send, receive, and forward packets via other vehicles on the roads. An IVC network can provide useful applications in future Intelligent Transportation Systems. However, due to frequent network topology changes, a routing path in an IVC network breaks easily. As such, a routing protocol proposed for general MANET (e.g., AODV) performs poorly in IVC networks. To address this problem, we designed and implemented an intelligent flooding-based routing protocol and conducted several field trials to evaluate its performance on the roads. Results obtained from field trials show that (1) our protocol outperforms AODV significantly on IVC networks, and (2) our protocol can make many useful services such as email, ftp, web, video conferencing, and video broadcasting applicable on IVC networks for vehicle users.",
"Efficient routing among a set of mobile hosts (also called nodes) is one of the most important functions in ad hoc wireless networks. Routing based on a connected dominating set is a promising approach, where the searching space for a route is reduced to nodes in the set. A set is dominating if all the nodes in the system are either in the set or neighbors of nodes in the set. In this paper, we propose a simple and efficient distributed algorithm for calculating connected dominating set in ad hoc wireless networks, where connections of nodes are determined by their geographical distances. We also propose an update recalculation algorithm for the connected dominating set when the topology of the ad hoc wireless network changes dynamically. Our simulation results show that the proposed approach outperforms a classical algorithm in terms of finding a small connected dominating set and doing so quickly. Our approach can be potentially used in designing efficient routing algorithms based on a connected dominating set.",
"Development in Wireless LAN and Cellular technologies has motivated recent efforts to integrate the two. This creates new application scenarios that were not possible before. Vehicles with Wireless LAN radios can use other vehicles with both Wireless LAN and Cellular radios as mobile gateways and connect to the outside world. We aim to study the feasibility of such global connectivity from the road through simulation of the underlying connectivity characteristics for varying traffic and gateway densities. The connectivity results suggest that each vehicle should be able to connect to at least one gateway for a majority of time. The average path lifetimes are found to be good enough for many traditional Internet applications like FTP and HTTP. The effectiveness of the AODV wireless ad-hoc routing protocol over this scenario is evaluated and shown to perform well for the densities considered. However, the routes created by AODV can break very frequently due to the dynamic nature of mobility involved. We introduce a couple of prediction based routing protocols to minimize these route breakages and thus improve performance. These protocols take advantage of some deterministic characteristics of the mobility model to better predict route breakages and take preemptive action."
]
}
|
1209.0684
|
96948785
|
We propose to use Vehicular ad hoc networks (VANET) as the infrastructure for an urban cyber-physical system for gathering up-to-date data about a city, like traffic conditions or environmental parameters. In this context, it is critical to design a data collection protocol that enables retrieving the data from the vehicles in almost real-time in an efficient way for urban scenarios. We propose Backoff-based Per-hop Forwarding (BPF), a broadcast-based receiver-oriented protocol that uses the destination location information to select the forwarding order among the nodes receiving the packet. BPF does not require nodes to exchange periodic messages with their neighbors communicating their locations to keep a low management message overhead. It uses geographic information about the final destination node in the header of each data packet to route it on a hop-by-hop basis. It takes advantage of redundant forwarding to increase packet delivery to a destination, which is more critical in an urban scenario than in a highway, where the road topology does not represent a challenge for forwarding. We evaluate the performance of the BPF protocol using ns-3 and a Manhattan grid topology and compare it with well-known broadcast suppression techniques. Our results show that BPF achieves significantly higher packet delivery rates at a reduced redundancy cost.
|
One of the well-known protocols in this category is the greedy routing protocol @cite_3 , which always forwards the packet to the node closest to the destination, exchanging hello messages to gain information about its neighbors. Greedy Perimeter Stateless Routing (GPSR) @cite_8 consists of two different forwarding methods: greedy forwarding and perimeter forwarding. In this method, a beaconing algorithm is used for determining neighbor positions. In @cite_22 , the authors showed that a geographic protocol like GPSR achieves better performance than the DSR protocol. @cite_15 proposed Geographic Source Routing (GSR), which uses a digital city map to obtain the destination position. By combining geographic routing with knowledge of the city map, GSR achieves a better average delivery rate, smaller total bandwidth consumption, and similar latency of the first delivered packet compared to DSR and AODV in urban areas. However, the per-packet computation overhead is very high.
|
{
"cite_N": [
"@cite_15",
"@cite_22",
"@cite_3",
"@cite_8"
],
"mid": [
"2068691410",
"36639305",
"2151185863",
"2101963262"
],
"abstract": [
"Position-based routing, as it is used by protocols like Greedy Perimeter Stateless Routing (GPSR) [5], is very well suited for highly dynamic environments such as inter-vehicle communication on highways. However, it has been discussed that radio obstacles [4], as they are found in urban areas, have a significant negative impact on the performance of position-based routing. In prior work [6] we presented a position-based approach which alleviates this problem and is able to find robust routes within city environments. It is related to the idea of position-based source routing as proposed in [1] for terminode routing. The algorithm needs global knowledge of the city topology as it is provided by a static street map. Given this information the sender determines the junctions that have to be traversed by the packet using the Dijkstra shortest path algorithm. Forwarding between junctions is then done in a position-based fashion. In this short paper we show how position-based routing can be applied to a city scenario without assuming that nodes have access to a static street map and without using source routing.",
"",
"Vehicular ad hoc networks (VANETs) allow vehicles to form a self-organized network without the need for permanent infrastructure. As a prerequisite to communication, an efficient route between network nodes must be established, and it must adapt to the rapidly changing topology of vehicles in motion. This is the aim of VANET routing protocols. In this paper, we discuss design factors of VANET routing protocols and present a timeline of the development of the existing greedy routing protocols. Moreover, we classify and characterize the existing greedy routing protocols for VANETs and also provide a qualitative comparison of them.",
"We present Greedy Perimeter Stateless Routing (GPSR), a novel routing protocol for wireless datagram networks that uses the positions of routers and a packet's destination to make packet forwarding decisions. GPSR makes greedy forwarding decisions using only information about a router's immediate neighbors in the network topology. When a packet reaches a region where greedy forwarding is impossible, the algorithm recovers by routing around the perimeter of the region. By keeping state only about the local topology, GPSR scales better in per-router state than shortest-path and ad-hoc routing protocols as the number of network destinations increases. Under mobility's frequent topology changes, GPSR can use local topology information to find correct new routes quickly. We describe the GPSR protocol, and use extensive simulation of mobile wireless networks to compare its performance with that of Dynamic Source Routing. Our simulations demonstrate GPSR's scalability on densely deployed wireless networks."
]
}
|
1209.0684
|
96948785
|
We propose to use Vehicular ad hoc networks (VANET) as the infrastructure for an urban cyber-physical system for gathering up-to-date data about a city, like traffic conditions or environmental parameters. In this context, it is critical to design a data collection protocol that enables retrieving the data from the vehicles in almost real-time in an efficient way for urban scenarios. We propose Back off-based Per-hop Forwarding (BPF), a broadcast-based receiver-oriented protocol that uses the destination location information to select the forwarding order among the nodes receiving the packet. BPF does not require nodes to exchange periodic messages with their neighbors communicating their locations, keeping the management message overhead low. It uses geographic information about the final destination node in the header of each data packet to route it on a hop-by-hop basis. It takes advantage of redundant forwarding to increase packet delivery to a destination, which is more critical in an urban scenario than on a highway, where the road topology does not represent a challenge for forwarding. We evaluate the performance of the BPF protocol using ns-3 and a Manhattan grid topology and compare it with well-known broadcast suppression techniques. Our results show that BPF achieves significantly higher packet delivery rates at a reduced redundancy cost.
|
DV-CAST @cite_5 uses sender-oriented forwarding and has three major components: neighbor detection, broadcast suppression, and a store-carry-forward mechanism. It uses hello messages to estimate the network topology and GPS information to determine the direction of vehicles when broadcasting data, reducing protocol overhead and complexity. Simulation results show that DV-CAST performs well both in the heavy traffic of rush hours and in the very light traffic of off-peak hours, and that it is robust against various extreme traffic conditions; however, it still requires prior hello messages.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2141239330"
],
"abstract": [
"The potential of infrastructureless vehicular ad hoc networks for providing safety and nonsafety applications is quite significant. The topology of VANETs in urban, suburban, and rural areas can exhibit fully connected, fully disconnected, or sparsely connected behavior, depending on the time of day or the market penetration rate of wireless communication devices. In this article we focus on highway scenarios, and present the design and implementation of a new distributed vehicular multihop broadcast protocol, that can operate in all traffic regimes, including extreme scenarios such as dense and sparse traffic regimes. DV-CAST is a distributed broadcast protocol that relies only on local topology information for handling broadcast messages in VANETs. It is shown that the performance of the proposed DV-CAST protocol in terms of reliability, efficiency, and scalability is excellent."
]
}
|
1209.0684
|
96948785
|
We propose to use Vehicular ad hoc networks (VANET) as the infrastructure for an urban cyber-physical system for gathering up-to-date data about a city, like traffic conditions or environmental parameters. In this context, it is critical to design a data collection protocol that enables retrieving the data from the vehicles in almost real-time in an efficient way for urban scenarios. We propose Back off-based Per-hop Forwarding (BPF), a broadcast-based receiver-oriented protocol that uses the destination location information to select the forwarding order among the nodes receiving the packet. BPF does not require nodes to exchange periodic messages with their neighbors communicating their locations, keeping the management message overhead low. It uses geographic information about the final destination node in the header of each data packet to route it on a hop-by-hop basis. It takes advantage of redundant forwarding to increase packet delivery to a destination, which is more critical in an urban scenario than on a highway, where the road topology does not represent a challenge for forwarding. We evaluate the performance of the BPF protocol using ns-3 and a Manhattan grid topology and compare it with well-known broadcast suppression techniques. Our results show that BPF achieves significantly higher packet delivery rates at a reduced redundancy cost.
|
Finally, we introduce in more detail three broadcast-based protocols that use basic per-hop forwarding and suppression techniques to mitigate broadcast storms: Weighted p-Persistence, Slotted 1-Persistence and Slotted p-Persistence broadcasting @cite_20 . We shall compare the performance of the proposed protocol against these protocols because they follow a similar approach of not requiring the exchange of neighbor information. In weighted p-persistence forwarding, each node @math , upon receiving a packet from node @math , checks the packet ID and re-broadcasts the packet with probability @math if it receives the packet for the first time; otherwise, it discards the packet. The probability of broadcasting is calculated from the distance between nodes @math and @math ( @math ) relative to the average communication range ( @math ):
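The weighted p-persistence rebroadcast decision can be sketched as follows (an illustrative snippet; the function name and arguments are ours, and capping the probability at 1 for distances beyond the average range is an assumption of this sketch):

```python
import random

def weighted_p_persistence(dist_ij, avg_range, seen_ids, packet_id):
    """Rebroadcast decision of weighted p-persistence: on first
    reception, node j rebroadcasts with probability equal to its
    distance from sender i relative to the average communication
    range (p_ij = D_ij / R); duplicate packets are discarded."""
    if packet_id in seen_ids:
        return False                      # already seen: discard
    seen_ids.add(packet_id)
    p_ij = min(dist_ij / avg_range, 1.0)  # farther receivers rebroadcast more often
    return random.random() < p_ij
```

Favoring distant receivers pushes the packet toward the edge of the transmission range while suppressing redundant rebroadcasts near the sender.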
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"2115009661"
],
"abstract": [
"Several multihop applications developed for vehicular ad hoc networks use broadcast as a means to either discover nearby neighbors or propagate useful traffic information to other vehicles located within a certain geographical area. However, the conventional broadcast mechanism may lead to the so-called broadcast storm problem, a scenario in which there is a high level of contention and collisions at the link layer due to an excessive number of broadcast packets. While this is a well-known problem in mobile ad hoc wireless networks, only a few studies have addressed this issue in the VANET context, where mobile hosts move along the roads in a certain limited set of directions as opposed to randomly moving in arbitrary directions within a bounded area. Unlike other existing works, we quantify the impact of broadcast storms in VANETs in terms of message delay and packet loss rate in addition to conventional metrics such as message reachability and overhead. Given that VANET applications are currently confined to using the DSRC protocol at the data link layer, we propose three probabilistic and timer-based broadcast suppression techniques: weighted p-persistence, slotted 1-persistence, and slotted p-persistence schemes, to be used at the network layer. Our simulation results show that the proposed schemes can significantly reduce contention at the MAC layer by achieving up to 70 percent reduction in packet loss rate while keeping end-to-end delay at acceptable levels for most VANET applications."
]
}
|
1208.6444
|
2949824909
|
Multiscale and multiphysics applications are now commonplace, and many researchers focus on combining existing models to construct combined multiscale models. Here we present a concise review of multiscale applications and their source communities. We investigate the prevalence of multiscale projects in the EU and the US, review a range of coupling toolkits they use to construct multiscale models and identify areas where collaboration between disciplines could be particularly beneficial. We conclude that multiscale computing has become increasingly popular in recent years, that different communities adopt very different approaches to constructing multiscale simulations, and that simulations on a length scale of a few metres and a time scale of a few hours can be found in many of the multiscale research domains. Communities may receive additional benefit from sharing methods that are geared towards these scales.
|
Aside from numerous publications, project websites and domain-specific reviews, we have identified a few sources which provide information on multiscale simulations in various scientific domains. One such source of information is the Journal of Multiscale Modeling and Simulation (epubs.siam.org mms), which defines itself as an interdisciplinary journal focusing on the fundamental modeling and computational principles underlying various multiscale methods. The Journal of Multiscale Modeling (www.worldscinet.com jmm ) is also targeted at multiscale modeling in general. There are also several books which present multiscale research in a range of domains @cite_17 @cite_12 , as well as dozens of multiscale modeling workshops such as the Multiscale Materials Meeting (www.mrs.org.sg mmm2012) or the Modelling and Computing Multiscale Systems workshop (www.computationalscience.nl MCMS2013). There are several articles which focus on the theoretical aspects of multiscale modelling across domains. @cite_3 present a thorough and systematic review of the computational and (especially) the conceptual toolkits for multiscale modelling. In addition, @cite_13 investigate the modeling aspects of multiscale simulations, emphasizing simulations using Cellular Automata.
|
{
"cite_N": [
"@cite_3",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2074548003",
"192734418",
"2169038227",
"421315060"
],
"abstract": [
"Abstract Analytical methods for studying tsunami run-up become untenable when the tsunami wave runs on beaches. In this study, we use a finite-element procedure that includes the interaction between solid and fluid based on the potential flow theory to simulate the dynamics of tsunami wave induced by a thrust fault earthquake in order to investigate the effect of different beach slopes on the tsunami run-up. The simulated run-up shows significantly differences from that predicted by the analytical solution. The maximum run-up shows a negative linear relationship with the square root of the cotangent of the beach slope, similar to the result of the analytical solution using a solitary wave as the incident wave, but with different slopes and amplitudes.",
"Cellular Automata (CA) are generally acknowledged to be a powerful way to describe and model natural phenomena [1–3]. There are even tempting claims that nature itself is one big (quantum) information processing system, e.g. [4], and that CA may actually be nature’s way to do this processing [5–7]. We will not embark on this philosophical road, but ask ourselves a more mundane question. Can we use CA to model the inherently multi-scale processes in nature and use these models for efficient simulations on digital computers?",
"Multiscale modeling is required for linking physiological processes operating at the organ and tissue levels to signal transduction networks and other subcellular processes. Several XML markup languages, including CellML, have been developed to encode models and to facilitate the building of model repositories and general purpose software tools. Progress in this area is described and illustrated with reference to the heart Physiome Project which aims to understand cardiac arrhythmias in terms of structure-function relations from proteins up to cells, tissues and organs.",
"Small scale features and processes occurring at a nanometer and femtoseconds scales have a profound impact on what happens at a larger scale and over extensive period of time. The primary objective of this volume is to reflect the-state-of-the art in multiscale mathematics, modeling and simulations and to address the following barriers: What is the information that needs to be transferred from one model or scale to another and what physical principles must be satisfied during the transfer of information? What are the optimal ways to achieve such transfer of information? How to quantify variability of physical parameters at multiple scales and how to account for it to ensure design robustness? Various multiscale approaches in space and time presented in this Volume are grouped into two main categories: information-passing and concurrent. In the concurrent approaches, various scales are simultaneously resolved, whereas in the information-passing methods, the fine scale is modeled and its gross response is infused into the continuum scale. The issue of reliability of multiscale modeling and simulation tools is discussed in several, which focus on hierarchy of multiscale models and a posterior model error estimation including uncertainty quantification. Component software that can be effectively combined to address a wide range of multiscale simulations is described as well. Applications range from advanced materials, to nanoelectromechanical systems (NEMS), to biological systems, and nanoporous catalysts where physical phenomena operate across 12 orders of magnitude in time scales and 10 orders of magnitude in spatial scales. A valuable reference book for scientists, engineers and graduate students practicing in traditional engineering and science disciplines as well as in emerging fields of nanotechnology, biotechnology, microelectronics and energy."
]
}
|
1208.6067
|
2952547838
|
Many robotic systems deal with uncertainty by performing a sequence of information gathering actions. In this work, we focus on the problem of efficiently constructing such a sequence by drawing an explicit connection to submodularity. Ideally, we would like a method that finds the optimal sequence, taking the minimum amount of time while providing sufficient information. Finding this sequence, however, is generally intractable. As a result, many well-established methods select actions greedily. Surprisingly, this often performs well. Our work first explains this high performance -- we note a commonly used metric, reduction of Shannon entropy, is submodular under certain assumptions, rendering the greedy solution comparable to the optimal plan in the offline setting. However, reacting online to observations can increase performance. Recently developed notions of adaptive submodularity provide guarantees for a greedy algorithm in this online setting. In this work, we develop new methods based on adaptive submodularity for selecting a sequence of information gathering actions online. In addition to providing guarantees, we can capitalize on submodularity to attain additional computational speedups. We demonstrate the effectiveness of these methods in simulation and on a robot.
|
@cite_28 @cite_3 select a sequence of uncertainty-reducing tactile actions through forward search in a POMDP. Possible actions consist of pre-specified world-relative trajectories @cite_28 , i.e., motions defined relative to the current highest-probability state. Actions are selected using either information gain or probability of success as a metric @cite_29 , with a forward search depth of up to three actions. Aggressive pruning and clustering of observations make online selection tractable. While Hsiao considers a small, focused set of actions (typically ) at a greater depth, we consider a broad set of actions (typically ) at a search depth of one action.
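Depth-one greedy action selection by expected information gain, as discussed above, can be sketched as follows (a schematic sketch: `obs_model` and `update` stand in for the application's observation model and belief update, and are assumptions of this illustration, not the cited authors' implementation):

```python
import math

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief state."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

def greedy_info_gain_action(belief, actions, obs_model, update):
    """Depth-one greedy selection: choose the action minimizing the
    expected posterior entropy, i.e. maximizing expected information
    gain.  obs_model(b, a) yields (observation, probability) pairs
    and update(b, a, o) returns the posterior belief."""
    def expected_entropy(a):
        return sum(p_obs * entropy(update(belief, a, obs))
                   for obs, p_obs in obs_model(belief, a))
    return min(actions, key=expected_entropy)
```

A deeper forward search, as in the cited work, would recurse on the posterior beliefs instead of stopping after one step.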
|
{
"cite_N": [
"@cite_28",
"@cite_29",
"@cite_3"
],
"mid": [
"37733164",
"2226137842",
""
],
"abstract": [
"We describe a simple approach for executing manipulation programs in the presence of significant, but bounded, uncertainty. The key idea is to maintain a belief-state (a probability distribution over world states) and to execute fixed trajectories relative to the most-likely state of the world. These world-relative trajectories, as well as the transition and observation models needed for belief update, are all constructed off-line, so the approach does not require any on-line motion planning.",
"In this paper, we present an approach for robustly grasping objects under positional uncertainty. We maintain a belief state (a probability distribution over world states), model the problem as a partially observable Markov decision process (POMDP), and select actions with a receding horizon using forward search through the belief space. Our actions are world-relative trajectories, or fixed trajectories expressed relative to the most-likely state of the world. We localize the object, ensure its reachability, and robustly grasp it at a goal position by using information-gathering, reorientation, and goal actions. We choose among candidate actions in a tractable way online by computing and storing the observation models needed for belief update offline. This framework is used to successfully grasp objects (including a powerdrill and a Brita pitcher) despite significant uncertainty, both in simulation and with an actual robot arm.",
""
]
}
|
1208.6067
|
2952547838
|
Many robotic systems deal with uncertainty by performing a sequence of information gathering actions. In this work, we focus on the problem of efficiently constructing such a sequence by drawing an explicit connection to submodularity. Ideally, we would like a method that finds the optimal sequence, taking the minimum amount of time while providing sufficient information. Finding this sequence, however, is generally intractable. As a result, many well-established methods select actions greedily. Surprisingly, this often performs well. Our work first explains this high performance -- we note a commonly used metric, reduction of Shannon entropy, is submodular under certain assumptions, rendering the greedy solution comparable to the optimal plan in the offline setting. However, reacting online to observations can increase performance. Recently developed notions of adaptive submodularity provide guarantees for a greedy algorithm in this online setting. In this work, we develop new methods based on adaptive submodularity for selecting a sequence of information gathering actions online. In addition to providing guarantees, we can capitalize on submodularity to attain additional computational speedups. We demonstrate the effectiveness of these methods in simulation and on a robot.
|
@cite_9 consider the problem of full 6DOF pose estimation of objects through tactile feedback. Their primary contribution is an algorithm capable of running in the full 6DOF space quickly. In their experiments, action selection was done randomly, as they do not attempt to select optimal actions. To achieve an error of @math , they needed an average of 29 actions for objects with complicated meshes. While this does show that even random actions achieve localization eventually, we note that our methods take significantly fewer actions.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2098433547"
],
"abstract": [
"Researchers have addressed the localization problem for mobile robots using many different kinds of sensors, including rangefinders, cameras, and odometers. In this paper, we consider localization using a robot that is virtually \"blind\", having only a clock and contact sensor at its disposal. This represents a drastic reduction in sensing requirements, even in light of existing work that considers localization with limited sensing. We present probabilistic techniques that represent and update the robot's position uncertainty and algorithms to reduce this uncertainty. We demonstrate the experimental effectiveness of these methods using a Roomba autonomous vacuum cleaner robot in laboratory environments."
]
}
|
1208.6067
|
2952547838
|
Many robotic systems deal with uncertainty by performing a sequence of information gathering actions. In this work, we focus on the problem of efficiently constructing such a sequence by drawing an explicit connection to submodularity. Ideally, we would like a method that finds the optimal sequence, taking the minimum amount of time while providing sufficient information. Finding this sequence, however, is generally intractable. As a result, many well-established methods select actions greedily. Surprisingly, this often performs well. Our work first explains this high performance -- we note a commonly used metric, reduction of Shannon entropy, is submodular under certain assumptions, rendering the greedy solution comparable to the optimal plan in the offline setting. However, reacting online to observations can increase performance. Recently developed notions of adaptive submodularity provide guarantees for a greedy algorithm in this online setting. In this work, we develop new methods based on adaptive submodularity for selecting a sequence of information gathering actions online. In addition to providing guarantees, we can capitalize on submodularity to attain additional computational speedups. We demonstrate the effectiveness of these methods in simulation and on a robot.
|
Dogar and Srinivasa @cite_26 use the natural interaction of an end effector and an object to handle uncertainty with a push-grasp. By utilizing offline simulation, they reduce the online problem to enclosing the object's uncertainty in a pre-computed capture region. Online, they simply plan a push-grasp that encloses the uncertainty inside the capture region. This work is complementary to ours: the push-grasp works well on objects that slide easily, while we assume objects do not move. We believe each approach is applicable in different scenarios.
|
{
"cite_N": [
"@cite_26"
],
"mid": [
"2128082316"
],
"abstract": [
"We add to a manipulator's capabilities a new primitive motion which we term a push-grasp. While significant progress has been made in robotic grasping of objects and geometric path planning for manipulation, such work treats the world and the object being grasped as immovable, often declaring failure when simple motions of the object could produce success. We analyze the mechanics of push-grasping and present a quasi-static tool that can be used both for analysis and simulation. We utilize this analysis to derive a fast, feasible motion planning algorithm that produces stable pushgrasp plans for dexterous hands in the presence of object pose uncertainty and high clutter. We demonstrate our algorithm extensively in simulation and on HERB, a personal robotics platform developed at Intel Labs Pittsburgh."
]
}
|
1208.6067
|
2952547838
|
Many robotic systems deal with uncertainty by performing a sequence of information gathering actions. In this work, we focus on the problem of efficiently constructing such a sequence by drawing an explicit connection to submodularity. Ideally, we would like a method that finds the optimal sequence, taking the minimum amount of time while providing sufficient information. Finding this sequence, however, is generally intractable. As a result, many well-established methods select actions greedily. Surprisingly, this often performs well. Our work first explains this high performance -- we note a commonly used metric, reduction of Shannon entropy, is submodular under certain assumptions, rendering the greedy solution comparable to the optimal plan in the offline setting. However, reacting online to observations can increase performance. Recently developed notions of adaptive submodularity provide guarantees for a greedy algorithm in this online setting. In this work, we develop new methods based on adaptive submodularity for selecting a sequence of information gathering actions online. In addition to providing guarantees, we can capitalize on submodularity to attain additional computational speedups. We demonstrate the effectiveness of these methods in simulation and on a robot.
|
Outside of robotics, many have addressed the problem of query selection for identification. In the noise-free setting, a simple adaptive algorithm known as generalized binary search (GBS) @cite_7 is provably near optimal. Interestingly, this algorithm selects queries identically to greedy information gain when there are only two outcomes @cite_12 . The GBS method was later extended to multiple outcomes and shown to be adaptive submodular @cite_21 . Our Hypothesis Pruning metric is similar to this formulation, but with different action and observation spaces that enable us to model touch actions naturally.
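The greedy query-selection step of generalized binary search can be sketched as follows (a minimal noise-free illustration with binary responses; names are ours):

```python
def gbs_select_query(hypotheses, queries, response):
    """Greedy step of generalized binary search: pick the query that
    most evenly splits the remaining hypotheses between the two
    possible answers."""
    def imbalance(q):
        ones = sum(response(q, h) for h in hypotheses)
        return abs(2 * ones - len(hypotheses))
    return min(queries, key=imbalance)

def gbs_identify(hypotheses, queries, response, oracle):
    """Repeatedly ask the most discriminating query and prune the
    hypotheses inconsistent with the (noise-free) answer."""
    remaining = list(hypotheses)
    while len(remaining) > 1:
        q = gbs_select_query(remaining, queries, response)
        answer = oracle(q)
        remaining = [h for h in remaining if response(q, h) == answer]
    return remaining[0]
```

On one-dimensional threshold functions this reduces to classic binary search, identifying the true hypothesis in a logarithmic number of queries.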
|
{
"cite_N": [
"@cite_21",
"@cite_12",
"@cite_7"
],
"mid": [
"2962795549",
"1528519380",
"2543543809"
],
"abstract": [
"Many problems in artificial intelligence require adaptively making a sequence of decisions with uncertain outcomes under partial observability. Solving such stochastic optimization problems is a fundamental but notoriously difficult challenge. In this paper, we introduce the concept of adaptive submodularity, generalizing submodular set functions to adaptive policies. We prove that if a problem satisfies this property, a simple adaptive greedy algorithm is guaranteed to be competitive with the optimal policy. In addition to providing performance guarantees for both stochastic maximization and coverage, adaptive submodularity can be exploited to drastically speed up the greedy algorithm by using lazy evaluations. We illustrate the usefulness of the concept by giving several examples of adaptive submodular objectives arising in diverse AI applications including management of sensing resources, viral marketing and active learning. Proving adaptive submodularity for these problems allows us to recover existing results in these applications as special cases, improve approximation guarantees and handle natural generalizations.",
"We consider the problem of diagnosing faults in a system represented by a Bayesian network, where diagnosis corresponds to recovering the most likely state of unobserved nodes given the outcomes of tests (observed nodes). Finding an optimal subset of tests in this setting is intractable in general. We show that it is difficult even to compute the next most-informative test using greedy test selection, as it involves several entropy terms whose exact computation is intractable. We propose an approximate approach that utilizes the loopy belief propagation infrastructure to simultaneously compute approximations of marginal and conditional entropies on multiple subsets of nodes. We apply our method to fault diagnosis in computer networks, and show the algorithm to be very effective on realistic Internet-like topologies. We also provide theoretical justification for the greedy test selection approach, along with some performance guarantees.",
"This paper studies a generalization of the classic binary search problem of locating a desired value within a sorted list. The classic problem can be viewed as determining the correct one-dimensional, binary-valued threshold function from a finite class of such functions based on queries taking the form of point samples of the function. The classic problem is also equivalent to a simple binary encoding of the threshold location. This paper extends binary search to learning more general binary-valued functions. Specifically, if the set of target functions and queries satisfy certain geometrical relationships, then an algorithm, based on selecting a query that is maximally discriminating at each step, will determine the correct function in a number of steps that is logarithmic in the number of functions under consideration. Examples of classes satisfying the geometrical relationships include linear separators in multiple dimensions. Extensions to handle noise are also discussed. Possible applications include machine learning, channel coding, and sequential experimental design."
]
}
|
1208.6067
|
2952547838
|
Many robotic systems deal with uncertainty by performing a sequence of information gathering actions. In this work, we focus on the problem of efficiently constructing such a sequence by drawing an explicit connection to submodularity. Ideally, we would like a method that finds the optimal sequence, taking the minimum amount of time while providing sufficient information. Finding this sequence, however, is generally intractable. As a result, many well-established methods select actions greedily. Surprisingly, this often performs well. Our work first explains this high performance -- we note a commonly used metric, reduction of Shannon entropy, is submodular under certain assumptions, rendering the greedy solution comparable to the optimal plan in the offline setting. However, reacting online to observations can increase performance. Recently developed notions of adaptive submodularity provide guarantees for a greedy algorithm in this online setting. In this work, we develop new methods based on adaptive submodularity for selecting a sequence of information gathering actions online. In addition to providing guarantees, we can capitalize on submodularity to attain additional computational speedups. We demonstrate the effectiveness of these methods in simulation and on a robot.
|
Recently, guarantees have also been established for the case of noisy observations. For binary outcomes and independent, random noise, GBS was extended to noisy generalized binary search @cite_17 . For cases of persistent noise, where performing the same action results in the same noisy outcome, adaptive submodular formulations have been developed based on eliminating noisy versions of each hypothesis @cite_36 @cite_14 . In all of these cases, the message is the same: with the right formulation, greedy selection performs well for uncertainty reduction.
|
{
"cite_N": [
"@cite_36",
"@cite_14",
"@cite_17"
],
"mid": [
"2143203060",
"",
"2135496842"
],
"abstract": [
"We tackle the fundamental problem of Bayesian active learning with noise, where we need to adaptively select from a number of expensive tests in order to identify an unknown hypothesis sampled from a known prior distribution. In the case of noise-free observations, a greedy algorithm called generalized binary search (GBS) is known to perform near-optimally. We show that if the observations are noisy, perhaps surprisingly, GBS can perform very poorly. We develop EC2, a novel, greedy active learning algorithm and prove that it is competitive with the optimal policy, thus obtaining the first competitiveness guarantees for Bayesian active learning with noisy observations. Our bounds rely on a recently discovered diminishing returns property called adaptive submodularity, generalizing the classical notion of submodular set functions to adaptive policies. Our results hold even if the tests have non-uniform cost and their noise is correlated. We also propose EFFECX-TIVE, a particularly fast approximation of EC2, and evaluate it on a Bayesian experimental design problem involving human subjects, intended to tease apart competing economic theories of how people make decisions under uncertainty.",
"",
"This paper addresses the problem of noisy Generalized Binary Search (GBS). GBS is a well-known greedy algorithm for determining a binary-valued hypothesis through a sequence of strategically selected queries. At each step, a query is selected that most evenly splits the hypotheses under consideration into two disjoint subsets, a natural generalization of the idea underlying classic binary search. GBS is used in many applications, including fault testing, machine diagnostics, disease diagnosis, job scheduling, image processing, computer vision, and active learning. In most of these cases, the responses to queries can be noisy. Past work has provided a partial characterization of GBS, but existing noise-tolerant versions of GBS are suboptimal in terms of query complexity. This paper presents an optimal algorithm for noisy GBS and demonstrates its application to learning multidimensional threshold functions."
]
}
|
1208.6125
|
2951854266
|
Efficient communication in wireless networks is typically challenged by the possibility of interference among several transmitting nodes. Much important research has been invested in decreasing the number of collisions in order to obtain faster algorithms for communication in such networks. This paper proposes a novel approach for wireless communication, which embraces collisions rather than avoiding them, over an additive channel. It introduces a coding technique called Bounded-Contention Coding (BCC) that allows collisions to be successfully decoded by the receiving nodes into the original transmissions and whose complexity depends on a bound on the contention among the transmitters. BCC enables deterministic local broadcast in a network with n nodes and at most a transmitters with information of l bits each within O(a log n + al) bits of communication with full-duplex radios, and O((a log n + al)(log n)) bits, with high probability, with half-duplex radios. When combined with random linear network coding, BCC gives global broadcast within O((D + a + log n)(a log n + l)) bits, with high probability. This also holds in dynamic networks that can change arbitrarily over time by a worst-case adversary. When no bound on the contention is given, it is shown how to probabilistically estimate it and obtain global broadcast that is adaptive to the true contention in the network.
|
The finite-field additive radio network model of communication considered in this paper, where collisions result in an addition, over a finite field, of the transmitted signals, was previously studied in @cite_6 @cite_30 , where the main focus was on the capacity of the network, i.e., the amount of information that can be reliably transmitted in the network. While the proof of the validity of the approximation @cite_6 is subtle, the intuition behind this work can be readily gleaned from a simple observation of the Cover-Wyner multiple access channel capacity region. Under high SNR regimes, the pentagon of the Cover-Wyner region can, in the limit, be decomposed into a rectangle, appended to a right isosceles triangle @cite_30 . The rectangle can be interpreted as the communication region given by the bits that do not interfere. Such bits do not require special attention. In the case where the SNRs at the receiver for the different users are the same, this rectangle vanishes. The triangular region is the same capacity region as that of a noise-free additive multiple access channel in a finite field @cite_29 , leading naturally to an additive model over a finite field.
|
{
"cite_N": [
"@cite_30",
"@cite_29",
"@cite_6"
],
"mid": [
"2952528012",
"1857078381",
""
],
"abstract": [
"The capacity of multiuser networks has been a long-standing problem in information theory. Recently, have proposed a deterministic network model to approximate multiuser wireless networks. This model, known as the ADT network model, takes into account the broadcast nature of wireless medium and interference. We show that the ADT network model can be described within the algebraic network coding framework introduced by Koetter and Medard. We prove that the ADT network problem can be captured by a single matrix, and show that the min-cut of an ADT network is the rank of this matrix; thus, eliminating the need to optimize over exponential number of cuts between two nodes to compute the min-cut of an ADT network. We extend the capacity characterization for ADT networks to a more general set of connections, including single unicast multicast connection and non-multicast connections such as multiple multicast, disjoint multicast, and two-level multicast. We also provide sufficiency conditions for achievability in ADT networks for any general connection set. In addition, we show that random linear network coding, a randomized distributed algorithm for network code construction, achieves the capacity for the connections listed above. Furthermore, we extend the ADT networks to those with random erasures and cycles (thus, allowing bi-directional links). In addition, we propose an efficient linear code construction for the deterministic wireless multicast relay network model. 's proposed code construction is not guaranteed to be efficient and may potentially involve an infinite block length. Unlike several previous coding schemes, we do not attempt to find flows in the network. Instead, for a layered network, we maintain an invariant where it is required that at each stage of the code construction, certain sets of codewords are linearly independent.",
"We examine the issue of separation and code design for network data transmission environments. We demonstrate that source-channel sep-aration holds for several canonical network channel models when the whole network operates over a common finite field. Our approach uses linear codes. This simple, unifying framework allows us to re-establish with economy the optimality of linear codes for single transmitter channels and for Slepian-Wolf source coding. It also enables us to establish the optimality of linear codes for multiple access channels and for erasure broadcast channels. Moreover, we show that source-channel separation holds for these networks. This robustness of separation we show to be strongly predicated on the fact that noise and inputs are independent. The linearity of source, channel, and network coding blurs the delineation between these codes, and thus we explore joint linear de-sign. Finally, we illustrate the fact that design for individual network modules may yield poor results when such modules are concatenated, demonstrating that end-to-end coding is necessary. Thus, we argue, it is the lack of decomposability into canonical network modules, rather than the lack of separation between source and channel coding, that presents major challenges for coding in networks.",
""
]
}
|
1208.6125
|
2951854266
|
Efficient communication in wireless networks is typically challenged by the possibility of interference among several transmitting nodes. Much important research has been invested in decreasing the number of collisions in order to obtain faster algorithms for communication in such networks. This paper proposes a novel approach for wireless communication, which embraces collisions rather than avoiding them, over an additive channel. It introduces a coding technique called Bounded-Contention Coding (BCC) that allows collisions to be successfully decoded by the receiving nodes into the original transmissions and whose complexity depends on a bound on the contention among the transmitters. BCC enables deterministic local broadcast in a network with n nodes and at most a transmitters with information of l bits each within O(a log n + al) bits of communication with full-duplex radios, and O((a log n + al)(log n)) bits, with high probability, with half-duplex radios. When combined with random linear network coding, BCC gives global broadcast within O((D + a + log n)(a log n + l)) bits, with high probability. This also holds in dynamic networks that can change arbitrarily over time by a worst-case adversary. When no bound on the contention is given, it is shown how to probabilistically estimate it and obtain global broadcast that is adaptive to the true contention in the network.
|
In the setting of a wireless network, deterministic global broadcast of a single message was studied in @cite_24 @cite_10 @cite_25 , with the best known results being @math and @math , where @math is the diameter of the network. Bar- @cite_15 were the first to study randomized global broadcast algorithms. Kowalski and Pelc @cite_24 and Czumaj and Rytter @cite_25 presented randomized solutions based on selecting sequences, with complexities of @math . These algorithms match the lower bounds of @cite_3 @cite_36 , but in a model that is weaker than the one addressed in this paper. The algorithms mentioned above are all for global broadcast of one message from a known source. For multiple messages, a deterministic algorithm for @math messages with complexity @math appears in @cite_22 , while randomized global broadcast of multiple messages was studied in @cite_37 @cite_31 @cite_1 . We refer the reader to an excellent survey on broadcasting in radio networks in @cite_12 .
|
{
"cite_N": [
"@cite_37",
"@cite_31",
"@cite_22",
"@cite_36",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_15",
"@cite_10",
"@cite_25",
"@cite_12"
],
"mid": [
"2041120044",
"2161281242",
"78560857",
"2095713941",
"",
"",
"2048565150",
"2142458954",
"1983915440",
"2160268831",
""
],
"abstract": [
"Two tasks of communication in a multi-hop synchronous radio network are considered: point-to-point communication and broadcast (sending a message to all nodes of a network). Efficient protocols for both problems are presented. Even though the protocols are probabilistic, it is shown how to acknowledge messages deterministically. Let n, D, and ∆ be the number of nodes, the diameter and the maximum degree of our network, respectively. Both protocols require a setup phase in which a BFS tree is constructed. This phase takes O ((n + Dlogn)log∆) time. After the setup, k point-to-point transmissions require O ((k +D)log∆) time on the average. Therefore the network allows a new transmission every O (log∆) time slots. Also, k broadcasts require an average of O ((k +D)log∆logn) time. Hence the average throughput of the network is a broadcast every O(log∆logn) time slots. Both protocols pipeline the messages along the BFS tree. They are always successful on the graph spanned by the BFS tree. Their probabilistic behavior refers only to the running time. Using the above protocols the ranking problem is solved in O (nlognlog∆) time. The performance analysis of both protocols constitutes a new application of queueing theory.",
"In much of the theoretical literature on wireless algorithms, issues of message dissemination are considered together with issues of contention management. This combination leads to complicated algorithms and analysis, and makes it difficult to extend the work to harder communication problems. In this paper, we present results of a current project aimed at simplifying such algorithms and analysis by decomposing the treatment into two levels, using abstract \"MAC layer\" specifications to encapsulate the contention management. We use two different abstract MAC layers: the basic one of [14, 15] and a new probabilistic layer. We first present a typical randomized contention-manageent algorithm for a standard graph-based radio network model We show that it implements both abstract MAC layers. We combine this algorithm with greedy algorithms for single-message and multi-message global broadcast and analyze the combination, using both abstract MAC layers as intermediate layers. Using the basic MAC layer, we prove a bound of O(D log(n ∈) log Δ) for the time to deliver a single message everywhere with probability 1 -- ∈, where D is the network diameter, n is the number of nodes, and Δ is the maximum node degree. Using the probabilistic layer, we prove a bound of O((D + log(n ∈)) log Δ), which matches the best previously-known bound for single-message broadcast over the physical network model. For multi-message broadcast, we obtain bounds of O((D + kΔ) log(n ∈) log Δ) using the basic layer and O((D + kΔ log(n ∈)) log Δ) using the probabilistic layer, for the time to deliver a message everywhere in the presence of at most k concurrent messages.",
"We present new distributed deterministic solutions to two communication problems in n-node ad-hoc radio networks: rumor gathering and multi-broadcast. In these problems, some or all nodes of the network initially contain input data called rumors, which have to be learned by other nodes. In rumor gathering, there are k rumors initially distributed arbitrarily among the nodes, and the goal is to collect all the rumors at one node. Our rumor gathering algorithm works in O((k + n) log n) time and our multi-broadcast algorithm works in O(k log3 n + n log4 n) time, for any n-node networks and k rumors (with arbitrary k), which is a substantial improvement over the best previously known deterministic solutions to these problems. As a consequence, we exponentially decrease the gap between upper and lower bounds on the deterministic time complexity of four communication problems: rumor gathering, multi-broadcast, gossiping and routing, in the important case when every node has initially at most one rumor (this is the scenario for gossiping and for the usual formulation of routing). Indeed, for k = O(n), our results simultaneously decrease the complexity gaps for these four problems from polynomial to polylogarithmic in the size of the graph. Moreover, our deterministic gathering algorithm applied for k = O(n) rumors, improves over the best previously known randomized algorithm of time O(k log n + n log2 n).",
"A radio network is a synchronous network of processors that communicate by transmitting messages to their neighbors, where a processor receives a message in a given step if and only if it is silent in this step and precisely one of its neighbors transmits. In this paper we prove the existence of a family of radius-2 networks on n vertices for which any broadcast schedule requires at least Omega((log n log log n)2) rounds of transmissions. This almost matches an upper bound of O(log2 n) rounds for networks of radius 2 proved earlier by Bar-Yehuda, Goldreich, and Itai.",
"",
"",
"We consider distributed broadcasting in radio networks, modeled as undirected graphs, whose nodes have no information on the topology of the network, nor even on their immediate neighborhood. For randomized broadcasting, we give an algorithm working in expected time O(D log(n D) + log2 n) in n-node radio networks of diameter D, which is optimal, as it matches the lower bounds of [1] and Kushilevitz and Mansour [14]. Our algorithm improves the best previously known randomized broadcasting algorithm of Bar-Yehuda, Goldreich and Itai [3], running in expected time O(D log n + log2 n). For deterministic broadcasting, we show the lower bound Ω(n(log n) (log (n D)))) on broadcasting time in n-node radio networks of diameter D. This implies previously known lower bounds of Bar-Yehuda, Goldreich and Itai [3] and Bruschi and Del Pinto [5], and is sharper than any of them in many cases. We also give an algorithm working in time O(n log n), thus shrinking -- for the first time -- the gap between the upper and the lower bound on deterministic broadcasting time to a logarithmic factor.",
"The time-complexity of deterministic and randomized protocols for achieving broadcast (distributing a message from a source to all other nodes) in arbitrary multi-hop radio networks is investigated. In many such networks, communication takes place in synchronous time-slots. A processor receives a message at a certain time-slot if exactly one of its neighbors transmits at that time-slot. We assume no collision-detection mechanism; i.e., it is not always possible to distinguish the case where no neighbor transmits from the case where several neighbors transmit simultaneously. We present a randomized protocol that achieves broadcast in time which is optimal up to a logarithmic factor. In particular, with probability 1 --E, the protocol achieves broadcast within O((D + log n s) ‘log n) time-slots, where n is the number of processors in the network and D its diameter. On the other hand, we prove a linear lower bound on the deterministic time-complexity of broadcast in this model. Namely, we show that any deterministic broadcast protocol requires 8(n) time-slots, even if the network has diameter 3, and n is known to all processors. These two results demonstrate an exponential gap in complexity between randomization and determinism.",
"We consider the problem of broadcasting in an unknown radio network modeled as a directed graph @math , where @math . In unknown networks, every node knows only its own label, while it is unaware of any other parameter of the network, including its neighborhood and even any upper bound on the number of nodes. We show an @math upper bound on the time complexity of deterministic broadcasting. This is an improvement over the currently best upper bound @math for arbitrary networks, thus shrinking exponentially the existing gap between the lower bound @math and the upper bound from @math to @math .",
"In this paper we present new randomized and deterministic algorithms for the classical problem of broadcasting in radio networks with unknown topology. We consider directed n-node radio networks with specified eccentricity D (maximum distance from the source node to any other node). Bar- presented an algorithm that for any n-node radio network with eccentricity D completes the broadcasting in O(D log n + log2 n) time, with high probability. This result is almost optimal, since as it has been shown by Kushilevitz and Mansour and , every randomized algorithm requires Ω (D log(n D) + log2 n) expected time to complete broadcasting.Our first main result closes the gap between the lower and upper bound: we describe an optimal randomized broadcasting algorithm whose running time complexity is O(D log (n D + log2 n), with high probability. In particular, we obtain a randomized algorithm that completes broadcasting in any n-node radio network in time O(n), with high probability.The main source of our improvement is a better \"selecting sequence\" used by the algorithm that brings some stronger property and improves the broadcasting time. Two types of \"selecting sequences\" are considered: randomized and deterministic ones. The algorithm with a randomized sequence is easier (more intuitive) to analyze but both randomized and deterministic sequences give algorithms of the same asymptotic complexity.",
""
]
}
|
1208.6406
|
1986220066
|
Cloud Computing has emerged as a successful computing paradigm for efficiently utilizing managed compute infrastructure such as high speed rack-mounted servers, connected with high speed networking, and reliable storage. Usually such infrastructure is dedicated, physically secured and has reliable power and networking infrastructure. However, much of our idle compute capacity is present in unmanaged infrastructure like idle desktops, lab machines, physically distant server machines, and laptops. We present a scheme to utilize this idle compute capacity on a best-effort basis and provide high availability even in face of failure of individual components or facilities. We run virtual machines on the commodity infrastructure and present a cloud interface to our end users. The primary challenge is to maintain availability in the presence of node failures, network failures, and power failures. We run multiple copies of a Virtual Machine (VM) redundantly on geographically dispersed physical machines to achieve availability. If one of the running copies of a VM fails, we seamlessly switchover to another running copy. We use Virtual Machine Record Replay capability to implement this redundancy and switchover. In current progress, we have implemented VM Record Replay for uniprocessor machines over Linux KVM and are currently working on VM Record Replay on shared-memory multiprocessor machines. We report initial experimental results based on our implementation.
|
Previous efforts on utilizing idle compute capacity include SETI@Home @cite_1 , Folding@Home @cite_7 , BOINC project @cite_5 , etc. Our work has a similar philosophy. The difference is in the level of abstraction. These previous efforts require that the programs be written to a specific programming model, and then provide a middleware which needs to be installed in all participating host machines. The middleware then coordinates and schedules the client programs. In contrast, our abstraction is more general (and often more powerful) than the middleware approach. Our computation units are VMs, allowing a client full freedom to run her favourite OS and applications on the participating hosts, without compromising security and reliability. To provide reliability, the middleware-based approaches usually constrain the programming model for easy restartability. In contrast, we allow a completely flexible programming model and provide reliability through efficient recording and replaying.
|
{
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_7"
],
"mid": [
"2142863519",
"2103363198",
"1673352269"
],
"abstract": [
"BOINC (Berkeley Open Infrastructure for Network Computing) is a software system that makes it easy for scientists to create and operate public-resource computing projects. It supports diverse applications, including those with large storage or communication requirements. PC owners can participate in multiple BOINC projects, and can specify how their resources are allocated among these projects. We describe the goals of BOINC, the design issues that we confronted, and our solutions to these problems.",
"Millions of computer owners worldwide contribute computer time to the search for extraterrestrial intelligence, performing the largest computation ever.",
"For decades, researchers have been applying computer simulation to address problems in biology. However, many of these \"grand challenges\" in computational biology, such as simulating how proteins fold, remained unsolved due to their great complexity. Indeed, even to simulate the fastest folding protein would require decades on the fastest modern CPUs. Here, we review novel methods to fundamentally speed such previously intractable problems using a new computational paradigm: distributed computing. By efficiently harnessing tens of thousands of computers throughout the world, we have been able to break previous computational barriers. However, distributed computing brings new challenges, such as how to efficiently divide a complex calculation of many PCs that are connected by relatively slow networking. Moreover, even if the challenge of accurately reproducing reality can be conquered, a new challenge emerges: how can we take the results of these simulations (typically tens to hundreds of gigabytes of raw data) and gain some insight into the questions at hand. This challenge of the analysis of the sea of data resulting from large-scale simulation will likely remain for decades to come."
]
}
|
1208.6406
|
1986220066
|
Cloud Computing has emerged as a successful computing paradigm for efficiently utilizing managed compute infrastructure such as high speed rack-mounted servers, connected with high speed networking, and reliable storage. Usually such infrastructure is dedicated, physically secured and has reliable power and networking infrastructure. However, much of our idle compute capacity is present in unmanaged infrastructure like idle desktops, lab machines, physically distant server machines, and laptops. We present a scheme to utilize this idle compute capacity on a best-effort basis and provide high availability even in face of failure of individual components or facilities. We run virtual machines on the commodity infrastructure and present a cloud interface to our end users. The primary challenge is to maintain availability in the presence of node failures, network failures, and power failures. We run multiple copies of a Virtual Machine (VM) redundantly on geographically dispersed physical machines to achieve availability. If one of the running copies of a VM fails, we seamlessly switchover to another running copy. We use Virtual Machine Record Replay capability to implement this redundancy and switchover. In current progress, we have implemented VM Record Replay for uniprocessor machines over Linux KVM and are currently working on VM Record Replay on shared-memory multiprocessor machines. We report initial experimental results based on our implementation.
|
Our current prototype can efficiently record and replay a uniprocessor VM. We have also implemented record replay for multiprocessor VMs. Multiprocessor record replay is significantly harder due to the presence of race conditions on shared memory by multiple processors. We have implemented a page-ownership scheme based on CREW (concurrent read exclusive write) protocol @cite_8 to record and replay a guest OS. We can successfully replay an unmodified guest, albeit at high overheads. The overheads depend on the workload and could be as high as 2-3x slowdowns for 2-processor VMs. Another approach, DoublePlay @cite_6 , has been proposed to make multiprocessor record replay more performant. DoublePlay works by recording the order of all synchronization operations in the program being recorded. Because a guest OS could have arbitrary synchronization primitives, it is hard to directly use DoublePlay's ideas for VM Record Replay. We are currently working on approaches to make multiprocessor VM record replay faster.
|
{
"cite_N": [
"@cite_6",
"@cite_8"
],
"mid": [
"2115855199",
"2171956059"
],
"abstract": [
"Deterministic replay systems record and reproduce the execution of a hardware or software system. In contrast to replaying execution on uniprocessors, deterministic replay on multiprocessors is very challenging to implement efficiently because of the need to reproduce the order or values read by shared memory operations performed by multiple threads. In this paper, we present DoublePlay, a new way to efficiently guarantee replay on commodity multiprocessors. Our key insight is that one can use the simpler and faster mechanisms of single-processor record and replay, yet still achieve the scalability offered by multiple cores, by using an additional execution to parallelize the record and replay of an application. DoublePlay timeslices multiple threads on a single processor, then runs multiple time intervals (epochs) of the program concurrently on separate processors. This strategy, which we call uniparallelism, makes logging much easier because each epoch runs on a single processor (so threads in an epoch never simultaneously access the same memory) and different epochs operate on different copies of the memory. Thus, rather than logging the order of shared-memory accesses, we need only log the order in which threads in an epoch are timesliced on the processor. DoublePlay runs an additional execution of the program on multiple processors to generate checkpoints so that epochs run in parallel. We evaluate DoublePlay on a variety of client, server, and scientific parallel benchmarks; with spare cores, DoublePlay reduces logging overhead to an average of 15 with two worker threads and 28 with four threads.",
"Execution replay of virtual machines is a technique which has many important applications, including debugging, fault-tolerance, and security. Execution replay for single processor virtual machines is well-understood, and available commercially. With the advancement of multi-core architectures, however, multiprocessor virtual machines are becoming more important. Our system, SMP-ReVirt, is the first system to log and replay a multiprocessor virtual machine on commodity hardware. We use hardware page protection to detect and accurately replay sharing between virtual cpus of a multi-cpu virtual machine, allowing us to replay the entire operating system and all applications. We have tested our system on a variety of workloads, and find that although sharing under SMP-ReVirt is expensive, for many workloads and applications, including debugging, the overhead is acceptable."
]
}
|
1208.5801
|
2949734647
|
Scientists study trajectory data to understand trends in movement patterns, such as human mobility for traffic analysis and urban planning. There is a pressing need for scalable and efficient techniques for analyzing this data and discovering the underlying patterns. In this paper, we introduce a novel technique which we call vector-field @math -means. The central idea of our approach is to use vector fields to induce a similarity notion between trajectories. Other clustering algorithms seek a representative trajectory that best describes each cluster, much like @math -means identifies a representative "center" for each cluster. Vector-field @math -means, on the other hand, recognizes that in all but the simplest examples, no single trajectory adequately describes a cluster. Our approach is based on the premise that movement trends in trajectory data can be modeled as flows within multiple vector fields, and the vector field itself is what defines each of the clusters. We also show how vector-field @math -means connects techniques for scalar field design on meshes and @math -means clustering. We present an algorithm that finds a locally optimal clustering of trajectories into vector fields, and demonstrate how vector-field @math -means can be used to mine patterns from trajectory data. We present experimental evidence of its effectiveness and efficiency using several datasets, including historical hurricane data, GPS tracks of people and vehicles, and anonymous call records from a large phone company. We compare our results to previous trajectory clustering techniques, and find that our algorithm performs faster in practice than the current state-of-the-art in trajectory clustering, in some examples by a large margin.
|
Like Rinzivillo et al., our overall approach falls within the broader category of visual and exploratory movement analysis, which exploits humans' ability to visually detect patterns, and then steers the visualization and analysis toward the regions of greatest interest. Andrienko and Andrienko @cite_25 @cite_49 @cite_38 have led the field in this area. Their work has focused on human-in-the-loop analysis systems, but has also included more general aggregation and visualization of movement data @cite_19 , and most recently the identification of important locations and events by analyzing movement data @cite_21 @cite_1 .
|
{
"cite_N": [
"@cite_38",
"@cite_21",
"@cite_1",
"@cite_19",
"@cite_49",
"@cite_25"
],
"mid": [
"2005948381",
"2009658718",
"1737853022",
"2069469539",
"2023846135",
"2063989480"
],
"abstract": [
"One of the most common operations in exploration and analysis of various kinds of data is clustering, i.e. discovery and interpretation of groups of objects having similar properties and or behaviors. In clustering, objects are often treated as points in multi-dimensional space of properties. However, structurally complex objects, such as trajectories of moving entities and other kinds of spatio-temporal data, cannot be adequately represented in this manner. Such data require sophisticated and computationally intensive clustering algorithms, which are very hard to scale effectively to large datasets not fitting in the computer main memory. We propose an approach to extracting meaningful clusters from large databases by combining clustering and classification, which are driven by a human analyst through an interactive visual interface.",
"We propose a visual analytics procedure for analyzing movement data, i.e., recorded tracks of moving objects. It is oriented to a class of problems where it is required to determine significant places on the basis of certain types of events occurring repeatedly in movement data. The procedure consists of four major steps: (1) event extraction from trajectories; (2) event clustering and extraction of relevant places; (3) spatio-temporal aggregation of events or trajectories; (4) analysis of the aggregated data. All steps are scalable with respect to the amount of the data under analysis. We demonstrate the use of the procedure by example of two real-world problems requiring analysis at different spatial scales.",
"Movement data (trajectories of moving agents) are hard to visualize: numerous intersections and overlapping between trajectories make the display heavily cluttered and illegible. It is necessary to use appropriate data abstraction methods. We suggest a method for spatial generalization and aggregation of movement data, which transforms trajectories into aggregate flows between areas. It is assumed that no predefined areas are given. We have devised a special method for partitioning the underlying territory into appropriate areas. The method is based on extracting significant points from the trajectories. The resulting abstraction conveys essential characteristics of the movement. The degree of abstraction can be controlled through the parameters of the method. We introduce local and global numeric measures of the quality of the generalization, and suggest an approach to improve the quality in selected parts of the territory where this is deemed necessary. The suggested method can be used in interactive visual exploration of movement data and for creating legible flow maps for presentation purposes.",
"Data about movements of various objects are collected in growing amounts by means of current tracking technologies. Traditional approaches to visualization and interactive exploration of movement data cannot cope with data of such sizes. In this research paper we investigate the ways of using aggregation for visual analysis of movement data. We define aggregation methods suitable for movement data and find visualization and interaction techniques to represent results of aggregations and enable comprehensive exploration of the data. We consider two possible views of movement, traffic-oriented and trajectory-oriented. Each view requires different methods of analysis and of data aggregation. We illustrate our argument with example data resulting from tracking multiple cars in Milan and example analysis tasks from the domain of city traffic management.",
"The paper investigates the possibilities of using clustering techniques in visual exploration and analysis of large numbers of trajectories, that is, sequences of time-stamped locations of some moving entities. Trajectories are complex spatio-temporal constructs characterized by diverse non-trivial properties. To assess the degree of (dis)similarity between trajectories, specific methods (distance functions) are required. A single distance function accounting for all properties of trajectories, (1) is difficult to build, (2) would require much time to compute, and (3) might be difficult to understand and to use. We suggest the procedure of progressive clustering where a simple distance function with a clear meaning is applied on each step, which leads to easily interpretable outcomes. Successive application of several different functions enables sophisticated analyses through gradual refinement of earlier obtained results. Besides the advantages from the sense-making perspective, progressive clustering enables a rational work organization where time-consuming computations are applied to relatively small potentially interesting subsets obtained by means of 'cheap' distance functions producing quick results. We introduce the concept of progressive clustering by an example of analyzing a large real data set. We also review the existing clustering methods, describe the method OPTICS suitable for progressive clustering of trajectories, and briefly present several distance functions for trajectories.",
"With widespread availability of low cost GPS devices, it is becoming possible to record data about the movement of people and objects at a large scale. While these data hide important knowledge for the optimization of location and mobility oriented infrastructures and services, by themselves they lack the necessary semantic embedding which would make fully automatic algorithmic analysis possible. At the same time, making the semantic link is easy for humans who however cannot deal well with massive amounts of data. In this paper, we argue that by using the right visual analytics tools for the analysis of massive collections of movement data, it is possible to effectively support human analysts in understanding movement behaviors and mobility patterns. We suggest a framework for analysis combining interactive visual displays, which are essential for supporting human perception, cognition, and reasoning, with database operations and computational methods, which are necessary for handling large amounts of data. We demonstrate the synergistic use of these techniques in case studies of two real datasets."
]
}
|
1208.5801
|
2949734647
|
Scientists study trajectory data to understand trends in movement patterns, such as human mobility for traffic analysis and urban planning. There is a pressing need for scalable and efficient techniques for analyzing this data and discovering the underlying patterns. In this paper, we introduce a novel technique which we call vector-field @math -means. The central idea of our approach is to use vector fields to induce a similarity notion between trajectories. Other clustering algorithms seek a representative trajectory that best describes each cluster, much like @math -means identifies a representative "center" for each cluster. Vector-field @math -means, on the other hand, recognizes that in all but the simplest examples, no single trajectory adequately describes a cluster. Our approach is based on the premise that movement trends in trajectory data can be modeled as flows within multiple vector fields, and the vector field itself is what defines each of the clusters. We also show how vector-field @math -means connects techniques for scalar field design on meshes and @math -means clustering. We present an algorithm that finds a locally optimal clustering of trajectories into vector fields, and demonstrate how vector-field @math -means can be used to mine patterns from trajectory data. We present experimental evidence of its effectiveness and efficiency using several datasets, including historical hurricane data, GPS tracks of people and vehicles, and anonymous call records from a large phone company. We compare our results to previous trajectory clustering techniques, and find that our algorithm performs faster in practice than the current state-of-the-art in trajectory clustering, in some examples by a large margin.
|
Liu et al. @cite_9 also present a visual analytics system for exploring route diversity within a city, based on thousands of taxi trajectories. Their system offers global views of all trajectories, but also drills down to routes between source destination pairs, and even to specific road segments. Their work is more about examining trajectories and less about clustering them.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2062037128"
],
"abstract": [
"Route suggestion is an important feature of GPS navigation systems. Recently, Microsoft T-drive has been enabled to suggest routes chosen by experienced taxi drivers for given source destination pairs in given time periods, which often take less time than the routes calculated according to distance. However, in real environments, taxi drivers may use different routes to reach the same destination, which we call route diversity. In this paper we first propose a trajectory visualization method that examines the regions where the diversity exists and then develop several novel visualization techniques to display the high dimensional attributes and statistics associated with different routes to help users analyze diversity patterns. Our techniques have been applied to the real trajectory data of thousands of taxis and some interesting findings about route diversity have been obtained. We further demonstrate that our system can be used not only to suggest better routes for drivers but also to analyze traffic bottlenecks for transportation management."
]
}
|
1208.5801
|
2949734647
|
Scientists study trajectory data to understand trends in movement patterns, such as human mobility for traffic analysis and urban planning. There is a pressing need for scalable and efficient techniques for analyzing this data and discovering the underlying patterns. In this paper, we introduce a novel technique which we call vector-field @math -means. The central idea of our approach is to use vector fields to induce a similarity notion between trajectories. Other clustering algorithms seek a representative trajectory that best describes each cluster, much like @math -means identifies a representative "center" for each cluster. Vector-field @math -means, on the other hand, recognizes that in all but the simplest examples, no single trajectory adequately describes a cluster. Our approach is based on the premise that movement trends in trajectory data can be modeled as flows within multiple vector fields, and the vector field itself is what defines each of the clusters. We also show how vector-field @math -means connects techniques for scalar field design on meshes and @math -means clustering. We present an algorithm that finds a locally optimal clustering of trajectories into vector fields, and demonstrate how vector-field @math -means can be used to mine patterns from trajectory data. We present experimental evidence of its effectiveness and efficiency using several datasets, including historical hurricane data, GPS tracks of people and vehicles, and anonymous call records from a large phone company. We compare our results to previous trajectory clustering techniques, and find that our algorithm performs faster in practice than the current state-of-the-art in trajectory clustering, in some examples by a large margin.
|
Vector fields have been widely used in scientific visualization, and even by some researchers in trajectory clustering analysis, to show the speed and direction of animal movements @cite_30 and of wind @cite_32 . In those cases, however, vector fields were only used to visualize the results, rather than as an integral part of the underlying clustering technique.
|
{
"cite_N": [
"@cite_30",
"@cite_32"
],
"mid": [
"2083930763",
"1990743832"
],
"abstract": [
"Abstract This work presents an exploratory data analysis of the trajectories of deer and elk moving about in the Starkey Experimental Forest and Range in eastern Oregon. The animals’ movements may be affected by habitat variables and the behavior of the other animals. In the work of this paper a stochastic differential equation-based model is developed in successive stages. Equations of motion are set down motivated by corresponding equations of physics. Functional parameters appearing in the equations are estimated nonparametrically and plots of vector fields of animal movements are prepared. Residuals are used to look for interactions amongst the movements of the animals. There are exploratory analyses of various sorts. Statistical inferences are based on Fourier transforms of the data, which are unequally spaced. The sections of the paper start with motivating quotes and aphorisms from the writings of John W. Tukey.",
"Abstract A new probabilistic clustering method, based on a regression mixture model, is used to describe tropical cyclone (TC) propagation in the western North Pacific (WNP). Seven clusters were obtained and described in Part I of this two-part study. In Part II, the present paper, the large-scale patterns of atmospheric circulation and sea surface temperature associated with each of the clusters are investigated, as well as associations with the phase of the El Nino–Southern Oscillation (ENSO). Composite wind field maps over the WNP provide a physically consistent picture of each TC type, and of its seasonality. Anomalous vorticity and outgoing longwave radiation indicate changes in the monsoon trough associated with different types of TC genesis and trajectory. The steering winds at 500 hPa are more zonal in the straight-moving clusters, with larger meridional components in the recurving ones. Higher values of vertical wind shear in the midlatitudes also accompany the straight-moving tracks, compared to..."
]
}
|
1208.4895
|
2950238172
|
We study a general framework for broadcast gossip algorithms which use companion variables to solve the average consensus problem. Each node maintains an initial state and a companion variable. Iterative updates are performed asynchronously whereby one random node broadcasts its current state and companion variable and all other nodes receiving the broadcast update their state and companion variable. We provide conditions under which this scheme is guaranteed to converge to a consensus solution, where all nodes have the same limiting values, on any strongly connected directed graph. Under stronger conditions, which are reasonable when the underlying communication graph is undirected, we guarantee that the consensus value is equal to the average, both in expectation and in the mean-squared sense. Our analysis uses tools from non-negative matrix theory and perturbation theory. The perturbation results rely on a parameter being sufficiently small. We characterize the allowable upper bound as well as the optimal setting for the perturbation parameter as a function of the network topology, and this allows us to characterize the worst-case rate of convergence. Simulations illustrate that, in comparison to existing broadcast gossip algorithms, the approaches proposed in this paper have the advantage that they simultaneously can be guaranteed to converge to the average consensus and they converge in a small number of broadcasts.
|
Subsequent work @cite_2 investigates related BGAs, demonstrating that their convergence properties remain robust even when broadcasts from different nodes may interfere at a receiver. A broadcast-based algorithm has also been proposed for solving distributed convex optimization problems @cite_16 .
|
{
"cite_N": [
"@cite_16",
"@cite_2"
],
"mid": [
"2130263842",
"2132612558"
],
"abstract": [
"We consider a distributed multi-agent network system where each agent has its own convex objective function, which can be evaluated with stochastic errors. The problem consists of minimizing the sum of the agent functions over a commonly known constraint set, but without a central coordinator and without agents sharing the explicit form of their objectives. We propose an asynchronous broadcast-based algorithm where the communications over the network are subject to random link failures. We investigate the convergence properties of the algorithm for a diminishing (random) stepsize and a constant stepsize, where each agent chooses its own stepsize independently of the other agents. Under some standard conditions on the gradient errors, we establish almost sure convergence of the method to an optimal point for diminishing stepsize. For constant stepsize, we establish some error bounds on the expected distance from the optimal point and the expected function value. We also provide numerical results.",
"In this paper, we study two related iterative randomized algorithms for distributed computation of averages. The first algorithm is the Broadcast Gossip Algorithm, in which at each iteration one randomly selected node broadcasts its own state to its neighbors. The second algorithm is a novel variation of the former, in which at each iteration every node is allowed to broadcast: hence, this algorithm, which we call Collision Broadcast Gossip Algorithm (CBGA), is affected by interference among messages. The performance of both algorithms is evaluated in terms of rate of convergence and asymptotical error: focusing on large Abelian Cayley networks, we highlight the role of topology and of design parameters. We show that on fully connected graphs the rate of convergence is bounded away from one, whereas the asymptotical error is bounded away from zero. On the contrary, on sparse graphs the rate of convergence goes to one and the asymptotical error goes to zero, as the size of the network grows larger. Our results also show that the performance of the CBGA is close to the performance of the BGA: this indicates the robustness of broadcast gossip algorithms to interferences."
]
}
|
1208.4895
|
2950238172
|
We study a general framework for broadcast gossip algorithms which use companion variables to solve the average consensus problem. Each node maintains an initial state and a companion variable. Iterative updates are performed asynchronously whereby one random node broadcasts its current state and companion variable and all other nodes receiving the broadcast update their state and companion variable. We provide conditions under which this scheme is guaranteed to converge to a consensus solution, where all nodes have the same limiting values, on any strongly connected directed graph. Under stronger conditions, which are reasonable when the underlying communication graph is undirected, we guarantee that the consensus value is equal to the average, both in expectation and in the mean-squared sense. Our analysis uses tools from non-negative matrix theory and perturbation theory. The perturbation results rely on a parameter being sufficiently small. We characterize the allowable upper bound as well as the optimal setting for the perturbation parameter as a function of the network topology, and this allows us to characterize the worst-case rate of convergence. Simulations illustrate that, in comparison to existing broadcast gossip algorithms, the approaches proposed in this paper have the advantage that they simultaneously can be guaranteed to converge to the average consensus and they converge in a small number of broadcasts.
|
A modified BGA is proposed by @cite_6 @cite_4 , where nodes maintain a companion (or surplus) variable in addition to the state variable they seek to average. By careful accounting of both the companion and state variables, a conservation principle is established, and simulation results suggest that the algorithm with companion variables converges to the average consensus for all sample paths, not just in expectation. However, no proof of convergence or theoretical convergence rate analysis is available for the algorithm of @cite_6 @cite_4 .
|
{
"cite_N": [
"@cite_4",
"@cite_6"
],
"mid": [
"2133339412",
"2014496162"
],
"abstract": [
"In this paper, we propose a new decentralized algorithm to solve the consensus on the average problem on sensor networks through a gossip algorithm based on broadcasts. We directly extend previous results by not requiring that the digraph representing the network topology be balanced. Our algorithm is an improvement with respect to known gossip algorithms based on broadcasts in that the average of the initial state is preserved after each broadcast. The nodes are assumed to know their out-degree anytime they transmit information.",
"Abstract —In this paper we propose a new decentralized algorithm to solve the consensus on the average problem on arbitrary strongly connected digraphs through a gossip algorithm based on broadcasts. We directly extend previous results by not requiring that the digraph is balanced. Our algorithm is an improvement respect to known gossip algorithms based on broadcasts in that the average of the initial state is preserved after each broadcast. The nodes are assumed to know their out-degree anytime they transmit information. The algorithm convergence analysis is preliminary and performance is shown by simulations."
]
}
|
1208.4895
|
2950238172
|
We study a general framework for broadcast gossip algorithms which use companion variables to solve the average consensus problem. Each node maintains an initial state and a companion variable. Iterative updates are performed asynchronously whereby one random node broadcasts its current state and companion variable and all other nodes receiving the broadcast update their state and companion variable. We provide conditions under which this scheme is guaranteed to converge to a consensus solution, where all nodes have the same limiting values, on any strongly connected directed graph. Under stronger conditions, which are reasonable when the underlying communication graph is undirected, we guarantee that the consensus value is equal to the average, both in expectation and in the mean-squared sense. Our analysis uses tools from non-negative matrix theory and perturbation theory. The perturbation results rely on a parameter being sufficiently small. We characterize the allowable upper bound as well as the optimal setting for the perturbation parameter as a function of the network topology, and this allows us to characterize the worst-case rate of convergence. Simulations illustrate that, in comparison to existing broadcast gossip algorithms, the approaches proposed in this paper have the advantage that they simultaneously can be guaranteed to converge to the average consensus and they converge in a small number of broadcasts.
|
Recent work of Cai and Ishii @cite_11 @cite_19 analyzes related distributed averaging algorithms on directed graphs that use companion variables. The two types of algorithms analyzed in @cite_11 @cite_19 involve asynchronous pairwise updates and synchronous updates, respectively. They make use of tools from matrix perturbation theory, and the work in the present article can be seen as generalizing the results of @cite_11 @cite_19 to broadcast gossip updates.
|
{
"cite_N": [
"@cite_19",
"@cite_11"
],
"mid": [
"2084044206",
"2069512951"
],
"abstract": [
"We study the average consensus problem of multi-agent systems for general network topologies with unidirectional information flow. We propose two linear distributed algorithms, deterministic and gossip, respectively for the cases where the inter-agent communication is synchronous and asynchronous. In both cases, the developed algorithms guarantee state averaging on arbitrary strongly connected digraphs; in particular, this graphical condition does not require that the network be balanced or symmetric, thereby extending previous results in the literature. The key novelty of our approach is to augment an additional variable for each agent, called \"surplus\", whose function is to locally record individual state updates. For convergence analysis, we employ graph-theoretic and nonnegative matrix tools, plus the eigenvalue perturbation theory playing a crucial role.",
"We study the average consensus problem of multiagent systems for general network topologies with unidirectional information flow. We propose a linear distributed algorithm which guarantees state averaging on arbitrary strongly connected digraphs. In particular, this graphical condition does not require that the network be balanced or symmetric, thereby extending the previous results in the literature. The novelty of our approach is the augmentation of an additional variable for each agent, called “surplus”, whose function is to locally record individual state updates. For convergence analysis, we employ graph-theoretic and nonnegative matrix tools, with the eigenvalue perturbation theory playing a crucial role."
]
}
|
1208.5062
|
2038869097
|
This paper describes a novel approach to change-point detection when the observed high-dimensional data may have missing elements. The performance of classical methods for change-point detection typically scales poorly with the dimensionality of the data, so that a large number of observations are collected after the true change-point before it can be reliably detected. Furthermore, missing components in the observed data handicap conventional approaches. The proposed method addresses these challenges by modeling the dynamic distribution underlying the data as lying close to a time-varying low-dimensional submanifold embedded within the ambient observation space. Specifically, streaming data is used to track a submanifold approximation, measure deviations from this approximation, and calculate a series of statistics of the deviations for detecting when the underlying manifold has changed in a sharp or unexpected manner. The approach described in this paper leverages several recent results in the field of high-dimensional data analysis, including subspace tracking with missing data, multiscale analysis techniques for point clouds, online optimization, and change-point detection performance analysis. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
|
Our approach also has close connections with Gaussian Mixture Models (GMMs) @cite_5 @cite_64 @cite_22 @cite_14 . The basic idea here is to approximate a probability density with a mixture of Gaussian distributions, each with its own mean and covariance matrix. The number of mixture components is typically fixed, limiting the memory demands of the estimate, and online expectation-maximization algorithms can be used to track a time-varying density @cite_57 . In the fixed sample-size setting, there has been work on reducing the number of components in a GMM while preserving the component structure of the original model @cite_14 . However, this approach faces several challenges in our setting. In particular, choosing the number of mixture components is challenging even in batch settings, and the issue is aggravated in online settings where the ideal number of mixture components may vary over time. In the online setting, splitting and merging the Gaussian components of an already learned, precise GMM has been considered in @cite_40 . However, learning a precise GMM online is impractical when the data are high-dimensional because, without additional modeling assumptions, tracking the covariance matrices of the individual mixture components is severely ill-posed in high-dimensional settings.
|
{
"cite_N": [
"@cite_14",
"@cite_64",
"@cite_22",
"@cite_57",
"@cite_40",
"@cite_5"
],
"mid": [
"2160586919",
"2117146852",
"",
"2117853077",
"1528914981",
"1579271636"
],
"abstract": [
"In this paper we propose an efficient algorithm for reducing a large mixture of Gaussians into a smaller mixture while still preserving the component structure of the original model; this is achieved by clustering (grouping) the components. The method minimizes a new, easily computed distance measure between two Gaussian mixtures that can be motivated from a suitable stochastic model and the iterations of the algorithm use only the model parameters, avoiding the need for explicit resampling of datapoints. We demonstrate the method by performing hierarchical clustering of scenery images and handwritten digits.",
"We analyze mixture density approximation and estimation. We form a convex set of density functions by taking the convex hull of a parametric family, e.g. mixtures of the Gaussian location family. A sequence of finite mixture densities is formulated to provide a parsimonious approximation for the target density. If the target density itself is in the convex hull, we show that the approximation error goes to zero with a rate of 1/k, where k is the number of components in the approximation. If the target density is outside of the convex hull, the approximation error is equal to the best achievable error plus a term that goes to zero with a rate of 1/k. A greedy algorithm that introduces one component at each step is shown to achieve such an error rate. Similarly, a greedy estimation algorithm is provided to find such approximation for data from an arbitrary density. This algorithm estimates one mixture component at a time. We prove that such an algorithm achieves a likelihood nearly as good as the MLE (maximum likelihood estimate) over the whole convex hull. And we identify the difference as being bounded by order O(1/k), where k is the number of components in the estimate. Risks of such estimators are shown to be bounded by a sum of approximation error and estimation error. The error terms are identified. An optimal choice of k can be derived by minimizing the risk bound. Acting in a similar role as the bandwidth in non-parametric density estimation, k controls two error terms in opposite directions. A large k reduces approximation error and increases estimation error. An MDL (minimum description length) principle is derived to provide an estimation method for k. And the estimated k is shown to achieve the risk bound as if we knew the best k in advance. A new information projection theory is derived to expand the approximating class to include its information closure.
We prove the existence and uniqueness of an f* in the closure of the convex hull C (in a sense we identify), such that D(f||f*) = inf_{g in C} D(f||g), where D(f||g) is the Kullback-Leibler divergence. And log(f_k) converges to log(f*) in L1(f) for any sequence f_k in C with D(f||f_k) converging to inf_{g in C} D(f||g). Other characterizing properties of f* are also given.",
"",
"The first unified account of the theory, methodology, and applications of the EM algorithm and its extensionsSince its inception in 1977, the Expectation-Maximization (EM) algorithm has been the subject of intense scrutiny, dozens of applications, numerous extensions, and thousands of publications. The algorithm and its extensions are now standard tools applied to incomplete data problems in virtually every field in which statistical methods are used. Until now, however, no single source offered a complete and unified treatment of the subject.The EM Algorithm and Extensions describes the formulation of the EM algorithm, details its methodology, discusses its implementation, and illustrates applications in many statistical contexts. Employing numerous examples, Geoffrey McLachlan and Thriyambakam Krishnan examine applications both in evidently incomplete data situations-where data are missing, distributions are truncated, or observations are censored or grouped-and in a broad variety of situations in which incompleteness is neither natural nor evident. They point out the algorithm's shortcomings and explain how these are addressed in the various extensions.Areas of application discussed include: Regression Medical imaging Categorical data analysis Finite mixture analysis Factor analysis Robust statistical modeling Variance-components estimation Survival analysis Repeated-measures designs For theoreticians, practitioners, and graduate students in statistics as well as researchers in the social and physical sciences, The EM Algorithm and Extensions opens the door to the tremendous potential of this remarkably versatile statistical tool.",
"Meta-sulfonamido-benzamide derivatives of the formula: [wherein R is a hydrogen atom or a lower alkyl, cyano, or lower alkanesulfonyl group; R1 is a lower alkyl, phenyl, amino, lower alkylamino, di(lower)alkylamino, or C4-C5 alkyleneamino group; R2 is a hydrogen or halogen atom or a lower alkyl, di(lower)alkylamino, or lower alkoxy group; R3 is a hydrogen atom or a methyl or methoxy group; R4 is a hydrogen or halogen atom; R5 is a lower alkyl, lower alkenyl, C3-C6 cycloalkyl, benzyl, or halogenobenzyl group; and n is 1 or zero] or their acid addition salts, showing pharmacological activity such as anti-emetic or psychotropic activity, are provided via several routes.",
"The important role of finite mixture models in the statistical analysis of data is underscored by the ever-increasing rate at which articles on mixture applications appear in the statistical and ge..."
]
}
|
1208.5062
|
2038869097
|
This paper describes a novel approach to change-point detection when the observed high-dimensional data may have missing elements. The performance of classical methods for change-point detection typically scales poorly with the dimensionality of the data, so that a large number of observations are collected after the true change-point before it can be reliably detected. Furthermore, missing components in the observed data handicap conventional approaches. The proposed method addresses these challenges by modeling the dynamic distribution underlying the data as lying close to a time-varying low-dimensional submanifold embedded within the ambient observation space. Specifically, streaming data is used to track a submanifold approximation, measure deviations from this approximation, and calculate a series of statistics of the deviations for detecting when the underlying manifold has changed in a sharp or unexpected manner. The approach described in this paper leverages several recent results in the field of high-dimensional data analysis, including subspace tracking with missing data, multiscale analysis techniques for point clouds, online optimization, and change-point detection performance analysis. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
|
Our approach is also closely related to Geometric Multi-Resolution Analysis (GMRA) @cite_19 , which was developed for analyzing intrinsically low-dimensional point clouds in high-dimensional spaces. The basic idea of GMRA is to first iteratively partition a dataset to form a multiscale collection of subsets of the data, then find a low-rank approximation for the data in each subset, and finally efficiently encode the difference between the low-rank approximations at different scales. This approach is a batch method without a straightforward extension to online settings.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"2964012263"
],
"abstract": [
"Data sets are often modeled as samples from a probability distribution in RD, for D large. It is often assumed that the data has some interesting low-dimensional structure, for example that of a d-dimensional manifold M, with d much smaller than D. When M is simply a linear subspace, one may exploit this assumption for encoding efficiently the data by projecting onto a dictionary of d vectors in RD (for example found by SVD), at a cost (n+D)d for n data points. When M is nonlinear, there are no “explicit” and algorithmically efficient constructions of dictionaries that achieve a similar efficiency: typically one uses either random dictionaries, or dictionaries obtained by black-box global optimization. In this paper we construct data-dependent multi-scale dictionaries that aim at efficiently encoding and manipulating the data. Their construction is fast, and so are the algorithms that map data points to dictionary coefficients and vice versa, in contrast with L1-type sparsity-seeking algorithms, but like adaptive nonlinear approximation in classical multi-scale analysis. In addition, data points are guaranteed to have a compressible representation in terms of the dictionary, depending on the assumptions on the geometry of the underlying probability distribution."
]
}
|
1208.5075
|
2186137094
|
Consider a synchronous point-to-point network of n nodes connected by directed links, wherein each node has a binary input. This paper proves a tight necessary and sufficient condition on the underlying communication topology for achieving Byzantine consensus among these nodes in the presence of up to f Byzantine faults. We derive a necessary condition, and then we provide a constructive proof of sufficiency by presenting a Byzantine consensus algorithm for directed graphs that satisfy the necessary condition.
|
Lamport, Shostak, and Pease addressed the Byzantine agreement problem in @cite_13 . Subsequent work @cite_0 @cite_12 characterized the necessary and sufficient conditions under which the problem is solvable in undirected graphs. However, as noted above, these conditions do not fully characterize the directed graphs in which Byzantine consensus is feasible. In this work, we identify tight necessary and sufficient conditions for Byzantine consensus in directed graphs. The necessity proof presented in this paper is based on the state-machine approach, which was originally developed to prove conditions for undirected graphs @cite_0 @cite_3 @cite_15 ; however, due to the nature of directed links, our necessity proof is a non-trivial extension. The technique is also similar to the withholding mechanism developed by Schmid, Weiss, and Keidar @cite_8 to prove impossibility results and lower bounds on the number of nodes for synchronous consensus under transient link failures in fully-connected graphs; however, we do not assume the transient fault model of @cite_8 , and thus our argument is more straightforward.
|
{
"cite_N": [
"@cite_8",
"@cite_3",
"@cite_0",
"@cite_15",
"@cite_13",
"@cite_12"
],
"mid": [
"2086286040",
"2014772227",
"2095464745",
"1526359699",
"2120510885",
""
],
"abstract": [
"We provide a suite of impossibility results and lower bounds for the required number of processes and rounds for synchronous consensus under transient link failures. Our results show that consensus can be solved even in the presence of @math moving omission and or arbitrary link failures per round, provided that both the number of affected outgoing and incoming links of every process is bounded. Providing a step further toward the weakest conditions under which consensus is solvable, our findings are applicable to a variety of dynamic phenomena such as transient communication failures and end-to-end delay variations. We also prove that our model surpasses alternative link failure modeling approaches in terms of assumption coverage.",
"Can unanimity be achieved in an unreliable distributed system? This problem was named \"The Byzantine Generals Problem,\" by Lamport, Pease and Shostak [1980]. The results obtained in the present paper prove that unanimity is achievable in any distributed system if and only if the number of faulty processors in the system is: 1) less than one third of the total number of processors; and 2) less than one half of the connectivity of the system's network. In cases where unanimity is achievable, algorithms to obtain it are given. This result forms a complete characterization of networks in light of the Byzantine Problem.",
"Easy proofs are given, of the impossibility of solving several consensus problems (Byzantine agreement, weak agreement, Byzantine firing squad, approximate agreement and clock synchronization) in certain communication graphs.",
"1. Introduction.PART I: FUNDAMENTALS.2. Basic Algorithms in Message-Passing Systems.3. Leader Election in Rings.4. Mutual Exclusion in Shared Memory.5. Fault-Tolerant Consensus.6. Causality and Time.PART II: SIMULATIONS.7. A Formal Model for Simulations.8. Broadcast and Multicast.9. Distributed Shared Memory.10. Fault-Tolerant Simulations of Read Write Objects.11. Simulating Synchrony.12. Improving the Fault Tolerance of Algorithms.13. Fault-Tolerant Clock Synchronization.PART III: ADVANCED TOPICS.14. Randomization.15. Wait-Free Simulations of Arbitrary Objects.16. Problems Solvable in Asynchronous Systems.17. Solving Consensus in Eventually Stable Systems.References.Index.",
"Reliable computer systems must handle malfunctioning components that give conflicting information to different parts of the system. This situation can be expressed abstractly in terms of a group of generals of the Byzantine army camped with their troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan. However, one or more of them may be traitors who will try to confuse the others. The problem is to find an algorithm to ensure that the loyal generals will reach agreement. It is shown that, using only oral messages, this problem is solvable if and only if more than two-thirds of the generals are loyal; so a single traitor can confound two loyal generals. With unforgeable written messages, the problem is solvable for any number of generals and possible traitors. Applications of the solutions to reliable computer systems are then discussed.",
""
]
}
|
1208.5075
|
2186137094
|
Consider a synchronous point-to-point network of n nodes connected by directed links, wherein each node has a binary input. This paper proves a tight necessary and sufficient condition on the underlying communication topology for achieving Byzantine consensus among these nodes in the presence of up to f Byzantine faults. We derive a necessary condition, and then we provide a constructive proof of sufficiency by presenting a Byzantine consensus algorithm for directed graphs that satisfy the necessary condition.
|
In related work, @cite_17 identified tight conditions for achieving Byzantine consensus in undirected graphs using authentication, and discovered that all-pair reliable communication is not necessary to achieve consensus when using authentication. Our work differs in that our results apply in the absence of authentication or any other security primitives, and they apply to directed graphs. We show that, even in the absence of authentication, all-pair reliable communication is not necessary for Byzantine consensus.
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"75522826"
],
"abstract": [
"Three decades ago, introduced the problem of Byzantine Agreement [PSL80] where nodes need to maintain a consistent view of the world in spite of the challenge posed by Byzantine faults. Subsequently, it is well known that Byzantine agreement over a completely connected synchronous network of n nodes tolerating up to t faults is (efficiently) possible if and only if t < n/3. further empowered the nodes with the ability to authenticate themselves and their messages and proved that agreement in this new model (popularly known as authenticated Byzantine agreement (ABA)) is possible if and only if t < n. (which is a huge improvement over the bound of t < n/3 in the absence of authentication for the same functionality). To understand the utility, potential and limitations of using authentication in distributed protocols for agreement, [GGBS10] studied ABA in new light. They generalize the existing models and thus, attempt to give a unified theory of agreements over the authenticated and non-authenticated domains. In this paper we extend their results to synchronous (undirected) networks and give a complete characterization of agreement protocols. As a corollary, we show that agreement can be strictly easier than all-pair point-to-point communication. It is well known that in a synchronous network over n nodes of which up to any t are corrupted by a Byzantine adversary, BA is possible only if all pair point-to-point reliable communication is possible [Dol82, DDWY93]. Thus, a folklore in the area is that maintaining global consistency (agreement) is at least as hard as the problem of all pair point-to-point communication. Equivalently, it is widely believed that protocols for BA over incomplete networks exist only if it is possible to simulate an overlay-ed complete network. Surprisingly, we show that the folklore is not always true. Thus, it seems that agreement protocols may be more fundamental to distributed computing than reliable communication."
]
}
|
1208.5075
|
2186137094
|
Consider a synchronous point-to-point network of n nodes connected by directed links, wherein each node has a binary input. This paper proves a tight necessary and sufficient condition on the underlying communication topology for achieving Byzantine consensus among these nodes in the presence of up to f Byzantine faults. We derive a necessary condition, and then we provide a constructive proof of sufficiency by presenting a Byzantine consensus algorithm for directed graphs that satisfy the necessary condition.
|
Several papers have also addressed communication between a single source-receiver pair. @cite_7 studied the problem of secure communication between a single source-receiver pair in undirected graphs, which requires both fault-tolerance and perfect secrecy in the presence of node and link failures. Desmedt and Wang considered the same problem in directed graphs @cite_9 . In our work, we do not consider secrecy, and we address the consensus problem rather than the single source-receiver pair problem. @cite_6 investigated reliable communication between a source-receiver pair in directed graphs, allowing for an arbitrarily small error probability, in the presence of Byzantine failures. Our work addresses deterministically correct algorithms for consensus.
|
{
"cite_N": [
"@cite_9",
"@cite_6",
"@cite_7"
],
"mid": [
"2092148937",
"2088665600",
""
],
"abstract": [
"This paper studies the problem of perfectly secure communication in general network in which processors and communication lines may be faulty. Lower bounds are obtained on the connectivity required for successful secure communication. Efficient algorithms are obtained that operate with this connectivity and rely on no complexity-theoretic assumptions. These are the first algorithms for secure communication in a general network to simultaneously achieve the three goals of perfect secrecy, perfect resiliency, and worst-case time linear in the diameter of the network.",
"In the unconditionally reliable message transmission (URMT) problem, two non-faulty players, the sender S and the receiver R are part of a synchronous network modeled as a directed graph. S has a message that he wishes to send to R; the challenge is to design a protocol such that after exchanging messages as per the protocol, the receiver R should correctly obtain S's message with arbitrarily small error probability Δ, in spite of the influence of a Byzantine adversary that may actively corrupt up to t nodes in the network (we denote such a URMT protocol as (t, (1 - Δ))-reliable). While it is known that (2t + 1) vertex disjoint directed paths from S to R are necessary and sufficient for (t, 1)-reliable URMT (that is with zero error probability), we prove that a strictly weaker condition, which we define and denote as (2t, t)-special-connectivity, together with just (t+1) vertex disjoint directed paths from S to R, is necessary and sufficient for (t, (1 - Δ))-reliable URMT with arbitrarily small (but non-zero) error probability, Δ. Thus, we demonstrate the power of randomization in the context of reliable message transmission. In fact, for any positive integer k > 0, we show that there always exists a digraph Gk such that (k, 1)-reliable URMT is impossible over Gk whereas there exists a (2k, (1 - Δ))-reliable URMT protocol, Δ > 0 in Gk. In a digraph G on which (t, (1 - Δ))-reliable URMT is possible, an edge is called critical if the deletion of that edge renders (t, (1 - Δ))-reliable URMT impossible. We give an example of a digraph G on n vertices such that G has Ω(n2) critical edges. This is quite baffling since no such graph exists for the case of perfect reliable message transmission (or equivalently (t, 1)-reliable URMT) or when the underlying graph is undirected. Such is the anomalous behavior of URMT protocols (when \"randomness meet directedness\") that it makes it extremely hard to design efficient protocols over arbitrary digraphs. However, if URMT is possible between every pair of vertices in the network, then we present efficient protocols for the same.",
""
]
}
|
1208.5075
|
2186137094
|
Consider a synchronous point-to-point network of n nodes connected by directed links, wherein each node has a binary input. This paper proves a tight necessary and sufficient condition on the underlying communication topology for achieving Byzantine consensus among these nodes in the presence of up to f Byzantine faults. We derive a necessary condition, and then we provide a constructive proof of sufficiency by presenting a Byzantine consensus algorithm for directed graphs that satisfy the necessary condition.
|
Our recent work @cite_5 @cite_2 @cite_18 has considered a restricted class of iterative algorithms for achieving approximate Byzantine consensus in directed graphs, where fault-free nodes must agree on values that are approximately equal to each other using iterative algorithms with limited memory. The conditions developed in that prior work are not necessary when no such restrictions are imposed. Independently, @cite_16 @cite_20 , and Zhang and Sundaram @cite_11 @cite_19 , have developed results for iterative approximate-consensus algorithms under a weaker fault model, in which a faulty node must send identical messages to all of its neighbors. In this work, we consider the problem of exact consensus (i.e., the outputs at fault-free nodes must be exactly identical), and we impose no restrictions on the algorithms or the faulty nodes.
|
{
"cite_N": [
"@cite_18",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"1760770890",
"2962884567",
"1510375514",
"2025375132",
"2151901719",
"1973184571",
""
],
"abstract": [
"This work addresses Byzantine vector consensus, wherein the input at each process is a d-dimensional vector of reals, and each process is required to decide on a decision vector that is in the convex hull of the input vectors at the fault-free processes [9,12]. The input vector at each process may also be viewed as a point in the d-dimensional Euclidean space R^d, where d > 0 is a finite integer. Recent work [9,12] has addressed Byzantine vector consensus, and presented algorithms with optimal fault tolerance in complete graphs. This paper considers Byzantine vector consensus in incomplete graphs using a restricted class of iterative algorithms that maintain only a small amount of memory across iterations. For such algorithms, we prove a necessary condition, and a sufficient condition, for the graphs to be able to solve the vector consensus problem iteratively. We present an iterative Byzantine vector consensus algorithm, and prove it correct under the sufficient condition. The necessary condition presented in this paper for vector consensus does not match with the sufficient condition for d > 1; thus, a weaker condition may potentially suffice for Byzantine vector consensus.",
"We study a graph-theoretic property known as robustness, which plays a key role in the behavior of certain classes of dynamics on networks (such as resilient consensus and contagion). This property is much stronger than other graph properties such as connectivity and minimum degree, in that one can construct graphs with high connectivity and minimum degree but low robustness. In this paper, we investigate the robustness of common random graph models for complex networks (Erdős-Renyi, geometric random, and preferential attachment graphs). We show that the notions of connectivity and robustness coincide on these random graph models: the properties share the same threshold function in the Erdős-Renyi model, cannot be very different in the geometric random graph model, and are equivalent in the preferential attachment model. This indicates that a variety of purely local diffusion dynamics will be effective at spreading information in such networks.",
"In this work, we consider a generalized fault model [7,9,5] that can be used to represent a wide range of failure scenarios, including correlated failures and non-uniform node reliabilities. Under the generalized fault model, we explore iterative approximate Byzantine consensus (IABC) algorithms [15] in arbitrary directed networks. We prove a tight necessary and sufficient condition on the underlying communication graph for the existence of IABC algorithms.",
"This paper proves a necessary and sufficient condition for the existence of iterative algorithms that achieve approximate Byzantine consensus in arbitrary directed graphs, where each directed edge represents a communication channel between a pair of nodes. The class of iterative algorithms considered in this paper ensures that, after each iteration of the algorithm, the state of each fault-free node remains in the convex hull of the states of the fault-free nodes at the end of the previous iteration. The following convergence requirement is imposed: for any e > 0, after a sufficiently large number of iterations, the states of the fault-free nodes are guaranteed to be within e of each other. To the best of our knowledge, tight necessary and sufficient conditions for the existence of such iterative consensus algorithms in synchronous arbitrary point-to-point networks in the presence of Byzantine faults have not been developed previously. The methodology and results presented in this paper can also be extended to asynchronous systems.",
"This paper addresses the problem of resilient consensus in the presence of misbehaving nodes. Although it is typical to assume knowledge of at least some nonlocal information when studying secure and fault-tolerant consensus algorithms, this assumption is not suitable for large-scale dynamic networks. To remedy this, we emphasize the use of local strategies to deal with resilience to security breaches. We study a consensus protocol that uses only local information and we consider worst-case security breaches, where the compromised nodes have full knowledge of the network and the intentions of the other nodes. We provide necessary and sufficient conditions for the normal nodes to reach consensus despite the influence of the malicious nodes under different threat assumptions. These conditions are stated in terms of a novel graph-theoretic property referred to as network robustness.",
"This paper addresses the problem of resilient in-network consensus in the presence of misbehaving nodes. Secure and fault-tolerant consensus algorithms typically assume knowledge of nonlocal information; however, this assumption is not suitable for large-scale dynamic networks. To remedy this, we focus on local strategies that provide resilience to faults and compromised nodes. We design a consensus protocol based on local information that is resilient to worst-case security breaches, assuming the compromised nodes have full knowledge of the network and the intentions of the other nodes. We provide necessary and sufficient conditions for the normal nodes to reach asymptotic consensus despite the influence of the misbehaving nodes under different threat assumptions. We show that traditional metrics such as connectivity are not adequate to characterize the behavior of such algorithms, and develop a novel graph-theoretic property referred to as network robustness. Network robustness formalizes the notion of redundancy of direct information exchange between subsets of nodes in the network, and is a fundamental property for analyzing the behavior of certain distributed algorithms that use only local information.",
""
]
}
|
1208.5075
|
2186137094
|
Consider a synchronous point-to-point network of n nodes connected by directed links, wherein each node has a binary input. This paper proves a tight necessary and sufficient condition on the underlying communication topology for achieving Byzantine consensus among these nodes in the presence of up to f Byzantine faults. We derive a necessary condition, and then we provide a constructive proof of sufficiency by presenting a Byzantine consensus algorithm for directed graphs that satisfy the necessary condition.
|
@cite_10 explored the problem of achieving exact consensus in unknown networks with Byzantine nodes, but the underlying communication graph is assumed to be fully-connected. In our work, the network is known to all nodes, but it may not be fully-connected.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2114960414"
],
"abstract": [
"Consensus is a fundamental building block used to solve many practical problems that appear on reliable distributed systems. In spite of the fact that consensus is being widely studied in the context of classical networks, few studies have been conducted in order to solve it in the context of dynamic and self-organizing systems characterized by unknown networks. While in a classical network the set of participants is static and known, in a scenario of unknown networks, the set and number of participants are previously unknown. This work goes one step further and studies the problem of Byzantine Fault-Tolerant Consensus with Unknown Participants , namely BFT-CUP. This new problem aims at solving consensus in unknown networks with the additional requirement that participants in the system can behave maliciously. This paper presents a solution for BFT-CUP that does not require digital signatures. The algorithms are shown to be optimal in terms of synchrony and knowledge connectivity among participants in the system."
]
}
|
1208.4475
|
2952422163
|
The fundamental building block of social influence is for one person to elicit a response in another. Researchers measuring a "response" in social media typically depend either on detailed models of human behavior or on platform-specific cues such as re-tweets, hash tags, URLs, or mentions. Most content on social networks is difficult to model because the modes and motivation of human expression are diverse and incompletely understood. We introduce content transfer, an information-theoretic measure with a predictive interpretation that directly quantifies the strength of the effect of one user's content on another's in a model-free way. Estimating this measure is made possible by combining recent advances in non-parametric entropy estimation with increasingly sophisticated tools for content representation. We demonstrate on Twitter data collected for thousands of users that content transfer is able to capture non-trivial, predictive relationships even for pairs of users not linked in the follower or mention graph. We suggest that this measure makes large quantities of previously under-utilized social media content accessible to rigorous statistical causal analysis.
|
Much research has focused on characterizing and identifying influential users who can facilitate information diffusion along social links. Researchers have suggested different characterizations of influentials based on various network centrality measures @cite_33 @cite_18 @cite_14 . For Twitter data, influence measures include the number of followers, mentions, and retweets @cite_11 , the PageRank of the follower network @cite_6 , and the size of information cascades @cite_4 . More recent work has attempted to utilize temporal information through the influence--passivity score @cite_38 and transfer entropy @cite_19 . None of these measures, however, takes content into account. In addition, several authors have suggested topic-sensitive influence measures such as TwitterRank @cite_27 , which takes into account topical similarity among users. Topic-specific re-tweeting behavior was examined in @cite_32 .
|
{
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_33",
"@cite_32",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_11"
],
"mid": [
"",
"2069153192",
"",
"",
"1854214752",
"2406116889",
"2101196063",
"2950906300",
"",
""
],
"abstract": [
"",
"Recent web search techniques augment traditional text matching with a global notion of \"importance\" based on the linkage structure of the web, such as in Google's PageRank algorithm. For more refined searches, this global notion of importance can be specialized to create personalized views of importance--for example, importance scores can be biased according to a user-specified set of initially-interesting pages. Computing and storing all possible personalized views in advance is impractical, as is computing personalized views at query time, since the computation of each view requires an iterative computation over the web graph. We present new graph-theoretical results, and a new technique based on these results, that encode personalized views as partial vectors. Partial vectors are shared across multiple personalized views, and their computation and storage costs scale well with the number of views. Our approach enables incremental computation, so that the construction of personalized views from partial vectors is practical at query time. We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques.",
"",
"",
"The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.",
"Twitter and other microblogs have rapidly become a significant means by which people communicate with the world and each other in near realtime. There has been a large number of studies surrounding these social media, focusing on areas such as information spread, various centrality measures, topic detection and more. However, one area which has not received much attention is trying to better understand what information is being spread and why it is being spread. This work looks to get a better understanding of what makes people spread information in tweets or microblogs through the use of retweeting. Several retweet behavior models are presented and evaluated on a Twitter data set consisting of over 768,000 tweets gathered from monitoring over 30,000 users for a period of one month. We evaluate the proposed models against each user and show how people use different retweet behavior models. For example, we find that although users in the majority of cases do not retweet information on topics that they themselves Tweet about as or from people who are \"like them\" (hence anti-homophily), we do find that models which do take homophily, or similarity, into account fits the observed retweet behaviors much better than other more general models which do not take this into account. We further find that, not surprisingly, people's retweeting behavior is better explained through multiple different models rather than one model.",
"Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85%) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it.",
"Recent research has explored the increasingly important role of social media by examining the dynamics of individual and group behavior, characterizing patterns of information diffusion, and identifying influential individuals. In this paper we suggest a measure of causal relationships between nodes based on the information-theoretic notion of transfer entropy, or information transfer. This theoretically grounded measure is based on dynamic information, captures fine-grain notions of influence, and admits a natural, predictive interpretation. Causal networks inferred by transfer entropy can differ significantly from static friendship networks because most friendship links are not useful for predicting future dynamics. We demonstrate through analysis of synthetic and real-world data that transfer entropy reveals meaningful hidden network structures. In addition to altering our notion of who is influential, transfer entropy allows us to differentiate between weak influence over large groups and strong influence over small groups.",
"",
""
]
}
|
1208.4475
|
2952422163
|
The fundamental building block of social influence is for one person to elicit a response in another. Researchers measuring a "response" in social media typically depend either on detailed models of human behavior or on platform-specific cues such as re-tweets, hash tags, URLs, or mentions. Most content on social networks is difficult to model because the modes and motivation of human expression are diverse and incompletely understood. We introduce content transfer, an information-theoretic measure with a predictive interpretation that directly quantifies the strength of the effect of one user's content on another's in a model-free way. Estimating this measure is made possible by combining recent advances in non-parametric entropy estimation with increasingly sophisticated tools for content representation. We demonstrate on Twitter data collected for thousands of users that content transfer is able to capture non-trivial, predictive relationships even for pairs of users not linked in the follower or mention graph. We suggest that this measure makes large quantities of previously under-utilized social media content accessible to rigorous statistical causal analysis.
|
A crucial component of our approach is based on the ability to estimate entropic quantities for very-high-dimensional random variables. Due to data sparsity, naive methods based on binning are not feasible. The binless approach for entropy estimation introduced in @cite_15 has been used for quantifying information in neural spike trains @cite_21 . The binless approach has been extended for estimating higher order entropic quantities such as mutual information @cite_34 , divergences between two distributions @cite_16 , and transfer entropy @cite_31 . We also note that a linear version of the transfer entropy known as Granger causality @cite_0 has been used recently for uncovering predictive causal relationships in neuroscience @cite_17 , genetics @cite_22 , climate modeling @cite_24 and various other applications.
|
{
"cite_N": [
"@cite_31",
"@cite_22",
"@cite_21",
"@cite_16",
"@cite_0",
"@cite_24",
"@cite_15",
"@cite_34",
"@cite_17"
],
"mid": [
"2007062492",
"2132663737",
"2169293107",
"2150879893",
"1607114662",
"2051148835",
"",
"2092939357",
"1964769652"
],
"abstract": [
"Uncovering the directionality of coupling is a significant step in understanding drive-response relationships in complex systems. In this paper, we discuss a nonparametric method for detecting the directionality of coupling based on the estimation of information theoretic functionals. We consider several different methods for estimating conditional mutual information. The behavior of each estimator with respect to its free parameter is shown using a linear model where an analytical estimate of conditional mutual information is available. Numerical experiments in detecting coupling directionality are performed using chaotic oscillators, where the influence of the phase extraction method and relative frequency ratio is investigated.",
"We consider the problem of discovering gene regulatory networks from time-series microarray data. Recently, graphical Granger modeling has gained considerable attention as a promising direction for addressing this problem. These methods apply graphical modeling methods on time-series data and invoke the notion of ‘Granger causality’ to make assertions on causality through inference on time-lagged effects. Existing algorithms, however, have neglected an important aspect of the problem—the group structure among the lagged temporal variables naturally imposed by the time series they belong to. Specifically, existing methods in computational biology share this shortcoming, as well as additional computational limitations, prohibiting their effective applications to the large datasets including a large number of genes and many data points. In the present article, we propose a novel methodology which we term ‘grouped graphical Granger modeling method’, which overcomes the limitations mentioned above by applying a regression method suited for high-dimensional and large data, and by leveraging the group structure among the lagged temporal variables according to the time series they belong to. We demonstrate the effectiveness of the proposed methodology on both simulated and actual gene expression data, specifically the human cancer cell (HeLa S3) cycle data. The simulation results show that the proposed methodology generally exhibits higher accuracy in recovering the underlying causal structure. Those on the gene expression data demonstrate that it leads to improved accuracy with respect to prediction of known links, and also uncovers additional causal relationships uncaptured by earlier works. Contact: aclozano@us.ibm.com",
"Understanding how neurons represent, process, and manipulate information is one of the main goals of neuroscience. These issues are fundamentally abstract, and information theory plays a key role in formalizing and addressing them. However, application of information theory to experimental data is fraught with many challenges. Meeting these challenges has led to a variety of innovative analytical techniques, with complementary domains of applicability, assumptions, and goals.",
"A new universal estimator of divergence is presented for multidimensional continuous densities based on k-nearest-neighbor (k-NN) distances. Assuming independent and identically distributed (i.i.d.) samples, the new estimator is proved to be asymptotically unbiased and mean-square consistent. In experiments with high-dimensional data, the k-NN approach generally exhibits faster convergence than previous algorithms. It is also shown that the speed of convergence of the k-NN method can be further improved by an adaptive choice of k.",
"Abstract A general definition of causality is introduced and then specialized to become operational. By considering simple examples a number of advantages, and also difficulties, with the definition are discussed. Tests based on the definitions are then considered and the use of post-sample data emphasized, rather than relying on the same data to fit a model and use it to test causality. It is suggested that a Bayesian viewpoint should be taken in interpreting the results of these tests. Finally, the results of a study relating advertising and consumption are briefly presented.",
"Attribution of climate change to causal factors has been based predominantly on simulations using physical climate models, which have inherent limitations in describing such a complex and chaotic system. We propose an alternative, data centric, approach that relies on actual measurements of climate observations and human and natural forcing factors. Specifically, we develop a novel method to infer causality from spatial-temporal data, as well as a procedure to incorporate extreme value modeling into our method in order to address the attribution of extreme climate events, such as heatwaves. Our experimental results on a real world dataset indicate that changes in temperature are not solely accounted for by solar radiance, but attributed more significantly to CO2 and other greenhouse gases. Combined with extreme value modeling, we also show that there has been a significant increase in the intensity of extreme temperatures, and that such changes in extreme temperature are also attributable to greenhouse gases. These preliminary results suggest that our approach can offer a useful alternative to the simulation-based approach to climate modeling and attribution, and provide valuable insights from a fresh perspective.",
"",
"We present two classes of improved estimators for mutual information @math , from samples of random points distributed according to some joint probability density @math . In contrast to conventional estimators based on binnings, they are based on entropy estimates from @math -nearest neighbor distances. This means that they are data efficient (with @math we resolve structures down to the smallest possible scales), adaptive (the resolution is higher where data are more numerous), and have minimal bias. Indeed, the bias of the underlying entropy estimates is mainly due to nonuniformity of the density at the smallest resolved scale, giving typically systematic errors which scale as functions of @math for @math points. Numerically, we find that both families become exact for independent distributions, i.e. the estimator @math vanishes (up to statistical fluctuations) if @math . This holds for all tested marginal distributions and for all dimensions of @math and @math . In addition, we give estimators for redundancies between more than two random variables. We compare our algorithms in detail with existing algorithms. Finally, we demonstrate the usefulness of our estimators for assessing the actual independence of components obtained from independent component analysis (ICA), for improving ICA, and for estimating the reliability of blind source separation.",
"We consider the question of evaluating causal relations among neurobiological signals. In particular, we study the relation between the directed transfer function (DTF) and the well-accepted Granger causality, and show that DTF can be interpreted within the framework of Granger causality. In addition, we propose a method to assess the significance of causality measures. Finally, we demonstrate the applications of these measures to simulated data and actual neurobiological recordings."
]
}
|
1208.3805
|
1977769089
|
The block Kaczmarz method is an iterative scheme for solving overdetermined least-squares problems. At each step, the algorithm projects the current iterate onto the solution space of a subset of the constraints. This paper describes a block Kaczmarz algorithm that uses a randomized control scheme to choose the subset at each step. This algorithm is the first block Kaczmarz method with an (expected) linear rate of convergence that can be expressed in terms of the geometric properties of the matrix and its submatrices. The analysis reveals that the algorithm is most effective when it is given a good row paving of the matrix, a partition of the rows into well-conditioned blocks. The operator theory literature provides detailed information about the existence and construction of good row pavings. Together, these results yield an efficient block Kaczmarz scheme that applies to many overdetermined least-squares problems.
|
The Kaczmarz method was originally introduced in the paper @cite_23 . It was reinvented by researchers in tomography @cite_24 under the appellation "algebraic reconstruction technique" (ART). See Byrne's book @cite_36 for a contemporary summary of this literature.
|
{
"cite_N": [
"@cite_24",
"@cite_36",
"@cite_23"
],
"mid": [
"1990919278",
"625703071",
""
],
"abstract": [
"Abstract We give a new method for direct reconstruction of three-dimensional objects from a few electron micrographs taken at angles which need not exceed a range of 60 degrees. The method works for totally asymmetric objects, and requires little computer time or storage. It is also applicable to X-ray photography, and may greatly reduce the exposure compared to current methods of body-section radiography.",
"Applied Iterative Methods is a self-contained treatise suitable as both a reference and a graduate-level textbook in the area of iterative algorithms. It is the first book to combine subjects such as optimization, convex analysis, and approximation theory and organize them around a detailed and mathematically sound treatment of iterative algorithms. Such algorithms are used in solving problems in a diverse area of applications, most notably in medical imaging such as emission and transmission tomography and magnetic-resonance imaging, as well as in intensity-modulated radiation therapy. Other applications, which lie outside of medicine, are remote sensing and hyperspectral imaging. This book details a great number of different iterative algorithms that are universally applicable.",
""
]
}
|
1208.3805
|
1977769089
|
The block Kaczmarz method is an iterative scheme for solving overdetermined least-squares problems. At each step, the algorithm projects the current iterate onto the solution space of a subset of the constraints. This paper describes a block Kaczmarz algorithm that uses a randomized control scheme to choose the subset at each step. This algorithm is the first block Kaczmarz method with an (expected) linear rate of convergence that can be expressed in terms of the geometric properties of the matrix and its submatrices. The analysis reveals that the algorithm is most effective when it is given a good row paving of the matrix, a partition of the rows into well-conditioned blocks. The operator theory literature provides detailed information about the existence and construction of good row pavings. Together, these results yield an efficient block Kaczmarz scheme that applies to many overdetermined least-squares problems.
|
The classical variants of the Kaczmarz method rely on deterministic mechanisms for selecting a row at each iteration. Indeed, the simplest version just cycles through the rows in order. It has long been known that the cyclic control scheme performs badly when the rows are arranged in an unhappy order @cite_5 . The literature contains empirical evidence @cite_32 that randomized control mechanisms may be more effective, but until recently there was no compelling theoretical analysis to support this observation.
|
{
"cite_N": [
"@cite_5",
"@cite_32"
],
"mid": [
"2075350383",
"2135294617"
],
"abstract": [
"Spring balance apparatus including a weighing lever supported on a knife edge balance point with one side of the lever acted on by a mass to be weighed against a weighing spring coupled with the other side of the weighing lever, and including an arrangement for the compensation of a measuring error due to temperature change by modifying the ratio of transmission of forces applied to the weighing lever by means of a bimet allic element in which the inventive improvement comprises a knife edge member transferring the forces of the weighing spring to the other side of the weighing lever which knife-edge member has connected rigidly to it an actuating lever coupled to the bimet allic element to alter responsive to temperature changes the orientation of the knife edge member relative to the other side of the weighing lever and thereby modify the ratio of transmission of forces to compensate for measuring error that would otherwise occur due to temperature change.",
"Algebraic reconstruction techniques (ART) are iterative procedures for recovering objects from their projections. It is claimed that by a careful adjustment of the order in which the collected data are accessed during the reconstruction procedure and of the so-called relaxation parameters that are to be chosen in an algebraic reconstruction technique, ART can produce high-quality reconstructions with excellent computational efficiency. This is demonstrated by an example based on a particular (but realistic) medical imaging task, showing that ART can match the performance of the standard expectation-maximization approach for maximizing likelihood (from the point of view of that particular medical task), but at an order of magnitude less computational cost."
]
}
|
1208.3805
|
1977769089
|
The block Kaczmarz method is an iterative scheme for solving overdetermined least-squares problems. At each step, the algorithm projects the current iterate onto the solution space of a subset of the constraints. This paper describes a block Kaczmarz algorithm that uses a randomized control scheme to choose the subset at each step. This algorithm is the first block Kaczmarz method with an (expected) linear rate of convergence that can be expressed in terms of the geometric properties of the matrix and its submatrices. The analysis reveals that the algorithm is most effective when it is given a good row paving of the matrix, a partition of the rows into well-conditioned blocks. The operator theory literature provides detailed information about the existence and construction of good row pavings. Together, these results yield an efficient block Kaczmarz scheme that applies to many overdetermined least-squares problems.
|
The paper @cite_19 of Strohmer and Vershynin is significant because it provides the first explicit convergence proof for a randomized variant of the Kaczmarz algorithm. This work establishes that a randomized control scheme leads to an expected linear convergence rate, which can be written in terms of geometric properties of the matrix. In contrast, deterministic convergence analyses that appear in the literature often lead to expressions, e.g., [Eqn. (1.2), XZ02:Method-Alternating], whose geometric meaning is not evident.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"2048305372"
],
"abstract": [
"The Kaczmarz method for solving linear systems of equations is an iterative algorithm that has found many applications ranging from computer tomography to digital signal processing. Despite the popularity of this method, useful theoretical estimates for its rate of convergence are still scarce. We introduce a randomized version of the Kaczmarz method for consistent, overdetermined linear systems and we prove that it converges with expected exponential rate. Furthermore, this is the first solver whose rate does not depend on the number of equations in the system. The solver does not even need to know the whole system but only a small random part of it. It thus outperforms all previously known methods on general extremely overdetermined systems. Even for moderately overdetermined systems, numerical simulations as well as theoretical analysis reveal that our algorithm can converge faster than the celebrated conjugate gradient algorithm. Furthermore, our theory and numerical simulations confirm a prediction of in the context of reconstructing bandlimited functions from nonuniform sampling."
]
}
|
1208.3805
|
1977769089
|
The block Kaczmarz method is an iterative scheme for solving overdetermined least-squares problems. At each step, the algorithm projects the current iterate onto the solution space of a subset of the constraints. This paper describes a block Kaczmarz algorithm that uses a randomized control scheme to choose the subset at each step. This algorithm is the first block Kaczmarz method with an (expected) linear rate of convergence that can be expressed in terms of the geometric properties of the matrix and its submatrices. The analysis reveals that the algorithm is most effective when it is given a good row paving of the matrix, a partition of the rows into well-conditioned blocks. The operator theory literature provides detailed information about the existence and construction of good row pavings. Together, these results yield an efficient block Kaczmarz scheme that applies to many overdetermined least-squares problems.
|
In the wake of Strohmer and Vershynin's work @cite_19 , several other researchers have written about randomized versions of the Kaczmarz scheme and related topics. In particular, Needell demonstrates that the randomized Kaczmarz method converges, even when the linear system is inconsistent @cite_1 . Zouzias and Freris @cite_47 exhibit a randomized procedure, based on ideas from @cite_13 , that can reduce the size of the residual @math . Leventhal and Lewis @cite_51 provide an analysis of a randomized iteration for solving least-squares problems with polyhedral constraints, while Richtárik and Takáč have extended these ideas to more general optimization problems @cite_39 . Some other references include @cite_25 , @cite_39 , @cite_37 , and @cite_16 .
|
{
"cite_N": [
"@cite_37",
"@cite_1",
"@cite_39",
"@cite_19",
"@cite_51",
"@cite_47",
"@cite_16",
"@cite_13",
"@cite_25"
],
"mid": [
"1976661283",
"2064412498",
"2117686388",
"2048305372",
"2167820643",
"1988844573",
"2950353378",
"2059044337",
"2036835849"
],
"abstract": [
"The Kaczmarz algorithm is an iterative method for reconstructing a signal x∈ℝ^d from an overcomplete collection of linear measurements y_n = 〈x, φ_n〉, n≥1. We prove quantitative bounds on the rate of almost sure exponential convergence in the Kaczmarz algorithm for suitable classes of random measurement vectors (φ_n)_{n=1}^∞ ⊂ ℝ^d. Refined convergence results are given for the special case when each φ_n has i.i.d. Gaussian entries and, more generally, when each φ_n/∥φ_n∥ is uniformly distributed on the unit sphere S^{d-1}. This work on almost sure convergence complements the mean squared error analysis of Strohmer and Vershynin for randomized versions of the Kaczmarz algorithm.",
"The Kaczmarz method is an iterative algorithm for solving systems of linear equations Ax=b. Theoretical convergence rates for this algorithm were largely unknown until recently when work was done on a randomized version of the algorithm. It was proved that for overdetermined systems, the randomized Kaczmarz method converges with expected exponential rate, independent of the number of equations in the system. Here we analyze the case where the system Ax=b is corrupted by noise, so we consider the system Ax≈b+r where r is an arbitrary error vector. We prove that in this noisy version, the randomized method reaches an error threshold dependent on the matrix A with the same rate as in the error-free case. We provide examples showing our results are sharp in the general context.",
"In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an ε-accurate solution with probability at least 1−ρ in at most O((n/ε) log(1/ρ)) iterations, where n is the number of blocks. This extends recent results of Nesterov (SIAM J Optim 22(2): 341–362, 2012), which cover the smooth case, to composite minimization, while at the same time improving the complexity by the factor of 4 and removing ε from the logarithmic term. More importantly, in contrast with the aforementioned work in which the author achieves the results by applying the method to a regularized version of the objective function with an unknown scaling factor, we show that this is not necessary, thus achieving first true iteration complexity bounds. For strongly convex functions the method converges linearly. In the smooth case we also allow for arbitrary probability vectors and non-Euclidean norms. Finally, we demonstrate numerically that the algorithm is able to solve huge-scale ℓ1-regularized least squares problems with a billion variables.",
"The Kaczmarz method for solving linear systems of equations is an iterative algorithm that has found many applications ranging from computer tomography to digital signal processing. Despite the popularity of this method, useful theoretical estimates for its rate of convergence are still scarce. We introduce a randomized version of the Kaczmarz method for consistent, overdetermined linear systems and we prove that it converges with expected exponential rate. Furthermore, this is the first solver whose rate does not depend on the number of equations in the system. The solver does not even need to know the whole system but only a small random part of it. It thus outperforms all previously known methods on general extremely overdetermined systems. Even for moderately overdetermined systems, numerical simulations as well as theoretical analysis reveal that our algorithm can converge faster than the celebrated conjugate gradient algorithm. Furthermore, our theory and numerical simulations confirm a prediction of in the context of reconstructing bandlimited functions from nonuniform sampling.",
"We study randomized variants of two classical algorithms: coordinate descent for systems of linear equations and iterated projections for systems of linear inequalities. Expanding on a recent randomized iterated projection algorithm of Strohmer and Vershynin (Strohmer, T., R. Vershynin. 2009. A randomized Kaczmarz algorithm with exponential convergence. J. Fourier Anal. Appl. 15 262–278) for systems of linear equations, we show that, under appropriate probability distributions, the linear rates of convergence (in expectation) can be bounded in terms of natural linear-algebraic condition numbers for the problems. We relate these condition measures to distances to ill-posedness and discuss generalizations to convex systems under metric regularity assumptions.",
"We present a randomized iterative algorithm that exponentially converges in the mean square to the minimum @math -norm least squares solution of a given linear system of equations. The expected number of arithmetic operations required to obtain an estimate of given accuracy is proportional to the squared condition number of the system multiplied by the number of nonzero entries of the input matrix. The proposed algorithm is an extension of the randomized Kaczmarz method that was analyzed by Strohmer and Vershynin.",
"We present a Projection onto Convex Sets (POCS) type algorithm for solving systems of linear equations. POCS methods have found many applications ranging from computer tomography to digital signal and image processing. The Kaczmarz method is one of the most popular solvers for overdetermined systems of linear equations due to its speed and simplicity. Here we introduce and analyze an extension of the Kaczmarz method that iteratively projects the estimate onto a solution space given by two randomly selected rows. We show that this projection algorithm provides exponential convergence to the solution in expectation. The convergence rate improves upon that of the standard randomized Kaczmarz method when the system has correlated rows. Experimental results confirm that in this case our method significantly outperforms the randomized Kaczmarz method.",
"There exist many classes of block-projections algorithms for approximating solutions of linear least-squares problems. Generally, these methods generate sequences convergent to the minimal norm least-squares solution only for consistent problems. In the inconsistent case, which usually appears in practice because of some approximations or measurements, these sequences no longer converge to a least-squares solution or they converge to the minimal norm solution of a “perturbed” problem. In the present paper, we overcome this difficulty by constructing extensions for almost all the above classes of block-projections methods. We prove that the sequences generated with these extensions always converge to a least-squares solution and, with a suitable initial approximation, to the minimal norm solution of the problem. Numerical experiments, described in the last section of the paper, confirm the theoretical results obtained.",
"The Kaczmarz method is an algorithm for finding the solution to an overdetermined consistent system of linear equations Ax = b by iteratively projecting onto the solution spaces. The randomized version put forth by Strohmer and Vershynin yields provably exponential convergence in expectation, which for highly overdetermined systems even outperforms the conjugate gradient method. In this article we present a modified version of the randomized Kaczmarz method which at each iteration selects the optimal projection from a randomly chosen set, which in most cases significantly improves the convergence rate. We utilize a Johnson–Lindenstrauss dimension reduction technique to keep the runtime on the same order as the original randomized version, adding only extra preprocessing time. We present a series of empirical studies which demonstrate the remarkable acceleration in convergence to the solution using this modified approach."
]
}
|
1208.3805
|
1977769089
|
The block Kaczmarz method is an iterative scheme for solving overdetermined least-squares problems. At each step, the algorithm projects the current iterate onto the solution space of a subset of the constraints. This paper describes a block Kaczmarz algorithm that uses a randomized control scheme to choose the subset at each step. This algorithm is the first block Kaczmarz method with an (expected) linear rate of convergence that can be expressed in terms of the geometric properties of the matrix and its submatrices. The analysis reveals that the algorithm is most effective when it is given a good row paving of the matrix, a partition of the rows into well-conditioned blocks. The operator theory literature provides detailed information about the existence and construction of good row pavings. Together, these results yield an efficient block Kaczmarz scheme that applies to many overdetermined least-squares problems.
|
The block Kaczmarz update rule we are studying is originally due to Elfving [Eqn. (2.2), Elf80:Block-Iterative-Methods]. This update is a special case of a general framework due to @cite_48 . Byrne describes a number of other block Kaczmarz methods in his book [Chap. 9, Byr08:Applied-Iterative].
|
{
"cite_N": [
"@cite_48"
],
"mid": [
"2053342145"
],
"abstract": [
"Abstract We present a unifying framework for a wide class of iterative methods in numerical linear algebra. In particular, the class of algorithms contains Kaczmarz's and Richardson's methods for the regularized weighted least squares problem with weighted norm. The convergence theory for this class of algorithms yields as corollaries the usual convergence conditions for Kaczmarz's and Richardson's methods. The algorithms in the class may be characterized as being group-iterative, and incorporate relaxation matrices, as opposed to a single relaxation parameter. We show that some well-known iterative methods of image reconstruction fall into the class of algorithms under consideration, and are thus covered by the convergence theory. We also describe a novel application to truly three-dimensional image reconstruction."
]
}
|
1208.3805
|
1977769089
|
The block Kaczmarz method is an iterative scheme for solving overdetermined least-squares problems. At each step, the algorithm projects the current iterate onto the solution space of a subset of the constraints. This paper describes a block Kaczmarz algorithm that uses a randomized control scheme to choose the subset at each step. This algorithm is the first block Kaczmarz method with an (expected) linear rate of convergence that can be expressed in terms of the geometric properties of the matrix and its submatrices. The analysis reveals that the algorithm is most effective when it is given a good row paving of the matrix, a partition of the rows into well-conditioned blocks. The operator theory literature provides detailed information about the existence and construction of good row pavings. Together, these results yield an efficient block Kaczmarz scheme that applies to many overdetermined least-squares problems.
|
Most of the literature on block Kaczmarz methods assumes that a partition @math of the rows of the matrix is provided as part of the problem data. We are aware of some research on the prospects for partitioning a matrix in a manner that is favorable for block Kaczmarz methods. In particular, Popa @cite_17 has introduced an algorithm for partitioning a sparse matrix so that each block contains mutually orthogonal rows. Popa has pursued this idea in a sequence of papers, including @cite_46 @cite_30 . We believe that our work is the first to recognize the natural connection between the paving literature and the block Kaczmarz method.
|
{
"cite_N": [
"@cite_30",
"@cite_46",
"@cite_17"
],
"mid": [
"24152352",
"1500306269",
"1516786830"
],
"abstract": [
"In this paper we describe an iterative algorithm for numerical solution of ill-conditioned inconsistent symmetric linear least-squares problems arising from collocation discretization of first kind integral equations. It is constructed by successive application of Kaczmarz Extended method and an appropriate version of Kovarik's approximate orthogonalization algorithm. In this way we obtain a preconditioned version of Kaczmarz algorithm for which we prove convergence and make an analysis concerning the computational effort per iteration. Numerical experiments are also presented.",
"In some previous papers the author extended two algorithms proposed by Z. Kovarik for approximate orthogonalization of a finite set of linearly independent vectors from a Hilbert space, to the case when the vectors are rows (not necessarily linearly independent) of an arbitrary rectangular matrix. In this paper we describe combinations between these two methods and the classical Kaczmarz's iteration. We prove that, in the case of a consistent least-squares problem, the new algorithms so obtained converge to any of its solutions (depending on the initial approximation). The numerical experiments described in the last section of the paper on a problem obtained after the discretization of a first kind integral equation illustrate the fast convergence of the new algorithms.",
"In this paper we present an algorithm which, for a given (sparse) matrix, constructs a partition of its set of row-indices, such that each subset of this partition (except the last one obtained) contains indices which correspond to mutually orthogonal rows. We then use such decompositions in some classes of block-projections methods, previously extended by the author to general inconsistent linear least-squares problems. Numerical experiments on an inconsistent and rank-deficient least-squares model problem are described in the last section of the paper."
]
}
|
1208.3530
|
1486959242
|
The New York Public Library is participating in the Chronicling America initiative to develop an online searchable database of historically significant newspaper articles. Microfilm copies of the newspapers are scanned and high resolution Optical Character Recognition (OCR) software is run on them. The text from the OCR provides a wealth of data and opinion for researchers and historians. However, categorization of articles provided by the OCR engine is rudimentary and a large number of the articles are labeled editorial without further grouping. Manually sorting articles into fine-grained categories is time consuming if not impossible given the size of the corpus. This paper studies techniques for automatic categorization of newspaper articles so as to enhance search and retrieval on the archive. We explore unsupervised (e.g. KMeans) and semi-supervised (e.g. constrained clustering) learning algorithms to develop article categorization schemes geared towards the needs of end-users. A pilot study was designed to understand whether there was unanimous agreement amongst patrons regarding how articles can be categorized. It was found that the task was very subjective and consequently automated algorithms that could deal with subjective labels were used. While the small scale pilot study was extremely helpful in designing machine learning algorithms, a much larger system needs to be developed to collect annotations from users of the archive. The "BODHI" system currently being developed is a step in that direction, allowing users to correct wrongly scanned OCR and providing keywords and tags for newspaper articles used frequently. On successful implementation of the beta version of this system, we hope that it can be integrated with existing software being developed for the Chronicling America project.
|
In semi-supervised clustering algorithms, labeled data has been used to provide iterative feedback @cite_31 and conditional distributions in auxiliary space @cite_62 . Seeding mechanisms for semi-supervised clustering have been studied in @cite_2 , @cite_20 . @cite_56 were able to show that by allowing instance-level constraints to have space-level inductive implications, improved methods of clustering can be obtained with very limited supervisory information.
|
{
"cite_N": [
"@cite_62",
"@cite_56",
"@cite_2",
"@cite_31",
"@cite_20"
],
"mid": [
"2111389053",
"1596382552",
"2102524069",
"1564583583",
"2134089414"
],
"abstract": [
"We study the problem of learning groups or categories that are local in the continuous primary space but homogeneous by the distributions of an associated auxiliary random variable over a discrete auxiliary space. Assuming that variation in the auxiliary space is meaningful, categories will emphasize similarly meaningful aspects of the primary space. From a data set consisting of pairs of primary and auxiliary items, the categories are learned by minimizing a Kullback-Leibler divergence-based distortion between (implicitly estimated) distributions of the auxiliary data, conditioned on the primary data. Still, the categories are defined in terms of the primary space. An online algorithm resembling the traditional Hebb-type competitive learning is introduced for learning the categories. Minimizing the distortion criterion turns out to be equivalent to maximizing the mutual information between the categories and the auxiliary data. In addition, connections to density estimation and to the distributional clustering paradigm are outlined. The method is demonstrated by clustering yeast gene expression data from DNA chips, with biological knowledge about the functional classes of the genes as the auxiliary data.",
"We present an improved method for clustering in the presence of very limited supervisory information, given as pairwise instance constraints. By allowing instance-level constraints to have space-level inductive implications, we are able to successfully incorporate constraints for a wide range of data set types. Our method greatly improves on the previously studied constrained k-means algorithm, generally requiring less than half as many constraints to achieve a given accuracy on a range of real-world data, while also being more robust when over-constrained. We additionally discuss an active learning algorithm which increases the value of constraints even further.",
"",
"We present an approach to clustering based on the observation that \"it is easier to criticize than to construct.\" Our approach of semi-supervised clustering allows a user to iteratively provide feedback to a clustering algorithm. The feedback is incorporated in the form of constraints, which the clustering algorithm attempts to satisfy on future iterations. These constraints allow the user to guide the clusterer toward clusterings of the data that the user finds more useful. We demonstrate semi-supervised clustering with a system that learns to cluster news stories from a Reuters data set.",
"Clustering is traditionally viewed as an unsupervised method for data analysis. However, in some cases information about the problem domain is available in addition to the data instances themselves. In this paper, we demonstrate how the popular k-means clustering algorithm can be profitably modified to make use of this information. In experiments with artificial constraints on six data sets, we observe improvements in clustering accuracy. We also apply this method to the real-world problem of automatically detecting road lanes from GPS data and observe dramatic increases in performance."
]
}
|
1208.3398
|
2949873240
|
The dynamics of an agreement protocol interacting with a disagreement process over a common random network is considered. The model can represent the spreading of true and false information over a communication network, the propagation of faults in a large-scale control system, or the development of trust and mistrust in a society. At each time instance and with a given probability, a pair of network nodes are selected to interact. At random each of the nodes then updates its state towards the state of the other node (attraction), away from the other node (repulsion), or sticks to its current state (neglect). Agreement convergence and disagreement divergence results are obtained for various strengths of the updates for both symmetric and asymmetric update rules. Impossibility theorems show that a specific level of attraction is required for almost sure asymptotic agreement and a specific level of repulsion is required for almost sure asymptotic disagreement. A series of sufficient and or necessary conditions are then established for agreement convergence or disagreement divergence. In particular, under symmetric updates, a critical convergence measure in the attraction and repulsion update strength is found, in the sense that the asymptotic property of the network state evolution transits from agreement convergence to disagreement divergence when this measure goes from negative to positive. The result can be interpreted as a tight bound on how much bad action needs to be injected in a dynamic network in order to consistently steer its overall behavior away from consensus.
|
The structure of complex networks, and the dynamics of the internal states of the nodes in these networks, are two fundamental issues in the study of network science @cite_54 @cite_17 .
|
{
"cite_N": [
"@cite_54",
"@cite_17"
],
"mid": [
"2147824439",
"2101420429"
],
"abstract": [
"A process for finishing keratinous material, especially rendering it shrink-resistant or imparting to it durably pressed effects, comprises 1. TREATING THE MATERIAL WITH A POLYTHIOL ESTER OF THE FORMULA [OH]q(s) ¦ R1-(CO)rO(CO)sR2SH]p ¦ [COOH]q(r) WHERE R1 represents an aliphatic or araliphatic hydrocarbon radical of at least 2 carbon atoms, which may contain not more than one ether oxygen atom, R2 represents a hydrocarbon radical, P IS AN INTEGER OF FROM 2 TO 6, Q IS ZERO OR A POSITIVE INTEGER OF AT MOST 3, SUCH THAT (P + Q) IS AT MOST 6, AND R AND S EACH REPRESENT ZERO OR 1 BUT ARE NOT THE SAME, AND 2. CURING THE POLYTHIOL ESTER ON THE MATERIAL BY MEANS OF A POLYENE CONTAINING, PER AVERAGE MOLECULE, AT LEAST TWO ETHYLENIC DOUBLE BONDS EACH beta TO AN OXYGEN, NITROGEN, OR SULFUR ATOM, THE SUM OF SUCH ETHYLENIC DOUBLE BONDS IN THE POLYENE AND OF THE MERCAPTAN GROUPS IN THE POLYTHIOL ESTER BEING MORE THAN 4 AND THE COMBINED WEIGHT OF THE POLYENE AND THE POLYTHIOL ESTER BEING FROM 0.5 TO 15 BY WEIGHT OF THE KERATINOUS MATERIAL TREATED.",
"The ultimate proof of our understanding of natural or technological systems is reflected in our ability to control them. Although control theory offers mathematical tools for steering engineered and natural systems towards a desired state, a framework to control complex self-organized systems is lacking. Here we develop analytical tools to study the controllability of an arbitrary complex directed network, identifying the set of driver nodes with time-dependent control that can guide the system’s entire dynamics. We apply these tools to several real networks, finding that the number of driver nodes is determined mainly by the network’s degree distribution. We show that sparse inhomogeneous networks, which emerge in many real complex systems, are the most difficult to control, but that dense and homogeneous networks can be controlled using a few driver nodes. Counterintuitively, we find that in both model and real systems the driver nodes tend to avoid the high-degree nodes. Control theory can be used to steer engineered and natural systems towards a desired state, but a framework to control complex self-organized systems is lacking. Can such networks be controlled? Albert-Laszlo Barabasi and colleagues tackle this question and arrive at precise mathematical answers that amount to 'yes, up to a point'. They develop analytical tools to study the controllability of an arbitrary complex directed network using both model and real systems, ranging from regulatory, neural and metabolic pathways in living organisms to food webs, cell-phone movements and social interactions. They identify the minimum set of driver nodes whose time-dependent control can guide the system's entire dynamics ( http: go.nature.com wd9Ek2 ). Surprisingly, these are not usually located at the network hubs."
]
}
|
1208.3398
|
2949873240
|
The dynamics of an agreement protocol interacting with a disagreement process over a common random network is considered. The model can represent the spreading of true and false information over a communication network, the propagation of faults in a large-scale control system, or the development of trust and mistrust in a society. At each time instance and with a given probability, a pair of network nodes are selected to interact. At random each of the nodes then updates its state towards the state of the other node (attraction), away from the other node (repulsion), or sticks to its current state (neglect). Agreement convergence and disagreement divergence results are obtained for various strengths of the updates for both symmetric and asymmetric update rules. Impossibility theorems show that a specific level of attraction is required for almost sure asymptotic agreement and a specific level of repulsion is required for almost sure asymptotic disagreement. A series of sufficient and or necessary conditions are then established for agreement convergence or disagreement divergence. In particular, under symmetric updates, a critical convergence measure in the attraction and repulsion update strength is found, in the sense that the asymptotic property of the network state evolution transits from agreement convergence to disagreement divergence when this measure goes from negative to positive. The result can be interpreted as a tight bound on how much bad action needs to be injected in a dynamic network in order to consistently steer its overall behavior away from consensus.
|
Probabilistic models for networks, such as random graphs, provide an important and convenient means for modeling large-scale systems, and have found numerous applications in various fields of science. The classical Erdős–Rényi model, in which each edge exists randomly and independently of others with a given probability, was studied in @cite_1 . The degree distribution of the Erdős–Rényi graph is asymptotically Poisson. Generalized models were proposed in @cite_53 and @cite_20 , for which the degree distribution satisfies a certain power law that better matches the properties of real-life networks such as the Internet. A detailed introduction to the structure of random networks can be found in @cite_57 @cite_54 .
|
{
"cite_N": [
"@cite_53",
"@cite_54",
"@cite_1",
"@cite_57",
"@cite_20"
],
"mid": [
"2112090702",
"2147824439",
"2908457301",
"1627599966",
"2008620264"
],
"abstract": [
"Networks of coupled dynamical systems have been used to model biological oscillators1,2,3,4, Josephson junction arrays5,6, excitable media7, neural networks8,9,10, spatial games11, genetic control networks12 and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon13,14 (popularly known as six degrees of separation15). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.",
"A process for finishing keratinous material, especially rendering it shrink-resistant or imparting to it durably pressed effects, comprises 1. TREATING THE MATERIAL WITH A POLYTHIOL ESTER OF THE FORMULA [OH]q(s) ¦ R1-(CO)rO(CO)sR2SH]p ¦ [COOH]q(r) WHERE R1 represents an aliphatic or araliphatic hydrocarbon radical of at least 2 carbon atoms, which may contain not more than one ether oxygen atom, R2 represents a hydrocarbon radical, P IS AN INTEGER OF FROM 2 TO 6, Q IS ZERO OR A POSITIVE INTEGER OF AT MOST 3, SUCH THAT (P + Q) IS AT MOST 6, AND R AND S EACH REPRESENT ZERO OR 1 BUT ARE NOT THE SAME, AND 2. CURING THE POLYTHIOL ESTER ON THE MATERIAL BY MEANS OF A POLYENE CONTAINING, PER AVERAGE MOLECULE, AT LEAST TWO ETHYLENIC DOUBLE BONDS EACH beta TO AN OXYGEN, NITROGEN, OR SULFUR ATOM, THE SUM OF SUCH ETHYLENIC DOUBLE BONDS IN THE POLYENE AND OF THE MERCAPTAN GROUPS IN THE POLYTHIOL ESTER BEING MORE THAN 4 AND THE COMBINED WEIGHT OF THE POLYENE AND THE POLYTHIOL ESTER BEING FROM 0.5 TO 15 BY WEIGHT OF THE KERATINOUS MATERIAL TREATED.",
"",
"Only recently did mankind realise that it resides in a world of networks. The Internet and World Wide Web are changing our life. Our physical existence is based on various biological networks. We have recently learned that the term \"network\" turns out to be a central notion in our time, and the consequent explosion of interest in networks is a social and cultural phenomenon. The principles of the complex organization and evolution of networks, natural and artificial, are the topic of this book, which is written by physicists and is addressed to all involved researchers and students. The aim of the text is to understand networks and the basic principles of their structural organization and evolution. The ideas are presented in a clear and pedagogical way, with minimal mathematics, so even students without a deep knowledge of mathematics and statistical physics will be able to rely on this as a reference. Special attention is given to real networks, both natural and artificial. Collected empirical data and numerous real applications of existing theories are discussed in detail, as well as the topical problems of communication networks. Available in OSO: http: www.oxfordscholarship.com oso public content physics 9780198515906 toc.html",
"Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems."
]
}
|
1208.3398
|
2949873240
|
The dynamics of an agreement protocol interacting with a disagreement process over a common random network is considered. The model can represent the spreading of true and false information over a communication network, the propagation of faults in a large-scale control system, or the development of trust and mistrust in a society. At each time instance and with a given probability, a pair of network nodes are selected to interact. At random each of the nodes then updates its state towards the state of the other node (attraction), away from the other node (repulsion), or sticks to its current state (neglect). Agreement convergence and disagreement divergence results are obtained for various strengths of the updates for both symmetric and asymmetric update rules. Impossibility theorems show that a specific level of attraction is required for almost sure asymptotic agreement and a specific level of repulsion is required for almost sure asymptotic disagreement. A series of sufficient and or necessary conditions are then established for agreement convergence or disagreement divergence. In particular, under symmetric updates, a critical convergence measure in the attraction and repulsion update strength is found, in the sense that the asymptotic property of the network state evolution transits from agreement convergence to disagreement divergence when this measure goes from negative to positive. The result can be interpreted as a tight bound on how much bad action needs to be injected in a dynamic network in order to consistently steer its overall behavior away from consensus.
|
When networked information processing is executed on top of an underlying network, nodes are endowed with internal states that evolve as the nodes interact. The dynamics of the node states depend on the particular problem under investigation. For instance, the boids model was introduced in @cite_11 to model swarm behavior and animal groups, followed by Vicsek's model in @cite_9 . Models of opinion dynamics in social networks were considered in @cite_45 @cite_41 @cite_30 and the dynamics of communication protocols in @cite_32 . Distributed averaging or consensus algorithms have relatively simple dynamics for the network state evolution and serve as a basic model for the complex interaction between node state dynamics and the dynamics of the underlying communication graph.
|
{
"cite_N": [
"@cite_30",
"@cite_41",
"@cite_9",
"@cite_32",
"@cite_45",
"@cite_11"
],
"mid": [
"2120015072",
"",
"2015410655",
"",
"1998692453",
"2150312211"
],
"abstract": [
"We provide a model to investigate the tension between information aggregation and spread of misinformation. Individuals meet pairwise and exchange information, which is modeled as both individuals adopting the average of their pre-meeting beliefs. \"Forceful\" agents influence the beliefs of (some of) the other individuals they meet, but do not change their own opinions. We characterize how the presence of forceful agents interferes with information aggregation. Under the assumption that even forceful agents obtain some information from others, we first show that all beliefs converge to a stochastic consensus. Our main results quantify the extent of misinformation by providing bounds or exact results on the gap between the consensus value and the benchmark without forceful agents (where there is efficient information aggregation). The worst outcomes obtain when there are several forceful agents who update their beliefs only on the basis of information from individuals that have been influenced by them.",
"",
"A simple model with a novel type of dynamics is introduced in order to investigate the emergence of self-ordered motion in systems of particles with biologically motivated interaction. In our model particles are driven with a constant absolute velocity and at each time step assume the average direction of motion of the particles in their neighborhood with some random perturbation @math added. We present numerical evidence that this model results in a kinetic phase transition from no transport (zero average velocity, @math ) to finite net transport through spontaneous symmetry breaking of the rotational symmetry. The transition is continuous, since @math is found to scale as @math with @math .",
"",
"Abstract Consider a group of individuals who must act together as a team or committee, and suppose that each individual in the group has his own subjective probability distribution for the unknown value of some parameter. A model is presented which describes how the group might reach agreement on a common subjective probability distribution for the parameter by pooling their individual opinions. The process leading to the consensus is explicitly described and the common distribution that is reached is explicitly determined. The model can also be applied to problems of reaching a consensus when the opinion of each member of the group is represented simply as a point estimate of the parameter rather than as a probability distribution.",
"The aggregate motion of a flock of birds, a herd of land animals, or a school of fish is a beautiful and familiar part of the natural world. But this type of complex motion is rarely seen in computer animation. This paper explores an approach based on simulation as an alternative to scripting the paths of each bird individually. The simulated flock is an elaboration of a particle systems, with the simulated birds being the particles. The aggregate motion of the simulated flock is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course. Each simulated bird is implemented as an independent actor that navigates according to its local perception of the dynamic environment, the laws of simulated physics that rule its motion, and a set of behaviors programmed into it by the \"animator.\" The aggregate motion of the simulated flock is the result of the dense interaction of the relatively simple behaviors of the individual simulated birds."
]
}
|
1208.3398
|
2949873240
|
The dynamics of an agreement protocol interacting with a disagreement process over a common random network is considered. The model can represent the spreading of true and false information over a communication network, the propagation of faults in a large-scale control system, or the development of trust and mistrust in a society. At each time instance and with a given probability, a pair of network nodes are selected to interact. At random each of the nodes then updates its state towards the state of the other node (attraction), away from the other node (repulsion), or sticks to its current state (neglect). Agreement convergence and disagreement divergence results are obtained for various strengths of the updates for both symmetric and asymmetric update rules. Impossibility theorems show that a specific level of attraction is required for almost sure asymptotic agreement and a specific level of repulsion is required for almost sure asymptotic disagreement. A series of sufficient and or necessary conditions are then established for agreement convergence or disagreement divergence. In particular, under symmetric updates, a critical convergence measure in the attraction and repulsion update strength is found, in the sense that the asymptotic property of the network state evolution transits from agreement convergence to disagreement divergence when this measure goes from negative to positive. The result can be interpreted as a tight bound on how much bad action needs to be injected in a dynamic network in order to consistently steer its overall behavior away from consensus.
|
Convergence to agreement for averaging algorithms has been extensively studied in the literature. Early results were developed in a general setting for studying the ergodicity of nonhomogeneous Markov chains @cite_46 @cite_18 . Deterministic models have been investigated to establish connectivity conditions that ensure consensus convergence @cite_27 @cite_24 @cite_36 @cite_22 @cite_55 @cite_2 @cite_31 @cite_3 @cite_51 . Averaging algorithms over random graphs have also been considered @cite_37 @cite_56 @cite_0 @cite_8 @cite_14 @cite_4 @cite_42 .
|
{
"cite_N": [
"@cite_18",
"@cite_31",
"@cite_37",
"@cite_22",
"@cite_14",
"@cite_8",
"@cite_36",
"@cite_4",
"@cite_55",
"@cite_42",
"@cite_3",
"@cite_56",
"@cite_24",
"@cite_0",
"@cite_27",
"@cite_2",
"@cite_46",
"@cite_51"
],
"mid": [
"2032803680",
"",
"2115428923",
"",
"",
"2134252677",
"",
"",
"",
"",
"",
"",
"",
"",
"2154834860",
"",
"",
"2099589402"
],
"abstract": [
"exists and all the rows of Q are the same. SIA matrices are defined differently in books on probability theory; see, for example, [1] or [2]. The latter definition is more intuitive, takes longer to state, is easier to verify, and explains why the probabilist is interested in SIA matrices. A theorem in probability theory or matrix theory then says that the customary definition is equivalent to the one we have given. The latter is brief and emphasizes the property which will interest us in this note. We define S(P) by",
"",
"We consider the agreement problem over random information networks. In a random network, the existence of an information channel between a pair of elements at each time instance is probabilistic and independent of other channels; hence, the topology of the network varies over time. In such a framework, we address the asymptotic agreement for the networked elements via notions from stochastic stability. Furthermore, we delineate on the rate of convergence as it relates to the algebraic connectivity of random graphs.",
"",
"",
"Various randomized consensus algorithms have been proposed in the literature. In some case randomness is due to the choice of a randomized network communication protocol. In other cases, randomness is simply caused by the potential unpredictability of the environment in which the distributed consensus algorithm is implemented. Conditions ensuring the convergence of these algorithms have already been proposed in the literature. As far as the rate of convergence of such algorithms, two approaches can be proposed. One is based on a mean square analysis, while a second is based on the concept of Lyapunov exponent. In this paper, by some concentration results, we prove that the mean square convergence analysis is the right approach when the number of agents is large. Differently from the existing literature, in this paper we do not stick to average preserving algorithms. Instead, we allow to reach consensus at a point which may differ from the average of the initial states. The advantage of such algorithms is that they do not require bidirectional communication among agents and thus they apply to more general contexts. Moreover, in many important contexts it is possible to prove that the displacement from the initial average tends to zero, when the number of agents goes to infinity.",
"",
"",
"",
"",
"",
"",
"",
"",
"We present a model for asynchronous distributed computation and then proceed to analyze the convergence of natural asynchronous distributed versions of a large class of deterministic and stochastic gradient-like algorithms. We show that such algorithms retain the desirable convergence properties of their centralized counterparts, provided that the time between consecutive interprocessor communications and the communication delays are not too large.",
"",
"",
"We study a model of opinion dynamics introduced by Krause: each agent has an opinion represented by a real number, and updates its opinion by averaging all agent opinions that differ from its own by less than one. We give a new proof of convergence into clusters of agents, with all agents in the same cluster holding the same opinion. We then introduce a particular notion of equilibrium stability and provide lower bounds on the inter-cluster distances at a stable equilibrium. To better understand the behavior of the system when the number of agents is large, we also introduce and study a variant involving a continuum of agents, obtaining partial convergence results and lower bounds on inter-cluster distances, under some mild assumptions."
]
}
|
1208.3398
|
2949873240
|
The dynamics of an agreement protocol interacting with a disagreement process over a common random network is considered. The model can represent the spreading of true and false information over a communication network, the propagation of faults in a large-scale control system, or the development of trust and mistrust in a society. At each time instance and with a given probability, a pair of network nodes are selected to interact. At random each of the nodes then updates its state towards the state of the other node (attraction), away from the other node (repulsion), or sticks to its current state (neglect). Agreement convergence and disagreement divergence results are obtained for various strengths of the updates for both symmetric and asymmetric update rules. Impossibility theorems show that a specific level of attraction is required for almost sure asymptotic agreement and a specific level of repulsion is required for almost sure asymptotic disagreement. A series of sufficient and or necessary conditions are then established for agreement convergence or disagreement divergence. In particular, under symmetric updates, a critical convergence measure in the attraction and repulsion update strength is found, in the sense that the asymptotic property of the network state evolution transits from agreement convergence to disagreement divergence when this measure goes from negative to positive. The result can be interpreted as a tight bound on how much bad action needs to be injected in a dynamic network in order to consistently steer its overall behavior away from consensus.
|
The model we introduce and analyze in this paper can be viewed as an extension of the model discussed by @cite_30 , who used a gossip algorithm to describe the spread of misinformation in social networks. In their model, the state of each node is viewed as its belief and the randomized gossip algorithm characterizes the dynamics of the belief evolution. We believe that our model is one of the first to consider faulty and misbehaving nodes in gossip algorithms. While the distributed systems community has long recognized the need for fault-tolerant systems, e.g. , @cite_16 @cite_10 , efforts to provide similar results for randomized gossiping algorithms have so far been limited. This paper aims at providing such results.
|
{
"cite_N": [
"@cite_30",
"@cite_16",
"@cite_10"
],
"mid": [
"2120015072",
"2126924915",
"2126906505"
],
"abstract": [
"We provide a model to investigate the tension between information aggregation and spread of misinformation. Individuals meet pairwise and exchange information, which is modeled as both individuals adopting the average of their pre-meeting beliefs. \"Forceful\" agents influence the beliefs of (some of) the other individuals they meet, but do not change their own opinions. We characterize how the presence of forceful agents interferes with information aggregation. Under the assumption that even forceful agents obtain some information from others, we first show that all beliefs converge to a stochastic consensus. Our main results quantify the extent of misinformation by providing bounds or exact results on the gap between the consensus value and the benchmark without forceful agents (where there is efficient information aggregation). The worst outcomes obtain when there are several forceful agents who update their beliefs only on the basis of information from individuals that have been influenced by them.",
"The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor. It is shown that the problem is solvable for, and only for, n ≥ 3 m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods.",
"This paper considers a variant of the Byzantine Generals problem, in which processes start with arbitrary real values rather than Boolean values or values from some bounded range, and in which approximate, rather than exact, agreement is the desired goal. Algorithms are presented to reach approximate agreement in asynchronous, as well as synchronous systems. The asynchronous agreement algorithm is an interesting contrast to a result of , who show that exact agreement with guaranteed termination is not attainable in an asynchronous system with as few as one faulty process. The algorithms work by successive approximation, with a provable convergence rate that depends on the ratio between the number of faulty processes and the total number of processes. Lower bounds on the convergence rate for algorithms of this form are proved, and the algorithms presented are shown to be optimal."
]
}
|
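The attraction/repulsion/neglect update rule described in the record above can be sketched with a small simulation. This is an illustrative sketch only: the step sizes `alpha`/`beta` and the update probabilities `p_attract`/`p_repel` are hypothetical parameter names, not the paper's notation.

```python
import random

def gossip_step(x, p_attract, p_repel, alpha=0.5, beta=0.5):
    """One symmetric update: a random pair attracts, repels, or neglects."""
    i, j = random.sample(range(len(x)), 2)
    r = random.random()
    if r < p_attract:                      # attraction: move towards each other
        xi, xj = x[i], x[j]
        x[i] += alpha * (xj - xi)
        x[j] += alpha * (xi - xj)
    elif r < p_attract + p_repel:          # repulsion: move away from each other
        xi, xj = x[i], x[j]
        x[i] -= beta * (xj - xi)
        x[j] -= beta * (xi - xj)
    # otherwise: neglect, both states stay unchanged

def spread(x):
    """Disagreement measure: range of the node states."""
    return max(x) - min(x)

random.seed(0)
x = [random.random() for _ in range(20)]
for _ in range(20000):
    gossip_step(x, p_attract=0.8, p_repel=0.1)
print(spread(x))  # with attraction dominating, the spread typically shrinks
```

Note that with `alpha = 0.5` an attracting pair jumps exactly to its average, while with `beta = 0.5` a repelling pair doubles its difference, so the balance between the two probabilities governs whether agreement or disagreement prevails, mirroring the critical measure discussed above.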
1208.2773
|
1496408745
|
Record linkage has been extensively used in various data mining applications involving sharing data. While the amount of available data is growing, the concern of disclosing sensitive information poses the problem of utility vs. privacy. In this paper, we study the problem of private record linkage via secure data transformations. In contrast to the existing techniques in this area, we propose a novel approach that provides strong privacy guarantees under the formal framework of differential privacy. We develop an embedding strategy based on frequent variable-length grams mined in a private way from the original data. We also introduce a personalized threshold for matching individual records in the embedded space, which achieves better linkage accuracy than the existing global-threshold approach. Compared with the state-of-the-art secure matching scheme, our approach provides formal, provable privacy guarantees and achieves better scalability while providing comparable utility.
|
In this section, we present an overview of techniques for privacy-preserving record linkage. We describe in detail the secure mapping mechanism proposed by @cite_25 , which is the technique closest to our approach.
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2024652123"
],
"abstract": [
"In many business scenarios, record matching is performed across different data sources with the aim of identifying common information shared among these sources. However such need is often in contrast with privacy requirements concerning the data stored by the sources. In this paper, we propose a protocol for record matching that preserves privacy both at the data level and at the schema level. Specifically, if two sources need to identify their common data, by running the protocol they can compute the matching of their datasets without sharing their data in clear and only sharing the result of the matching. The protocol uses a third party, and maps records into a vector space in order to preserve their privacy. Experimental results show the efficiency of the matching protocol in terms of precision and recall as well as the good computational performance."
]
}
|
1208.2773
|
1496408745
|
Record linkage has been extensively used in various data mining applications involving sharing data. While the amount of available data is growing, the concern of disclosing sensitive information poses the problem of utility vs. privacy. In this paper, we study the problem of private record linkage via secure data transformations. In contrast to the existing techniques in this area, we propose a novel approach that provides strong privacy guarantees under the formal framework of differential privacy. We develop an embedding strategy based on frequent variable-length grams mined in a private way from the original data. We also introduce a personalized threshold for matching individual records in the embedded space, which achieves better linkage accuracy than the existing global-threshold approach. Compared with the state-of-the-art secure matching scheme, our approach provides formal, provable privacy guarantees and achieves better scalability while providing comparable utility.
|
Secure Multiparty Computation (SMC) techniques cast the record linkage problem into a secure communication framework. In this scenario, several parties are involved in the protocol and communicate using cryptography. The key idea is that the computation itself should reveal no more than what may be inferred from the input and output of each party. An important theoretical result in cryptography @cite_7 shows that any computable function can be computed securely in this setting. Motivated by this result, several approaches have been proposed in the literature. For example, when exact matching is considered, the record linkage problem can be interpreted as a set intersection problem @cite_26 . In particular, the work in @cite_2 investigates the SMC approach in privacy-preserving data mining. While in principle the private record linkage problem can be solved using SMC and cryptography, the computational and communication costs of these methods turn out to be too great in real applications.
|
{
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_2"
],
"mid": [
"2088492763",
"2092422002",
"2116025923"
],
"abstract": [
"In this paper we introduce a new tool for controlling the knowledge transfer process in cryptographic protocol design. It is applied to solve a general class of problems which include most of the two-party cryptographic problems in the literature. Specifically, we show how two parties A and B can interactively generate a random integer N = p?q such that its secret, i.e., the prime factors (p, q), is hidden from either party individually but is recoverable jointly if desired. This can be utilized to give a protocol for two parties with private values i and j to compute any polynomially computable functions f(i,j) and g(i,j) with minimal knowledge transfer and a strong fairness property. As a special case, A and B can exchange a pair of secrets sA, sB, e.g. the factorization of an integer and a Hamiltonian circuit in a graph, in such a way that sA becomes computable by B when and only when sB becomes computable by A. All these results are proved assuming only that the problem of factoring large intergers is computationally intractable.",
"Two millionaires wish to know who is richer; however, they do not want to find out inadvertently any additional information about each other’s wealth. How can they carry out such a conversation? This is a special case of the following general problem. Suppose m people wish to compute the value of a function f(x1, x2, x3, . . . , xm), which is an integer-valued function of m integer variables xi of bounded range. Assume initially person Pi knows the value of xi and no other x’s. Is it possible for them to compute the value of f , by communicating among themselves, without unduly giving away any information about the values of their own variables? The millionaires’ problem corresponds to the case when m = 2 and f(x1, x2) = 1 if x1 < x2, and 0 otherwise. In this paper, we will give precise formulation of this general problem and describe three ways of solving it by use of one-way functions (i.e., functions which are easy to evaluate but hard to invert). These results have applications to secret voting, private querying of database, oblivious negotiation, playing mental poker, etc. We will also discuss the complexity question “How many bits need to be exchanged for the computation”, and describe methods to prevent participants from cheating. Finally, we study the question “What cannot be accomplished with one-way functions”. Before describing these results, we would like to put this work in perspective by first considering a unified view of secure computation in the next section.",
"In this paper, we survey the basic paradigms and notions of secure multiparty computation and discuss their relevance to the field of privacy-preserving data mining. In addition to reviewing definitions and constructions for secure multiparty computation, we discuss the issue of efficiency and demonstrate the difficulties involved in constructing highly efficient protocols. We also present common errors that are prevalent in the literature when secure multiparty computation techniques are applied to privacy-preserving data mining. Finally, we discuss the relationship between secure multiparty computation and privacy-preserving data mining, and show which problems it solves and which problems it does not."
]
}
|
1208.2773
|
1496408745
|
Record linkage has been extensively used in various data mining applications involving sharing data. While the amount of available data is growing, the concern of disclosing sensitive information poses the problem of utility vs. privacy. In this paper, we study the problem of private record linkage via secure data transformations. In contrast to the existing techniques in this area, we propose a novel approach that provides strong privacy guarantees under the formal framework of differential privacy. We develop an embedding strategy based on frequent variable-length grams mined in a private way from the original data. We also introduce a personalized threshold for matching individual records in the embedded space, which achieves better linkage accuracy than the existing global-threshold approach. Compared with the state-of-the-art secure matching scheme, our approach provides formal, provable privacy guarantees and achieves better scalability while providing comparable utility.
|
Hybrid methods combine anonymization or secure transformation techniques with SMC in order to reduce the SMC cost. @cite_27 proposed a composite strategy based on SMC and sanitization to achieve a trade-off between privacy and utility. This work was further extended in @cite_10 , which applies differentially private blocking followed by SMC techniques to match record pairs within matched blocks instead of matching all record pairs. Within the blocking framework for record linkage, the work in @cite_5 uses machine learning techniques to define the blocking functions, with promising utility results. Although hybrid techniques provide a good trade-off between privacy and accuracy, the SMC step still involves a high computational cost, and the impact of blocking on linkage accuracy is not well understood.
|
{
"cite_N": [
"@cite_5",
"@cite_27",
"@cite_10"
],
"mid": [
"2095273830",
"1968606833",
"2054514509"
],
"abstract": [
"In this paper, the problem of quickly matching records (i.e., record linkage problem) from two autonomous sources without revealing privacy to the other parties is considered. In particular, our focus is to devise secure blocking scheme to improve the performance of record linkage significantly while being secure. Although there have been works on private record linkage, none has considered adopting the blocking framework. Therefore, our proposed blocking-aware private record linkage can perform large-scale record linkage without revealing privacy. Preliminary experimental results showing the potential of the proposal are reported.",
"Real-world entities are not always represented by the same set of features in different data sets. Therefore matching and linking records corresponding to the same real-world entity distributed across these data sets is a challenging task. If the data sets contain private information, the problem becomes even harder due to privacy concerns. Existing solutions of this problem mostly follow two approaches: sanitization techniques and cryptographic techniques. The former achieves privacy by perturbing sensitive data at the expense of degrading matching accuracy. The later, on the other hand, attains both privacy and high accuracy under heavy communication and computation costs. In this paper, we propose a method that combines these two approaches and enables users to trade off between privacy, accuracy and cost. Experiments conducted on real data sets show that our method has significantly lower costs than cryptographic techniques and yields much more accurate matching results compared to sanitization techniques, even when the data sets are perturbed extensively.",
"Private matching between datasets owned by distinct parties is a challenging problem with several applications. Private matching allows two parties to identify the records that are close to each other according to some distance functions, such that no additional information other than the join result is disclosed to any party. Private matching can be solved securely and accurately using secure multi-party computation (SMC) techniques, but such an approach is prohibitively expensive in practice. Previous work proposed the release of sanitized versions of the sensitive datasets which allows blocking, i.e., filtering out sub-sets of records that cannot be part of the join result. This way, SMC is applied only to a small fraction of record pairs, reducing the matching cost to acceptable levels. The blocking step is essential for the privacy, accuracy and efficiency of matching. However, the state-of-the-art focuses on sanitization based on k-anonymity, which does not provide sufficient privacy. We propose an alternative design centered on differential privacy, a novel paradigm that provides strong privacy guarantees. The realization of the new model presents difficult challenges, such as the evaluation of distance-based matching conditions with the help of only a statistical queries interface. Specialized versions of data indexing structures (e.g., kd-trees) also need to be devised, in order to comply with differential privacy. Experiments conducted on the real-world Census-income dataset show that, although our methods provide strong privacy, their effectiveness in reducing matching cost is not far from that of k-anonymity based counterparts."
]
}
|
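As a concrete (and much simplified) illustration of transformation-based linkage, records can be embedded as sets of character q-grams and compared against a similarity threshold. This sketch uses plain bigram Jaccard similarity with a single global threshold; the paper's actual approach mines frequent variable-length grams under differential privacy and uses personalized per-record thresholds, neither of which is reproduced here.

```python
def qgrams(s, q=2):
    """Set of overlapping q-grams of a string (a common record-linkage embedding)."""
    s = s.lower()
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def jaccard(a, b):
    """Jaccard similarity of two q-gram sets."""
    u = a | b
    return len(a & b) / len(u) if u else 1.0

def match(r1, r2, threshold=0.5):
    """Declare a link when the q-gram similarity exceeds a global threshold."""
    return jaccard(qgrams(r1), qgrams(r2)) >= threshold

print(match("jonathan smith", "jonathon smith"))   # → True  (near-duplicate)
print(match("jonathan smith", "maria garcia"))     # → False (unrelated record)
```

In a private setting the parties would of course not exchange the gram sets in the clear; the embedding and the threshold comparison would be carried out on the transformed representations.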
1208.3291
|
1754442739
|
A decision maker records measurements of a finite-state Markov chain corrupted by noise. The goal is to decide when the Markov chain hits a specific target state. The decision maker can choose from a finite set of sampling intervals to pick the next time to look at the Markov chain. The aim is to optimize an objective comprising false alarm, delay, and cumulative measurement sampling costs. Taking more frequent measurements yields accurate estimates but incurs a higher measurement cost. Making an erroneous decision too soon incurs a false alarm penalty. Waiting too long to declare the target state incurs a delay penalty. What is the optimal sequential strategy for the decision maker? The paper shows that under reasonable conditions, the optimal strategy has the following intuitive structure: when the Bayesian estimate (posterior distribution) of the Markov chain is away from the target state, look less frequently; while if the posterior is close to the target state, look more frequently. Bounds are derived for the optimal strategy. Also, the achievable optimal cost of the sequential detector as a function of transition dynamics and observation distribution is analyzed. The sensitivity of the optimal achievable cost to parameter variations is bounded in terms of the Kullback divergence. To prove the results in this paper, novel stochastic dominance results on the Bayesian filtering recursion are derived. The formulation in this paper generalizes quickest time change detection to consider optimal sampling and also yields useful results in sensor scheduling (active sensing).
|
This paper analyzes the structure of the optimal sampling strategy of the decision maker. The problem is an instance of a partially observed Markov decision process (POMDP) @cite_12 . In general, solving POMDPs and therefore determining the optimal strategy is computationally intractable (PSPACE-hard @cite_1 ). However, returning to the example considered above, intuition suggests that the following strategy would be sensible (recall that the action set @math ): The key point is that such a strategy (choice of sampling interval @math ) is monotonically decreasing as the posterior distribution gets closer to the target state. By using stochastic dominance and lattice programming analysis, this paper shows that under reasonable conditions, the optimal sampling strategy always has this monotone structure. Lattice programming was championed by @cite_4 and provides a general set of sufficient conditions for the existence of monotone strategies in stochastic control problems. This area falls under the general umbrella of monotone comparative statics, which has attracted remarkable interest in economics @cite_23 . Our results apply to general observation distributions (Gaussians, exponentials, Markov-modulated Poisson, discrete memoryless channels, etc.) and multi-state Markov chains.
|
{
"cite_N": [
"@cite_23",
"@cite_4",
"@cite_1",
"@cite_12"
],
"mid": [
"2097801309",
"389907844",
"",
"180325379"
],
"abstract": [
"This paper analyzes monotone comparative statics predictions in several classes of stochastic optimization problems. The main results characterize necessary and sufficient conditions for comparative statics predictions to hold based on properties of primitive functions, that is, utility functions and probability distributions. The results apply when the primitives satisfy one of the following two properties: (i) a single-crossing property, which arises in applications such as portfolio investment problems and auctions, or (ii) log-supermodularity, which arises in the analysis of demand functions, affiliated random variables, stochastic orders, and orders over risk aversion.",
"PrefaceCh. 1Introduction3Ch. 2Lattices, Supermodular Functions, and Related Topics7Ch. 3Optimal Decision Models94Ch. 4Noncooperative Games175Ch. 5Cooperative Games207Bibliography263Index269",
"",
"Automated sequential decision making is crucial in many contexts. In the face of uncertainty, this task becomes even more important, though at the same time, computing optimal decision policies becomes more complex. The more sources of uncertainty there are, the harder the problem becomes to solve. In this work, we look at sequential decision making in environments where the actions have probabilistic outcomes and in which the system state is only partially observable. We focus on using a model called a partially observable Markov decision process (POMDP) and explore algorithms which address computing both optimal and approximate policies for use in controlling processes that are modeled using POMDPs. Although solving for the optimal policy is PSPACE-complete (or worse), the study and improvements of exact algorithms lends insight into the optimal solution structure as well as providing a basis for approximate solutions. We present some improvements, analysis and empirical comparisons for some existing and some novel approaches for computing the optimal POMDP policy exactly. Since it is also hard (NP-complete or worse) to derive close approximations to the optimal solution for POMDPs, we consider a number of approaches for deriving policies that yield sub-optimal control and empirically explore their performance on a range of problems. These approaches borrow and extend ideas from a number of areas; from the more mathematically motivated techniques in reinforcement learning and control theory to entirely heuristic control rules."
]
}
|
1208.3291
|
1754442739
|
A decision maker records measurements of a finite-state Markov chain corrupted by noise. The goal is to decide when the Markov chain hits a specific target state. The decision maker can choose from a finite set of sampling intervals to pick the next time to look at the Markov chain. The aim is to optimize an objective comprising false alarm, delay, and cumulative measurement sampling costs. Taking more frequent measurements yields accurate estimates but incurs a higher measurement cost. Making an erroneous decision too soon incurs a false alarm penalty. Waiting too long to declare the target state incurs a delay penalty. What is the optimal sequential strategy for the decision maker? The paper shows that under reasonable conditions, the optimal strategy has the following intuitive structure: when the Bayesian estimate (posterior distribution) of the Markov chain is away from the target state, look less frequently; while if the posterior is close to the target state, look more frequently. Bounds are derived for the optimal strategy. Also, the achievable optimal cost of the sequential detector as a function of transition dynamics and observation distribution is analyzed. The sensitivity of the optimal achievable cost to parameter variations is bounded in terms of the Kullback divergence. To prove the results in this paper, novel stochastic dominance results on the Bayesian filtering recursion are derived. The formulation in this paper generalizes quickest time change detection to consider optimal sampling and also yields useful results in sensor scheduling (active sensing).
|
We also refer to the seminal work of Moustakides (see @cite_13 and references therein) on event-triggered sampling. Quickest detection has been studied widely; see @cite_21 @cite_6 and references therein. We have recently considered a POMDP approach to quickest detection with social learning @cite_19 , non-linear penalties @cite_3 , and phase-distributed change times. However, in these papers, there is only one continue and one stop action. The results in the current paper are considerably more general due to the propagation of different dynamics for the multiple continue actions. A useful feature of the lattice programming approach @cite_5 @cite_9 @cite_15 used in this paper is that the results apply to general observation noise distributions (Gaussians, exponentials, discrete memoryless channels) and multiple-state Markov chains. Also, the results proved here are valid for finite sample sizes, and no asymptotic approximations in signal-to-noise ratio are used.
|
{
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_5",
"@cite_15",
"@cite_13"
],
"mid": [
"2044549125",
"",
"2102134363",
"1967128863",
"",
"2032509103",
"2004333862",
"2074709698"
],
"abstract": [
"This paper provides sufficient conditions for the optimal value in a discrete-time, finite, partially observed Markov decision process to be monotone on the space of state probability vectors ordered by likelihood ratios. The paper also presents sufficient conditions for the optimal policy to be monotone in a simple machine replacement problem, and, in the general case, for the optimal policy to be bounded from below by an easily calculated monotone function.",
"",
"We show that the optimal decision policy for several types of Bayesian sequential detection problems has a threshold switching curve structure on the space of posterior distributions. This is established by using lattice programming and stochastic orders in a partially observed Markov decision process (POMDP) framework. A stochastic gradient algorithm is presented to estimate the optimal linear approximation to this threshold curve. We illustrate these results by first considering quickest time detection with phase-type distributed change time and a variance stopping penalty. Then it is proved that the threshold switching curve also arises in several other Bayesian decision problems such as quickest transient detection, exponential delay (risk-sensitive) penalties, stopping time problems in social learning, and multi-agent scheduling in a changing world. Using Blackwell dominance, it is shown that for dynamic decision making problems, the optimal decision policy is lower bounded by a myopic policy. Finally, it is shown how the achievable cost of the optimal decision policy varies with change time distribution by imposing a partial order on transition matrices.",
"The optimal detection procedure for detecting changes in independent and identically distributed (i.i.d.) sequences in a Bayesian setting was derived by Shiryaev in the 1960s. However, the analysis...",
"",
"This paper examines monotonicity results for a fairly general class of partially observable Markov decision processes. When there are only two actual states in the system and when the actions taken are primarily intended to improve the system, rather than to inspect it, we give reasonable conditions which ensure that the optimal reward function and the optimal action are both monotone in the current state of information. Examples of maintenance systems and advertising systems for which our results hold are given. Finally, we examine the case where there are three or more actual states and indicate the difficulties encountered when we attempt to extend the monotonicity results to this situation.",
"A general partially observed control model with discrete time parameter is investigated. Our main interest concerns monotonicity results and bounds for the value functions and for optimal policies. In particular, we show how the value functions depend on the observation kernels and we present conditions for a lower bound of an optimal policy. Our approach is based on two multivariate stochastic orderings: theTP2 ordering and the Blackwell ordering.",
"We propose a new framework for cooperative spectrum sensing in cognitive radio networks, that is based on a novel class of nonuniform samplers, called the event-triggered samplers, and sequential detection. In the proposed scheme, each secondary user (SU) computes its local sensing decision statistic based on its own channel output; and whenever such decision statistic crosses certain predefined threshold values, the secondary user will send one (or several) bit of information to the fusion center (FC). The FC asynchronously receives the bits from different SUs and updates the global sensing decision statistic to perform a sequential probability ratio test (SPRT), to reach a sensing decision. We provide an asymptotic analysis for the above scheme, and under different conditions, we compare it against the cooperative sensing scheme that is based on traditional uniform sampling and sequential detection. Simulation results show that the proposed scheme, using even 1 bit, can outperform its uniform sampling counterpart that uses infinite number of bits under changing target error probabilities, SNR values, and number of SUs."
]
}
|
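The structure discussed in the record above — sample more frequently when the Bayesian posterior is close to the target state — can be sketched with a toy two-state HMM filter driving a threshold sampling policy. The transition matrix, observation likelihoods, and the `near`/`short`/`long` values below are hypothetical illustration values, not taken from the paper, and the hard threshold is only a crude stand-in for the monotone optimal policy whose existence the paper establishes.

```python
import random

P = [[0.95, 0.05],   # transition probabilities; state 1 is the target
     [0.0,  1.0]]    # the target state is absorbing here
B = [[0.8, 0.2],     # observation likelihoods B[state][obs]
     [0.3, 0.7]]

def predict(pi, steps):
    """Chapman-Kolmogorov prediction of the posterior over `steps` time units."""
    for _ in range(steps):
        pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
              pi[0] * P[0][1] + pi[1] * P[1][1]]
    return pi

def update(pi, obs):
    """Bayesian measurement update of the HMM filter."""
    w = [pi[0] * B[0][obs], pi[1] * B[1][obs]]
    s = w[0] + w[1]
    return [w[0] / s, w[1] / s]

def next_interval(pi, near=0.5, short=1, long=5):
    """Monotone sampling policy: look more often near the target state."""
    return short if pi[1] >= near else long

# Toy run: sample the chain until the posterior pins down the target state.
random.seed(3)
state, pi, t = 0, [1.0, 0.0], 0
while pi[1] < 0.9 and t < 200:
    dt = next_interval(pi)
    for _ in range(dt):                   # the chain keeps evolving between looks
        state = 1 if state == 1 or random.random() < P[0][1] else 0
    obs = 1 if random.random() < B[state][1] else 0
    pi = update(predict(pi, dt), obs)
    t += dt
print("declared at t =", t, "posterior =", round(pi[1], 2))
```

The interval chosen by `next_interval` decreases as the posterior mass on the target state grows, which is exactly the monotone structure (in a threshold form) that the lattice-programming analysis above guarantees under suitable conditions.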
1208.3206
|
2107050715
|
In the last decades cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent particle based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques, common in SPH (Smoothed Particle Hydrodynamics) codes. This paper proposes three GPU-assisted rendering approaches, based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular they preserve caustics, regions of high density that emerge when several streams of dark matter particles share the same location in space, indicating the formation of structures like sheets, filaments and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.
|
There has also been extensive work on the visualization of data on tetrahedral grids. Cell-projection methods usually employ the Projected Tetrahedra (PT) algorithm, which decomposes each tetrahedron into a set of triangles and assigns scalar values for the entry and exit points of the viewing rays to each vertex @cite_22 . A GPU-assisted method for decomposing the tetrahedra into triangles using the PT algorithm was presented by Wylie et al. @cite_29 . An artifact-free PT rendering approach using a logarithmically scaled pre-integration table was proposed by Kraus et al. @cite_11 . Maximo et al. developed a hardware-assisted PT approach using CUDA for visibility sorting @cite_3 . GPU-assisted raycasting methods for tetrahedral grids have, for example, been discussed by Weiler et al. @cite_21 and Espinha et al. @cite_6 .
|
{
"cite_N": [
"@cite_22",
"@cite_29",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_11"
],
"mid": [
"2082246999",
"2143666445",
"2141763411",
"",
"2151657122",
"2111871057"
],
"abstract": [
"One method of directly rendering a three-dimensional volume of scalar data is to project each cell in a volume onto the screen. Rasterizing a volume cell is more complex than rasterizing a polygon. A method is presented that approximates tetrahedral volume cells with hardware renderable transparent triangles. This method produces results which are visually similar to more exact methods for scalar volume rendering, but is faster and has smaller memory requirements. The method is best suited for display of smoothly-changing data.",
"Projective methods for volume rendering currently represent the best approach for interactive visualization of unstructured data sets. We present a technique for tetrahedral projection using the programmable vertex shaders on current generation commodity graphics cards. The technique is based on Shirley and Tuchman's Projected Tetrahedra (PT) algorithm and allows tetrahedral elements to be volume scan converted within the graphics processing unit. Our technique requires no pre-processing of the data and no additional data structures. Our initial implementation allows interactive viewing of large unstructured datasets on a desktop personal computer.",
"We present the first implementation of a volume ray casting algorithm for tetrahedral meshes running on off-the-shelf programmable graphics hardware. Our implementation avoids the memory transfer bottleneck of the graphics bus since the complete mesh data is stored in the local memory of the graphics adapter and all computations, in particular ray traversal and ray integration, are performed by the graphics processing unit. Analogously to other ray casting algorithms, our algorithm does not require an expensive cell sorting. Provided that the graphics adapter offers enough texture memory, our implementation performs comparable to the fastest published volume rendering algorithms for unstructured meshes. Our approach works with cyclic and or non-convex meshes and supports early ray termination. Accurate ray integration is guaranteed by applying pre-integrated volume rendering. In order to achieve almost interactive modifications of transfer functions, we propose a new method for computing three-dimensional pre-integration tables.",
"",
"In this paper, we address the problem of the interactive volume rendering of unstructured meshes and propose a new hardware-based ray-casting algorithm using partial pre-integration. The proposed algorithm makes use of modern programmable graphics cards and achieves rendering rates competitive with full pre-integration approaches (up to 2M tet/sec). This algorithm allows the interactive modification of the transfer function and results in high-quality images, since no artifact due to under-sampling the full numerical pre-integration exists. We also compare our approach with implementations of the cell-projection algorithm and demonstrate that ray-casting can perform better than cell projection, because it eliminates the high costs involved in ordering and transferring data.",
"Hardware-accelerated direct volume rendering of unstructured volumetric meshes is often based on tetrahedral cell projection, in particular, the Projected Tetrahedra (PT) algorithm and its variants. Unfortunately, even implementations of the most advanced variants of the PT algorithm are very prone to rendering artifacts. In this work, we identify linear interpolation in screen coordinates as a cause for significant rendering artifacts and implement the correct perspective interpolation for the PT algorithm with programmable graphics hardware. We also demonstrate how to use features of modern graphics hardware to improve the accuracy of the coloring of individual tetrahedra and the compositing of the resulting colors, in particular, by employing a logarithmic scale for the pre-integrated color lookup table, using textures with high color resolution, rendering to floating-point color buffers, and alpha dithering. Combined with a correct visibility ordering, these techniques result in the first implementation of the PT algorithm without objectionable rendering artifacts. Apart from the important improvement in rendering quality, our approach also provides a test bed for different implementations of the PT algorithm that allows us to study the particular rendering artifacts introduced by these variants."
]
}
|
1208.3206
|
2107050715
|
In the last decades, cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent particle-based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point-based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques, common in SPH (Smoothed Particle Hydrodynamics) codes. This paper proposes three GPU-assisted rendering approaches, based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular, they preserve caustics, regions of high density that emerge when several streams of dark matter particles share the same location in space, indicating the formation of structures like sheets, filaments, and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.
|
An alternative method to render tetrahedral grids is to resample the data to grid structures that are more directly supported by current graphics hardware architectures. Westermann et al. @cite_7 presented a multi-pass algorithm that resamples tetrahedral meshes onto a Cartesian grid by efficiently determining the intersections between planes through the centers of slabs of cells of the target grid using the ST (Shirley-Tuchman) classification and OpenGL's alpha test to reject fragments outside the intersection regions. Weiler et al. @cite_8 proposed a slice-based resampling technique to a multi-resolution grid. It discards fragments outside the intersection regions between the slice and the tetrahedra based on the barycentric coordinates of each fragment, which are obtained from a texture lookup.
|
{
"cite_N": [
"@cite_7",
"@cite_8"
],
"mid": [
"2121768666",
"2533408896"
],
"abstract": [
"In this paper we propose a technique for resampling scalar fields given on unstructured tetrahedral grids. This technique takes advantage of hardware accelerated polygon rendering and 2D texture mapping and thus avoids any sorting of the tetrahedral elements. Using this technique, we have built a visualization tool that enables us to either resample the data onto arbitrarily sized Cartesian grids, or to directly render the data on a slice-by-slice basis. Since our approach does not rely on any pre-processing of the data, it can be utilized efficiently for the display of time-dependent unstructured grids where geometry as well as topology change over time.",
"In this paper we address the problem of interactively resampling unstructured grids. Three algorithms are presented. They all allow adaptive resampling of an unstructured grid on a multiresolution hierarchy of arbitrarily sized cartesian grids according to a varying element size. Two of the algorithms presented take advantage of hardware accelerated polygon rendering and 2D texture mapping. In exploiting new features of modern PC graphics adapters, the first algorithm tries to significantly minimize the number of polygons to be rendered. Reducing rasterization requirements is the main goal of the second algorithm, which distributes the computational workload differently between the main processor and the graphics chip. By comparing them to a new pure software approach, an optimal software-hardware balance is studied. We end up with a hybrid approach which greatly improves the performance of hardware-assisted resampling by involving the main processor to a higher degree and thus enabling resampling at nearly interactive rates."
]
}
|
1208.3261
|
1563014059
|
We prove that under certain mild assumptions, the entropy rate of a hidden Markov chain, observed when passing a finite-state stationary Markov chain through a discrete-time continuous-output channel, is jointly analytic as a function of the input Markov chain parameters and the channel parameters. In particular, as consequences of the main theorems, we obtain analyticity for the entropy rate associated with representative channels: Cauchy and Gaussian.
|
Entropy rate for hidden Markov chains is notoriously difficult to compute, even in the case where both input and output alphabets are finite and time is discrete. However, much progress has recently been made in this setting; see, for instance, @cite_10 @cite_1 @cite_3 @cite_2 and the references therein.
|
{
"cite_N": [
"@cite_1",
"@cite_10",
"@cite_3",
"@cite_2"
],
"mid": [
"2043591730",
"2152773357",
"",
"2127434124"
],
"abstract": [
"We study the entropy rate of a hidden Markov process (HMP) defined by observing the output of a binary symmetric channel whose input is a first-order binary Markov process. Despite the simplicity of the models involved, the characterization of this entropy is a long standing open problem. By presenting the probability of a sequence under the model as a product of random matrices, one can see that the entropy rate sought is equal to a top Lyapunov exponent of the product. This offers an explanation for the elusiveness of explicit expressions for the HMP entropy rate, as Lyapunov exponents are notoriously difficult to compute. Consequently, we focus on asymptotic estimates, and apply the same product of random matrices to derive an explicit expression for a Taylor approximation of the entropy rate with respect to the parameter of the binary symmetric channel. The accuracy of the approximation is validated against empirical simulation results. We also extend our results to higher-order Markov processes and to Renyi entropies of any order.",
"We study the classical problem of noisy constrained capacity in the case of the binary symmetric channel (BSC), namely, the capacity of a BSC whose inputs are sequences chosen from a constrained set. Motivated by a result of Ordentlich and Weissman in [28], we derive an asymptotic formula (when the noise parameter is small) for the entropy rate of a hidden Markov chain, observed when a Markov chain passes through a BSC. Using this result we establish an asymptotic formula for the capacity of a BSC with input process supported on an irreducible finite-type constraint, as the noise parameter tends to zero. Let X, Y be discrete random variables with alphabets X, Y and joint probability mass function p_{X,Y}(x,y) ≜ P(X = x, Y = y), x ∈ X, y ∈ Y (for notational simplicity, we will write p(x,y) rather than p_{X,Y}(x,y), and similarly p(x), p(y) rather than p_X(x), p_Y(y), when it is clear from the context). The entropy H(X) of the discrete random variable X, which measures the level of uncertainty of X, is defined as (in this paper log is taken to mean the natural logarithm)",
"",
"A recent result presented the expansion for the entropy rate of a hidden Markov process (HMP) as a power series in the noise variable epsi. The coefficients of the expansion around the noiseless (epsi=0) limit were calculated up to 11th order, using a conjecture that relates the entropy rate of an HMP to the entropy of a process of finite length (which is calculated analytically). In this letter, we generalize and prove the conjecture and discuss its theoretical and practical consequences"
]
}
|
1208.3261
|
1563014059
|
We prove that under certain mild assumptions, the entropy rate of a hidden Markov chain, observed when passing a finite-state stationary Markov chain through a discrete-time continuous-output channel, is jointly analytic as a function of the input Markov chain parameters and the channel parameters. In particular, as consequences of the main theorems, we obtain analyticity for the entropy rate associated with representative channels: Cauchy and Gaussian.
|
To the best of our knowledge, the results in this paper, together with those in @cite_0 , are among the first results establishing analyticity of continuous-state hidden Markov chains. Given the interest in the counterpart results for the discrete-state setting, we expect that such results will be of significance in the continuous-state setting as well.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2159877461"
],
"abstract": [
"We prove that under mild positivity assumptions, the entropy rate of a continuous-state hidden Markov chain, observed when passing a finite-state Markov chain through a discrete-time continuous-output channel, is analytic as a function of the transition probabilities of the underlying Markov chain. We further prove that the entropy rate of a continuous-state hidden Markov chain, observed when passing a mixing finite-type constrained Markov chain through a discrete-time Gaussian channel, is smooth as a function of the transition probabilities of the underlying Markov chain."
]
}
|
1208.3252
|
2017807702
|
Most of the conventional models for opinion dynamics mainly account for a fully local influence, where myopic agents decide their actions after they interact with other agents that are adjacent to them. For example, in the case of social interactions, this includes family, friends, and other immediate strong social ties. The model proposed in this paper embodies a global influence as well, where by global we mean that each node also observes a sample of the average behavior of the entire population; e.g., in the social example, people observe other people on the streets, subway, and other social venues. We consider the case where nodes have dichotomous states; examples of applications include elections with two major parties, whether or not to adopt a new technology or product, and any yes/no opinion such as in voting on a referendum. The dynamics of states on a network with arbitrary degree distribution are studied. For a given initial condition, we find the probability to reach consensus on each state and the expected time to reach consensus. To model mass media, the effect of an exogenous bias on the average orientation of the system is investigated. To do so, we add an external field to the model that favors one of the states over the other. This field interferes with the regular decision process of each node and creates a constant probability to lean towards one of the states. We solve for the average state of the system as a function of time for given initial conditions. Then anti-conformists (stubborn nodes who never revise their states) are added to the network, in an effort to circumvent the external bias. We find necessary conditions on the number of these defiant nodes required to cancel the effect of the external bias. Our analysis is based on a mean-field approximation of the agent opinions.
|
There is a growing literature on opinion dynamics and related models, and we review the most relevant related work here. Different opinion dynamics models have different interaction processes and different updating schemes, which are contingent on the specificities of the problem. For example, one can choose random blocks of adjacent nodes who enforce their opinion on neighbors under certain conditions @cite_48 @cite_6 @cite_51 . In the so-called voter model, at each timestep a randomly chosen node copies the state of a randomly chosen neighbor @cite_39 @cite_19 @cite_3 @cite_58 @cite_10 @cite_47 @cite_13 . Conversely, in the reverse voter model, the randomly chosen node imposes its state on a randomly chosen neighbor @cite_49 .
|
{
"cite_N": [
"@cite_13",
"@cite_48",
"@cite_10",
"@cite_6",
"@cite_39",
"@cite_3",
"@cite_19",
"@cite_49",
"@cite_47",
"@cite_58",
"@cite_51"
],
"mid": [
"",
"1982839492",
"",
"1677194921",
"2241340194",
"",
"",
"1494224459",
"",
"",
"2043833398"
],
"abstract": [
"",
"A simple Ising spin model which can describe a mechanism of making a decision in a closed community is proposed. It is shown via standard Monte Carlo simulations that very simple rules lead to rather complicated dynamics and to a power law in the decision time distribution. It is found that a closed community has to evolve either to a dictatorship or a stalemate state (inability to take any common decision). A common decision can be taken in a \"democratic way\" only by an open community.",
"",
"In 2000 we proposed a sociophysics model of opinion formation, which was based on the trade union maxim \"United we Stand, Divided we Fall\" (USDF) and later, due to Dietrich Stauffer, became known as the Sznajd model (SM). The main difference between SM and voter or Ising-type models is that information flows outward. In this paper we review the modifications and applications of SM that have been proposed in the literature.",
"We introduce a two-state opinion dynamics model where agents evolve by majority rule. In each update, a group of agents is specified whose members then all adopt the local majority state. In the mean-field limit, where a group consists of randomly selected agents, consensus is reached in a time that scales @math , where @math is the number of agents. On finite-dimensional lattices, where a group is a contiguous cluster, the consensus time fluctuates strongly between realizations and grows as a dimension-dependent power of @math . The upper critical dimension appears to be larger than 4. The final opinion always equals that of the initial majority except in one dimension.",
"",
"",
"We introduce and study the reverse voter model, a dynamics for spin variables similar to the well‐known voter dynamics. The difference is in the way neighbors influence each other: once a node is selected and one among its neighbors chosen, the neighbor is made equal to the selected node, while in the usual voter dynamics the update goes in the opposite direction. The reverse voter dynamics is studied analytically, showing that on networks with degree distribution decaying as k^{-v}, the time to reach consensus is linear in the system size N for all v > 2. The consensus time for link‐update voter dynamics is computed as well. We verify the results numerically on a class of uncorrelated scale‐free graphs.",
"",
"",
"In this work we consider the influence of mass media in the dynamics of the two-dimensional Sznajd model. This influence acts as an external field, and it is introduced in the model by means of a probability p of the agents to follow the media opinion. We performed Monte Carlo simulations on square lattices with different sizes, and our numerical results suggest a change in the critical behavior of the model, with the absence of the usual phase transition for p ≳ 0.18. Another effect of the probability p is to decrease the average relaxation times τ, which are log-normally distributed, as in the standard model. In addition, the τ values depend on the lattice size L in a power-law form, τ ∼ L^α, where the power-law exponent depends on the probability p."
]
}
|
1208.3252
|
2017807702
|
Most of the conventional models for opinion dynamics mainly account for a fully local influence, where myopic agents decide their actions after they interact with other agents that are adjacent to them. For example, in the case of social interactions, this includes family, friends, and other immediate strong social ties. The model proposed in this paper embodies a global influence as well, where by global we mean that each node also observes a sample of the average behavior of the entire population; e.g., in the social example, people observe other people on the streets, subway, and other social venues. We consider the case where nodes have dichotomous states; examples of applications include elections with two major parties, whether or not to adopt a new technology or product, and any yes/no opinion such as in voting on a referendum. The dynamics of states on a network with arbitrary degree distribution are studied. For a given initial condition, we find the probability to reach consensus on each state and the expected time to reach consensus. To model mass media, the effect of an exogenous bias on the average orientation of the system is investigated. To do so, we add an external field to the model that favors one of the states over the other. This field interferes with the regular decision process of each node and creates a constant probability to lean towards one of the states. We solve for the average state of the system as a function of time for given initial conditions. Then anti-conformists (stubborn nodes who never revise their states) are added to the network, in an effort to circumvent the external bias. We find necessary conditions on the number of these defiant nodes required to cancel the effect of the external bias. Our analysis is based on a mean-field approximation of the agent opinions.
|
Learning and trust have been incorporated into these models, where each node has an estimate of how reliable its observations might be @cite_56 @cite_22 @cite_14 @cite_45 @cite_25 @cite_52 . In some models, nodes only interact with neighbors whose opinions are close to their own @cite_17 @cite_33 @cite_21 @cite_46 . In @cite_43 , agents have inertia in the sense that the longer they have held a particular state, the less probable it is for them to deviate from it. The reader is also referred to @cite_15 for a broad review.
|
{
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_33",
"@cite_15",
"@cite_21",
"@cite_52",
"@cite_56",
"@cite_43",
"@cite_45",
"@cite_46",
"@cite_25",
"@cite_17"
],
"mid": [
"",
"",
"",
"2058105398",
"",
"",
"2110152576",
"2106427318",
"",
"",
"2089209982",
"2169940073"
],
"abstract": [
"",
"",
"",
"Statistical physics has proven to be a fruitful framework to describe phenomena outside the realm of traditional physics. Recent years have witnessed an attempt by physicists to study collective phenomena emerging from the interactions of individuals as elementary units in social structures. A wide list of topics are reviewed ranging from opinion and cultural and language dynamics to crowd behavior, hierarchy formation, human dynamics, and social spreading. The connections between these problems and other, more traditional, topics of statistical physics are highlighted. Comparison of model results with empirical data from social systems are also emphasized.",
"",
"",
"We study the perfect Bayesian equilibrium of a model of learning over a general social network. Each individual receives a signal about the underlying state of the world, observes the past actions of a stochastically-generated neighborhood of individuals, and chooses one of two possible actions. The stochastic process generating the neighborhoods defines the network topology (social network). The special case where each individual observes all past actions has been widely studied in the literature. We characterize pure-strategy equilibria for arbitrary stochastic and deterministic social networks and characterize the conditions under which there will be asymptotic learning -- that is, the conditions under which, as the social network becomes large, individuals converge (in probability) to taking the right action. We show that when private beliefs are unbounded (meaning that the implied likelihood ratios are unbounded), there will be asymptotic learning as long as there is some minimal amount of \"expansion in observations\". Our main theorem shows that when the probability that each individual observes some other individual from the recent past converges to one as the social network becomes large, unbounded private beliefs are sufficient to ensure asymptotic learning. This theorem therefore establishes that, with unbounded private beliefs, there will be asymptotic learning in almost all reasonable social networks. We also show that for most network topologies, when private beliefs are bounded, there will not be asymptotic learning. In addition, in contrast to the special case where all past actions are observed, asymptotic learning is possible even with bounded beliefs in certain stochastic network topologies.",
"For the voter model, we study the effect of a memory-dependent transition rate. We assume that the transition of a spin into the opposite state decreases with the time it has been in its current state. Counterintuitively, we find that the time to reach a macroscopically ordered state can be accelerated by slowing down the microscopic dynamics in this way. This holds for different network topologies, including fully connected ones. We find that the ordering dynamics is governed by two competing processes which either stabilize the majority or the minority state. If the first one dominates, it accelerates the ordering of the system. The conclusions of this Letter are not restricted to the voter model, but remain valid to many other spin systems as well.",
"",
"",
"We introduce a statistical-physics model for opinion dynamics on random networks where agents adopt the opinion held by the majority of their direct neighbors only if the fraction of these neighbors exceeds a certain threshold, p(u). We find a transition from total final consensus to a mixed phase where opinions coexist amongst the agents. The relevant parameters are the relative sizes in the initial opinion distribution within the population and the connectivity of the underlying network. As the order parameter we define the asymptotic state of opinions. In the phase diagram we find regions of total consensus and a mixed phase. As the \"laggard parameter\" pu increases the regions of consensus shrink. In addition we introduce rewiring of the underlying network during the opinion formation process and discuss the resulting consequences in the phase diagram. Copyright (c) EPLA, 2008.",
"We study opinion dynamics models where agents evolve via repeated pairwise interactions. In the compromise model, agents with sufficiently close real-valued opinions average their opinions. A steady state is reached with a finite number of isolated, noninteracting opinion clusters (“parties”). As the initial opinion range increases, the number of such parties undergoes a periodic bifurcation sequence, with alternating major and minor parties. In the constrained voter model, there are leftists, centrists, and rightists. A centrist and an extremist can both become centrists or extremists in an interaction, while leftists and rightists do not affect each other. The final state is either consensus or a frozen population of leftists and rightists. The evolution in one dimension is mapped onto a constrained spin-1 Ising chain with zero-temperature Glauber kinetics. The approach to the final state exhibits a nonuniversal long-time tail."
]
}
|
1208.3252
|
2017807702
|
Most of the conventional models for opinion dynamics mainly account for a fully local influence, where myopic agents decide their actions after they interact with other agents that are adjacent to them. For example, in the case of social interactions, this includes family, friends, and other immediate strong social ties. The model proposed in this paper embodies a global influence as well, where by global we mean that each node also observes a sample of the average behavior of the entire population; e.g., in the social example, people observe other people on the streets, subway, and other social venues. We consider the case where nodes have dichotomous states; examples of applications include elections with two major parties, whether or not to adopt a new technology or product, and any yes/no opinion such as in voting on a referendum. The dynamics of states on a network with arbitrary degree distribution are studied. For a given initial condition, we find the probability to reach consensus on each state and the expected time to reach consensus. To model mass media, the effect of an exogenous bias on the average orientation of the system is investigated. To do so, we add an external field to the model that favors one of the states over the other. This field interferes with the regular decision process of each node and creates a constant probability to lean towards one of the states. We solve for the average state of the system as a function of time for given initial conditions. Then anti-conformists (stubborn nodes who never revise their states) are added to the network, in an effort to circumvent the external bias. We find necessary conditions on the number of these defiant nodes required to cancel the effect of the external bias. Our analysis is based on a mean-field approximation of the agent opinions.
|
In @cite_35 , the pure voter model on heterogeneous graphs is solved under the mean-field assumption. Nodes with identical degrees are considered to be indistinguishable in the dynamics, and the connection probability of each pair of nodes is proportional to their degrees. The probability to reach consensus on each of the states and the expected time to reach consensus are approximated. In @cite_8 , "intrinsic flip" rates are heterogeneous, so that some nodes adopt new opinions more frequently than others.
|
{
"cite_N": [
"@cite_35",
"@cite_8"
],
"mid": [
"2150041461",
"2130469453"
],
"abstract": [
"We study the voter model on heterogeneous graphs. We exploit the nonconservation of the magnetization to characterize how consensus is reached. For a network of @math nodes with an arbitrary but uncorrelated degree distribution, the mean time to reach consensus @math scales as @math , where @math is the @math th moment of the degree distribution. For a power-law degree distribution @math , @math thus scales as @math for @math , as @math for @math , as @math for @math , as @math for @math , and as @math for @math . These results agree with simulation data for networks with both uncorrelated and correlated node degrees.",
"We introduce the heterogeneous voter model (HVM), in which each agent has its own intrinsic rate to change state, reflective of the heterogeneity of real people, and the partisan voter model (PVM), in which each agent has an innate and fixed preference for one of two possible opinion states. For the HVM, the time until consensus is reached is much longer than in the classic voter model. For the PVM in the mean-field limit, a population evolves to a preference-based state, where each agent tends to be aligned with its internal preference. For finite populations, discrete fluctuations ultimately lead to consensus being reached in a time that scales exponentially with population size."
]
}
|
1208.2870
|
2953086893
|
An extensive body of research deals with estimating the correlation and the Hurst parameter of Internet traffic traces. The significance of these statistics is due to their fundamental impact on network performance. The coverage of Internet traffic traces is, however, limited since acquiring such traces is challenging with respect to, e.g., confidentiality, logging speed, and storage capacity. In this work, we investigate how the correlation of Internet traffic can be reliably estimated from random traffic samples. These samples are observed either by passive monitoring within the network, or otherwise by active packet probes at end systems. We analyze random sampling processes with different inter-sample distributions and show how to obtain asymptotically unbiased estimates from these samples. We quantify the inherent limitations that are due to limited observations and explore the influence of various parameters, such as sampling intensity, network utilization, or Hurst parameter on the estimation accuracy. We design an active probing method which enables simple and lightweight traffic sampling without support from the network. We verify our approach in a controlled network environment and present comprehensive Internet measurements. We find that the correlation exhibits properties such as long range dependence as well as periodicities and that it differs significantly across Internet paths and observation times.
|
In this work, we focus on the autocovariance structure of the traffic. Our goal is to infer this autocovariance from traffic observations and, in particular, to estimate the Hurst parameter @math from the slope of @math on a log-log scale. Numerous other methods exist for estimating the Hurst parameter from long-range dependent (LRD) and self-similar time series @cite_30 @cite_25 @cite_15 .
|
{
"cite_N": [
"@cite_30",
"@cite_15",
"@cite_25"
],
"mid": [
"2319085245",
"2164214015",
"2165773639"
],
"abstract": [
"",
"A joint estimator is presented for the two parameters that define the long-range dependence phenomenon in the simplest case. The estimator is based on the coefficients of a discrete wavelet decomposition, improving a wavelet-based estimator of the scaling parameter (Abry and Veitch 1998), as well as extending it to include the associated power parameter. An important feature is its conceptual and practical simplicity, consisting essentially in measuring the slope and the intercept of a linear fit after a discrete wavelet transform is performed, a very fast (O(n)) operation. Under well-justified technical idealizations the estimator is shown to be unbiased and of minimum or close to minimum variance for the scale parameter, and asymptotically unbiased and efficient for the second parameter. Through theoretical arguments and numerical simulations it is shown that in practice, even for small data sets, the bias is very small and the variance close to optimal for both parameters. Closed-form expressions are given for the covariance matrix of the estimator as a function of data length, and are shown by simulation to be very accurate even when the technical idealizations are not satisfied. Comparisons are made against two maximum-likelihood estimators. In terms of robustness and computational cost the wavelet estimator is found to be clearly superior and statistically its performance is comparable. We apply the tool to the analysis of Ethernet teletraffic data, completing an earlier study on the scaling parameter alone.",
"Various methods for estimating the self-similarity parameter and or the intensity of long-range dependence in a time series are available. Some are more reliable than others. To discover the ones that work best, we apply the different methods to simulated sequences of fractional Gaussian noise and fractional ARIMA (0, d, 0). We also provide here a theoretical justification for the method of residuals of regression."
]
}
|
1208.2870
|
2953086893
|
An extensive body of research deals with estimating the correlation and the Hurst parameter of Internet traffic traces. The significance of these statistics is due to their fundamental impact on network performance. The coverage of Internet traffic traces is, however, limited since acquiring such traces is challenging with respect to, e.g., confidentiality, logging speed, and storage capacity. In this work, we investigate how the correlation of Internet traffic can be reliably estimated from random traffic samples. These samples are observed either by passive monitoring within the network, or otherwise by active packet probes at end systems. We analyze random sampling processes with different inter-sample distributions and show how to obtain asymptotically unbiased estimates from these samples. We quantify the inherent limitations that are due to limited observations and explore the influence of various parameters, such as sampling intensity, network utilization, or Hurst parameter on the estimation accuracy. We design an active probing method which enables simple and lightweight traffic sampling without support from the network. We verify our approach in a controlled network environment and present comprehensive Internet measurements. We find that the correlation exhibits properties such as long range dependence as well as periodicities and that it differs significantly across Internet paths and observation times.
|
Sampling is widely used to reduce the data processing and storage requirements as well as to circumvent problems such as system inaccessibility and hardware access latency. A fundamental result often employed in the sampling context is known as PASTA, Poisson Arrivals See Time Averages @cite_42 . PASTA states that the portion of Poisson arrivals that see a system in a certain state corresponds, on average, to the portion of time the system spends in that state.
|
{
"cite_N": [
"@cite_42"
],
"mid": [
"2125569171"
],
"abstract": [
"In many stochastic models, particularly in queueing theory, Poisson arrivals both observe (see) a stochastic process and interact with it. In particular cases and or under restrictive assumptions it has been shown that the fraction of arrivals that see the process in some state is equal to the fraction of time the process is in that state. In this paper, we present a proof of this result under one basic assumption: the process being observed cannot anticipate the future jumps of the Poisson process."
]
}
|
1208.2870
|
2953086893
|
An extensive body of research deals with estimating the correlation and the Hurst parameter of Internet traffic traces. The significance of these statistics is due to their fundamental impact on network performance. The coverage of Internet traffic traces is, however, limited since acquiring such traces is challenging with respect to, e.g., confidentiality, logging speed, and storage capacity. In this work, we investigate how the correlation of Internet traffic can be reliably estimated from random traffic samples. These samples are observed either by passive monitoring within the network, or otherwise by active packet probes at end systems. We analyze random sampling processes with different inter-sample distributions and show how to obtain asymptotically unbiased estimates from these samples. We quantify the inherent limitations that are due to limited observations and explore the influence of various parameters, such as sampling intensity, network utilization, or Hurst parameter on the estimation accuracy. We design an active probing method which enables simple and lightweight traffic sampling without support from the network. We verify our approach in a controlled network environment and present comprehensive Internet measurements. We find that the correlation exhibits properties such as long range dependence as well as periodicities and that it differs significantly across Internet paths and observation times.
|
Further, the authors of @cite_29 establish general conditions under which Arrivals See Time Averages (ASTA) holds, i.e., bias-free estimates are not limited to Poisson sampling. In a recent work, the authors of @cite_36 coined the term NIMASTA, i.e., Non-intrusive Mixing Arrivals See Time Averages, in the context of network measurements. Using an argument on joint ergodicity, the authors prove an almost sure convergence of , where @math is a sample of the process @math at time @math and @math is a general positive function of @math . The sampling times @math for @math are chosen according to a sampling process. The target metric is specified by the chosen function @math . Eq. is satisfied when the process @math is ergodic and the sampling process is mixing @cite_36 . The authors of @cite_0 show that Poisson sampling, though bias-free, does not guarantee minimum-variance estimates.
|
{
"cite_N": [
"@cite_36",
"@cite_29",
"@cite_0"
],
"mid": [
"2079941176",
"2032237051",
"2158778126"
],
"abstract": [
"Poisson arrivals see time averages (PASTA) is a well-known property applicable to many stochastic systems. In active probing, PASTA is invoked to justify the sending of probe packets (or trains) at Poisson times in a variety of contexts. However, due to the diversity of aims and analysis techniques used in active probing, the benefits of Poisson-based measurement, and the utility and role of PASTA, are unclear. Using a combination of rigorous results and carefully constructed examples and counterexamples, we map out the issues involved and argue that PASTA is of very limited use in active probing. In particular, Poisson probes are not unique in their ability to sample without bias. Furthermore, PASTA ignores the issue of estimation variance and the central need for an inversion phase to estimate the quantity of interest based on what is directly observable. We give concrete examples of when Poisson probes should not be used, explain why, and offer initial guidelines on suitable alternative sending processes.",
"We investigate when Arrivals See Time Averages ASTA in a stochastic model; i.e., when the stationary distribution of an embedded sequence, obtained by observing a continuous-time stochastic process just prior to the points arrivals of an associated point process, coincides with the stationary distribution of the observed process. We also characterize the relation between the two distributions when ASTA does not hold. We introduce a Lack of Bias Assumption LBA which stipulates that, at any time, the conditional intensity of the point process, given the present state of the observed process, be independent of the state of the observed process. We show that LBA, without the Poisson assumption, is necessary and sufficient for ASTA in a stationary process framework. Consequently, LBA covers known examples of non-Poisson ASTA, such as certain flows in open Jackson queueing networks, as well as the familiar Poisson case PASTA. We also establish results to cover the case in which the process is observed just after the points, e.g., when departures see time averages. Finally, we obtain a new proof of the Arrival Theorem for product-form queueing networks.",
"Packet delay and loss are two fundamental measures of performance. Using active probing to measure delay and loss typically involves sending Poisson probes, on the basis of the PASTA property (Poisson Arrivals See Time Averages), which ensures that Poisson probing yields unbiased estimates. Recent work, however, has questioned the utility of PASTA for probing and shown that, for delay measurements, i) a wide variety of processes other than Poisson can be used to probe with zero bias and ii) Poisson probing does not necessarily minimize the variance of delay estimates. In this paper, we determine optimal probing processes that minimize the mean-square error of measurement estimates for both delay and loss. Our contributions are twofold. First, we show that a family of probing processes, specifically Gamma renewal probing processes, has optimal properties in terms of bias and variance. The optimality result is general, and only assumes that the target process we seek to optimally measure via probing, such as a loss or delay process, has a convex auto-covariance function. Second, we use empirical datasets to demonstrate the applicability of our results in practice, specifically to show that the convexity condition holds true and that Gamma probing is indeed superior to Poisson probing. Together, these results lead to explicit guidelines on designing the best probe streams for both delay and loss estimation."
]
}
|
1208.2870
|
2953086893
|
An extensive body of research deals with estimating the correlation and the Hurst parameter of Internet traffic traces. The significance of these statistics is due to their fundamental impact on network performance. The coverage of Internet traffic traces is, however, limited since acquiring such traces is challenging with respect to, e.g., confidentiality, logging speed, and storage capacity. In this work, we investigate how the correlation of Internet traffic can be reliably estimated from random traffic samples. These samples are observed either by passive monitoring within the network, or otherwise by active packet probes at end systems. We analyze random sampling processes with different inter-sample distributions and show how to obtain asymptotically unbiased estimates from these samples. We quantify the inherent limitations that are due to limited observations and explore the influence of various parameters, such as sampling intensity, network utilization, or Hurst parameter on the estimation accuracy. We design an active probing method which enables simple and lightweight traffic sampling without support from the network. We verify our approach in a controlled network environment and present comprehensive Internet measurements. We find that the correlation exhibits properties such as long range dependence as well as periodicities and that it differs significantly across Internet paths and observation times.
|
A comparison of Poisson and periodic sampling was carried out in @cite_18 @cite_19 . In @cite_18 the authors show experimentally that the differences between round-trip time (RTT), loss rate, and packet-pair dispersion estimates obtained by either Poisson or periodic probing are in some cases not significant. Depending on the autocovariance of the sampled process, either Poisson or periodic sampling can be superior; this is shown in @cite_19 using the asymptotic variance as metric.
|
{
"cite_N": [
"@cite_19",
"@cite_18"
],
"mid": [
"2098214968",
"1983670906"
],
"abstract": [
"Active probes of network performance represent samples of the underlying performance of a system. Some effort has gone into considering appropriate sampling patterns for such probes, i.e., there has been significant discussion of the importance of sampling using a Poisson process to avoid biases introduced by synchronization of system and measurements. However, there are unanswered questions about whether Poisson probing has costs in terms of sampling efficiency, and there is some misinformation about what types of inferences are possible with different probe patterns. This paper provides a quantitative comparison of two different sampling methods. This paper also shows that the irregularity in probing patterns is useful not just in avoiding synchronization, but also in determining frequency-domain properties of a system. This paper provides a firm basis for practitioners or researchers for making decisions about the type of sampling they should use in particular applications, along with methods for the analysis of their outputs",
"The well-known PASTA (\"Poisson Arrivals See Time Averages\") property states that, under very general conditions, the fraction of Poisson arrivals that observe an underlying process in a particular state is equal, asymptotically, to the fraction of time the process spends in that state. When applied to network inference, PASTA implies that a Poisson probing stream provides an unbiased estimate of the desired time average. Our objective is to examine the practical significance of the PASTA property in the context of realistic RTT, loss rate and packet pair dispersion measurements with a finite (but not small) number of samples. In particular, we first evaluate the differences between the point estimates (median RTT, loss rate, and median dispersion) that result from Poisson and Periodic probing. Our evaluation is based on a rich set of measurements between 23 PlanetLab hosts. The experimental results show that in almost all measurement sessions the differences between the Poisson and Periodic point estimates are insignificant. In the case of RTT and dispersion measurements, we also used a non-parametric goodness-of-fit test, based on the Kullback-Leibler distance, to evaluate the similarity of the distributions that result from Poisson and Periodic probing. The results show that in more than 90 of the measurements there is no statistically significant difference between the two distributions."
]
}
|
1208.2870
|
2953086893
|
An extensive body of research deals with estimating the correlation and the Hurst parameter of Internet traffic traces. The significance of these statistics is due to their fundamental impact on network performance. The coverage of Internet traffic traces is, however, limited since acquiring such traces is challenging with respect to, e.g., confidentiality, logging speed, and storage capacity. In this work, we investigate how the correlation of Internet traffic can be reliably estimated from random traffic samples. These samples are observed either by passive monitoring within the network, or otherwise by active packet probes at end systems. We analyze random sampling processes with different inter-sample distributions and show how to obtain asymptotically unbiased estimates from these samples. We quantify the inherent limitations that are due to limited observations and explore the influence of various parameters, such as sampling intensity, network utilization, or Hurst parameter on the estimation accuracy. We design an active probing method which enables simple and lightweight traffic sampling without support from the network. We verify our approach in a controlled network environment and present comprehensive Internet measurements. We find that the correlation exhibits properties such as long range dependence as well as periodicities and that it differs significantly across Internet paths and observation times.
|
In @cite_4 it is shown that for correlation lags tending to infinity, random sampling captures the long memory of the original processes, as long as the sampling distribution has a finite mean.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2015838452"
],
"abstract": [
"This paper investigates the second order properties of a stationary process after random sampling. While a short memory process gives always rise to a short memory one, we prove that long-memory can disappear when the sampling law has heavy enough tails. We prove that under rather general conditions the existence of the spectral density is preserved by random sampling. We also investigate the effects of deterministic sampling on seasonal long-memory."
]
}
|
1208.2870
|
2953086893
|
An extensive body of research deals with estimating the correlation and the Hurst parameter of Internet traffic traces. The significance of these statistics is due to their fundamental impact on network performance. The coverage of Internet traffic traces is, however, limited since acquiring such traces is challenging with respect to, e.g., confidentiality, logging speed, and storage capacity. In this work, we investigate how the correlation of Internet traffic can be reliably estimated from random traffic samples. These samples are observed either by passive monitoring within the network, or otherwise by active packet probes at end systems. We analyze random sampling processes with different inter-sample distributions and show how to obtain asymptotically unbiased estimates from these samples. We quantify the inherent limitations that are due to limited observations and explore the influence of various parameters, such as sampling intensity, network utilization, or Hurst parameter on the estimation accuracy. We design an active probing method which enables simple and lightweight traffic sampling without support from the network. We verify our approach in a controlled network environment and present comprehensive Internet measurements. We find that the correlation exhibits properties such as long range dependence as well as periodicities and that it differs significantly across Internet paths and observation times.
|
The injection of test packets into a network for inferring network performance, i.e., active probing, has attracted considerable attention in recent years. End-to-end packet delays or inter-packet times are metrics commonly used to estimate network characteristics such as the average available bandwidth, or even to reconstruct statistics of the cross-traffic @cite_3 @cite_23 @cite_37 @cite_1 . Under the assumption of FIFO scheduling, the cross-traffic intensity can be estimated from the dispersion of back-to-back probing packets @cite_26 @cite_23 @cite_12 @cite_32 .
|
{
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_23",
"@cite_12"
],
"mid": [
"2161630099",
"2139684699",
"2133949702",
"2169610789",
"132742350",
"2887158552",
"1690775272"
],
"abstract": [
"The available bandwidth (avail-bw) in a network path is of major importance in congestion control, streaming applications, quality-of-service verification, server selection, and overlay networks. We describe an end-to-end methodology, called self-loading periodic streams (SLoPS), for measuring avail-bw. The basic idea in SLoPS is that the one-way delays of a periodic packet stream show an increasing trend when the stream's rate is higher than the avail-bw. We have implemented SLoPS in a tool called pathload. The accuracy of the tool has been evaluated with both simulations and experiments over real-world Internet paths. Pathload is nonintrusive, meaning that it does not cause significant increases in the network utilization, delays, or losses. We used pathload to evaluate the variability (\"dynamics\") of the avail-bw in Internet paths. The avail-bw becomes significantly more variable in heavily utilized paths, as well as in paths with limited capacity (probably due to a lower degree of statistical multiplexing). We finally examine the relation between avail-bw and TCP throughput. A persistent TCP connection can be used to measure roughly the avail-bw in a path, but TCP saturates the path and increases significantly the path delays and jitter.",
"The packet pair technique estimates the capacity of a path (bottleneck bandwidth) from the dispersion (spacing) experienced by two back-to-back packets. We demonstrate that the dispersion of packet pairs in loaded paths follows a multimodal distribution, and discuss the queueing effects that cause the multiple modes. We show that the path capacity is often not the global mode, and so it cannot be estimated using standard statistical procedures. The effect of the size of the probing packets is also investigated, showing that the conventional wisdom of using maximum sized packet pairs is not optimal. We then study the dispersion of long packet trains. Increasing the length of the packet train reduces the measurement variance, but the estimates converge to a value, referred to as the asymptotic dispersion rate (ADR), that is lower than the capacity. We derive the effect of the cross traffic in the dispersion of long packet trains, showing that the ADR is not the available bandwidth in the path, as was assumed in previous work. Putting all the pieces together, we present a capacity estimation methodology that has been implemented in a tool called pathrate.",
"In this paper, we determine which non-random sampling of fixed size gives the best linear predictor of the sum of a finite spatial population. We employ different multiscale superpopulation models and use the minimum mean-squared error as our optimality criterion. In multiscale superpopulation tree models, the leaves represent the units of the population, interior nodes represent partial sums of the population, and the root node represents the total sum of the population. We prove that the optimal sampling pattern varies dramatically with the correlation structure of the tree nodes. While uniform sampling is optimal for trees with “positive correlation progression”, it provides the worst possible sampling with “negative correlation progression.” As an analysis tool, we introduce and study a class of independent innovations trees that are of interest in their own right. We derive a fast water-filling algorithm to determine the optimal sampling of the leaves to estimate the root of an independent innovations tree.",
"Most existing available-bandwidth measurement techniques are justified using a constant-rate fluid cross-traffic model. To achieve a better understanding of the performance of current bandwidth measurement techniques in general traffic conditions, this paper presents a queueing-theoretic foundation of single-hop packet-train bandwidth estimation under bursty arrivals of discrete cross-traffic packets. We analyze the statistical mean of the packet-train output dispersion and its mathematical relationship to the input dispersion, which we call the probing-response curve. This analysis allows us to prove that the single-hop response curve in bursty cross-traffic deviates from that obtained under fluid cross traffic of the same average intensity and to demonstrate that this may lead to significant measurement bias in certain estimation techniques based on fluid models. We conclude the paper by showing, both analytically and experimentally, that the response-curve deviation vanishes as the packet-train length or probing packet size increases, where the vanishing rate is decided by the burstiness of cross-traffic.",
"",
"",
"Although packet-pair probing has been used as one of the primary mechanisms to measure bottleneck capacity, cross-traffic intensity, and available bandwidth of end-to-end Internet paths, there is still no conclusive answer as to what information about the path is contained in the output packet-pair dispersions and how it is encoded. In this paper, we address this issue by deriving closed-form expression of packet-pair dispersion in the context of a single-hop path and general bursty cross-traffic arrival. Under the assumptions of cross-traffic stationarity and ASTA sampling, we examine the statistical properties of the information encoded in inter-packet spacings and derive the asymptotic average of the output packet-pair dispersions as a closed-form function of the input dispersion. We show that this result is different from what was obtained in prior work using fluid cross-traffic models and that this discrepancy has a significant impact on the accuracy of packet-pair bandwidth estimation."
]
}
|
1208.2870
|
2953086893
|
An extensive body of research deals with estimating the correlation and the Hurst parameter of Internet traffic traces. The significance of these statistics is due to their fundamental impact on network performance. The coverage of Internet traffic traces is, however, limited since acquiring such traces is challenging with respect to, e.g., confidentiality, logging speed, and storage capacity. In this work, we investigate how the correlation of Internet traffic can be reliably estimated from random traffic samples. These samples are observed either by passive monitoring within the network, or otherwise by active packet probes at end systems. We analyze random sampling processes with different inter-sample distributions and show how to obtain asymptotically unbiased estimates from these samples. We quantify the inherent limitations that are due to limited observations and explore the influence of various parameters, such as sampling intensity, network utilization, or Hurst parameter on the estimation accuracy. We design an active probing method which enables simple and lightweight traffic sampling without support from the network. We verify our approach in a controlled network environment and present comprehensive Internet measurements. We find that the correlation exhibits properties such as long range dependence as well as periodicities and that it differs significantly across Internet paths and observation times.
|
Two important aspects concerning network probing are the measurement intrusiveness and the interaction of probes with the measured system. The first aspect is usually addressed by minimizing the probing rate while controlling the quality of the results. The second aspect is more involved, since the probes perturb the system, leading to distorted observations. For example, measuring queueing delays of probes to determine the true queue length distribution is governed by a type of Heisenberg uncertainty @cite_24 , since the probes alter the queue length. The authors describe the impact of the probing intensity on the accuracy of the result using the notion of asymptotic variance. The effect is amplified in the case of LRD traffic, although not given in closed form, leading to higher uncertainty in the estimated waiting time @cite_24 .
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"2042944056"
],
"abstract": [
"This paper considers the basic problem of \"how accurate can we make Internet performance measurements\". The answer is somewhat counter-intuitive in that there are bounds on the accuracy of such measurements, no matter how many probes we can use in a given time interval, and thus arises a type of Heisenberg inequality describing the bounds in our knowledge of the performance of a network. The results stem from the fact that we cannot make independent measurements of a system's performance: all such measures are correlated, and these correlations reduce the efficacy of measurements. The degree of correlation is also strongly dependent on system load. The result has important practical implications that reach beyond the design of Internet measurement experiments, into the design of network protocols."
]
}
|
1208.2294
|
1519426874
|
We prove that any submodular function f: {0,1}^n -> {0,1,...,k} can be represented as a pseudo-Boolean 2k-DNF formula. Pseudo-Boolean DNFs are a natural generalization of DNF representation for functions with integer range. Each term in such a formula has an associated integral constant. We show that an analog of Hastad's switching lemma holds for pseudo-Boolean k-DNFs if all constants associated with the terms of the formula are bounded. This allows us to generalize Mansour's PAC-learning algorithm for k-DNFs to pseudo-Boolean k-DNFs, and hence gives a PAC-learning algorithm with membership queries under the uniform distribution for submodular functions of the form f: {0,1}^n -> {0,1,...,k}. Our algorithm runs in time polynomial in n, k^{O(k log k)}, 1/epsilon and log(1/delta) and works even in the agnostic setting. The line of previous work on learning submodular functions [Balcan, Harvey (STOC '11), Gupta, Hardt, Roth, Ullman (STOC '11), Cheraghchi, Klivans, Kothari, Lee (SODA '12)] implies only n^{O(k)} query complexity for learning submodular functions in this setting, for fixed epsilon and delta. Our learning algorithm implies a property tester for submodularity of functions f: {0,1}^n -> {0,...,k} with query complexity polynomial in n for k=O((log n / log log n)^{1/2}) and constant proximity parameter epsilon.
|
For the special case of Boolean functions, characterizations of submodular and monotone submodular functions in terms of simple DNF formulas are known. A Boolean function is monotone submodular if and only if it can be represented as a monotone 1-DNF (see, e.g., Appendix A in @cite_25 ). A Boolean function is submodular if and only if it has a pure (without singleton terms) 2-DNF representation @cite_7 .
|
{
"cite_N": [
"@cite_25",
"@cite_7"
],
"mid": [
"340337472",
"2080957551"
],
"abstract": [
"Submodular functions are discrete functions that model laws of diminishing returns and enjoy numerous algorithmic applications that have been used in many areas, including combinatorial optimization, machine learning, and economics. In this work we use a learning theoretic angle for studying submodular functions. We provide algorithms for learning submodular functions, as well as lower bounds on their learnability. In doing so, we uncover several novel structural results revealing both extremal properties as well as regularities of submodular functions, of interest to many areas.",
"After providing a simple characterization of Horn functions (i.e., those Boolean functions that have a Horn DNF), we study in detail the special class of submodular functions. Every prime implicant of such a function involves at most one complemented and at most one uncomplemented variable, and based on this we give a one-to-one correspondence between submodular functions and partial preorders (reflexive and transitive binary relations), and in particular between the nondegenerate acyclic submodular functions and the partially ordered sets. There is a one-to-one correspondence between the roots of a submodular function and the ideals of the associated partial preorder. There is also a one-to-one correspondence between the prime implicants of the dual of the submodular function and the maximal antichains of the associated partial preorder. Based on these results, we give graph-theoretic characterizations for all minimum prime DNF representations of a submodular function. The problem of recognizing submodular functions in DNF representation is coNP-complete."
]
}
|
1208.2294
|
1519426874
|
We prove that any submodular function f: {0,1}^n -> {0,1,...,k} can be represented as a pseudo-Boolean 2k-DNF formula. Pseudo-Boolean DNFs are a natural generalization of DNF representation for functions with integer range. Each term in such a formula has an associated integral constant. We show that an analog of Hastad's switching lemma holds for pseudo-Boolean k-DNFs if all constants associated with the terms of the formula are bounded. This allows us to generalize Mansour's PAC-learning algorithm for k-DNFs to pseudo-Boolean k-DNFs, and hence gives a PAC-learning algorithm with membership queries under the uniform distribution for submodular functions of the form f: {0,1}^n -> {0,1,...,k}. Our algorithm runs in time polynomial in n, k^{O(k log k)}, 1/epsilon and log(1/delta) and works even in the agnostic setting. The line of previous work on learning submodular functions [Balcan, Harvey (STOC '11), Gupta, Hardt, Roth, Ullman (STOC '11), Cheraghchi, Klivans, Kothari, Lee (SODA '12)] implies only n^{O(k)} query complexity for learning submodular functions in this setting, for fixed epsilon and delta. Our learning algorithm implies a property tester for submodularity of functions f: {0,1}^n -> {0,...,k} with query complexity polynomial in n for k=O((log n / log log n)^{1/2}) and constant proximity parameter epsilon.
|
Gupta et al. @cite_5 design an algorithm that learns a submodular function with the range @math to within a given additive error @math on all but an @math fraction of the probability mass (according to a specified product distribution on the domain). Their algorithm requires membership queries, but works even when these queries are answered with additive error @math . It takes @math time.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2121372565"
],
"abstract": [
"Suppose we would like to know all answers to a set of statistical queries C on a data set up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C. Can we do any better? We show that the number of statistical queries necessary and sufficient for this task is---up to polynomial factors---equal to the agnostic learning complexity of C in Kearns' statistical query (SQ)model. This gives a complete answer to the question when running time is not a concern. We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions. While interesting from a learning theoretic point of view, our main applications are in privacy-preserving data analysis: Here, our second result leads to an algorithm that efficiently releases differentially private answers to all Boolean conjunctions with 1 average error. This presents progress on a key open problem in privacy-preserving data analysis. Our first result on the other hand gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms, but also most known private algorithms can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ-model as a new barrier in the design of differentially private algorithms."
]
}
|
1208.2294
|
1519426874
|
We prove that any submodular function f: 0,1 ^n -> 0,1,...,k can be represented as a pseudo-Boolean 2k-DNF formula. Pseudo-Boolean DNFs are a natural generalization of DNF representation for functions with integer range. Each term in such a formula has an associated integral constant. We show that an analog of Hastad's switching lemma holds for pseudo-Boolean k-DNFs if all constants associated with the terms of the formula are bounded. This allows us to generalize Mansour's PAC-learning algorithm for k-DNFs to pseudo-Boolean k-DNFs, and hence gives a PAC-learning algorithm with membership queries under the uniform distribution for submodular functions of the form f: 0,1 ^n -> 0,1,...,k . Our algorithm runs in time polynomial in n, k^ O(k k ) , 1 and log(1 ) and works even in the agnostic setting. The line of previous work on learning submodular functions [Balcan, Harvey (STOC '11), Gupta, Hardt, Roth, Ullman (STOC '11), Cheraghchi, Klivans, Kothari, Lee (SODA '12)] implies only n^ O(k) query complexity for learning submodular functions in this setting, for fixed epsilon and delta. Our learning algorithm implies a property tester for submodularity of functions f: 0,1 ^n -> 0, ..., k with query complexity polynomial in n for k=O(( n n)^ 1 2 ) and constant proximity parameter .
|
Cheraghchi et al. @cite_23 also work with additive error. Their learner is agnostic and uses only statistical queries. It produces a hypothesis which (with probability at least @math ) has expected additive error @math with respect to a product distribution, where @math is the error of the best concept in the class. Their algorithm runs in time polynomial in @math and @math .
|
{
"cite_N": [
"@cite_23"
],
"mid": [
"1755884204"
],
"abstract": [
"We show that all non-negative submodular functions have high noise-stability . As a consequence, we obtain a polynomial-time learning algorithm for this class with respect to any product distribution on @math (for any constant accuracy parameter @math ). Our algorithm also succeeds in the agnostic setting. Previous work on learning submodular functions required either query access or strong assumptions about the types of submodular functions to be learned (and did not hold in the agnostic setting)."
]
}
|
1208.2294
|
1519426874
|
We prove that any submodular function f: 0,1 ^n -> 0,1,...,k can be represented as a pseudo-Boolean 2k-DNF formula. Pseudo-Boolean DNFs are a natural generalization of DNF representation for functions with integer range. Each term in such a formula has an associated integral constant. We show that an analog of Hastad's switching lemma holds for pseudo-Boolean k-DNFs if all constants associated with the terms of the formula are bounded. This allows us to generalize Mansour's PAC-learning algorithm for k-DNFs to pseudo-Boolean k-DNFs, and hence gives a PAC-learning algorithm with membership queries under the uniform distribution for submodular functions of the form f: 0,1 ^n -> 0,1,...,k . Our algorithm runs in time polynomial in n, k^ O(k k ) , 1 and log(1 ) and works even in the agnostic setting. The line of previous work on learning submodular functions [Balcan, Harvey (STOC '11), Gupta, Hardt, Roth, Ullman (STOC '11), Cheraghchi, Klivans, Kothari, Lee (SODA '12)] implies only n^ O(k) query complexity for learning submodular functions in this setting, for fixed epsilon and delta. Our learning algorithm implies a property tester for submodularity of functions f: 0,1 ^n -> 0, ..., k with query complexity polynomial in n for k=O(( n n)^ 1 2 ) and constant proximity parameter .
|
Observe that the results in @cite_5 and @cite_23 directly imply an @math -time algorithm for our setting, obtained by rescaling our input function to lie in @math and setting the error to @math . The techniques in @cite_5 also imply @math time complexity for non-agnostically learning submodular functions in this setting, for fixed @math and @math . To the best of our knowledge, this is the best dependence on @math one can obtain from previous work.
|
{
"cite_N": [
"@cite_5",
"@cite_23"
],
"mid": [
"2121372565",
"1755884204"
],
"abstract": [
"Suppose we would like to know all answers to a set of statistical queries C on a data set up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C. Can we do any better? We show that the number of statistical queries necessary and sufficient for this task is---up to polynomial factors---equal to the agnostic learning complexity of C in Kearns' statistical query (SQ)model. This gives a complete answer to the question when running time is not a concern. We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions. While interesting from a learning theoretic point of view, our main applications are in privacy-preserving data analysis: Here, our second result leads to an algorithm that efficiently releases differentially private answers to all Boolean conjunctions with 1 average error. This presents progress on a key open problem in privacy-preserving data analysis. Our first result on the other hand gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms, but also most known private algorithms can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ-model as a new barrier in the design of differentially private algorithms.",
"We show that all non-negative submodular functions have high noise-stability . As a consequence, we obtain a polynomial-time learning algorithm for this class with respect to any product distribution on @math (for any constant accuracy parameter @math ). Our algorithm also succeeds in the agnostic setting. Previous work on learning submodular functions required either query access or strong assumptions about the types of submodular functions to be learned (and did not hold in the agnostic setting)."
]
}
|
1208.2294
|
1519426874
|
We prove that any submodular function f: 0,1 ^n -> 0,1,...,k can be represented as a pseudo-Boolean 2k-DNF formula. Pseudo-Boolean DNFs are a natural generalization of DNF representation for functions with integer range. Each term in such a formula has an associated integral constant. We show that an analog of Hastad's switching lemma holds for pseudo-Boolean k-DNFs if all constants associated with the terms of the formula are bounded. This allows us to generalize Mansour's PAC-learning algorithm for k-DNFs to pseudo-Boolean k-DNFs, and hence gives a PAC-learning algorithm with membership queries under the uniform distribution for submodular functions of the form f: 0,1 ^n -> 0,1,...,k . Our algorithm runs in time polynomial in n, k^ O(k k ) , 1 and log(1 ) and works even in the agnostic setting. The line of previous work on learning submodular functions [Balcan, Harvey (STOC '11), Gupta, Hardt, Roth, Ullman (STOC '11), Cheraghchi, Klivans, Kothari, Lee (SODA '12)] implies only n^ O(k) query complexity for learning submodular functions in this setting, for fixed epsilon and delta. Our learning algorithm implies a property tester for submodularity of functions f: 0,1 ^n -> 0, ..., k with query complexity polynomial in n for k=O(( n n)^ 1 2 ) and constant proximity parameter .
|
The study of submodularity in the context of property testing was initiated by Parnas, Ron and Rubinfeld @cite_0 . Seshadhri and Vondrak @cite_22 gave the first sublinear (in the size of the domain) tester for submodularity of set functions. Their tester works for all ranges and has query and time complexity @math . They also showed a reduction from testing monotonicity to testing submodularity which, together with a lower bound for testing monotonicity given by Blais, Brody and Matulef @cite_10 , implies a lower bound of @math on the query complexity of testing submodularity for an arbitrary range and constant @math .
|
{
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_22"
],
"mid": [
"1967290069",
"2100732817",
"1801423597"
],
"abstract": [
"Convex and submodular functions play an important role in many applications, and in particular in combinatorial optimization. Here we study two special cases: convexity in one dimension and submodularity in two dimensions. The latter type of functions are equivalent to the well-known Monge matrices. A matrix @math is called a Monge matrix if for every @math and @math we have @math . If inequality holds in the opposite direction, then V is an inverse Monge matrix (supermodular function). Many problems, such as the traveling salesperson problem and various transportation problems, can be solved more efficiently if the input is a Monge matrix. In this work we present testing algorithms for the above properties. A testing algorithm for a predetermined property @math is given query access to an unknown function f and a distance parameter @math . The algorithm should accept f with high probability if it has the property @math and reject it with high probability if more than an @math -fraction of the function values should be modified so that f obtains the property. Our algorithm for testing whether a 1-dimensional function @math is convex (concave) has query complexity and running time of @math . Our algorithm for testing whether an n1 × n2 matrix V is a Monge (inverse Monge) matrix has query complexity and running time of @math .",
"We develop a new technique for proving lower bounds in property testing, by showing a strong connection between testing and communication complexity. We give a simple scheme for reducing communication problems to testing problems, thus allowing us to use known lower bounds in communication complexity to prove lower bounds in testing. This scheme is general and implies a number of new testing bounds, as well as simpler proofs of several known bounds. For the problem of testing whether a boolean function is k-linear (a parity function on k variables), we achieve a lower bound of Omega(k) queries, even for adaptive algorithms with two-sided error, thus confirming a conjecture of Goldreich (2010). The same argument behind this lower bound also implies a new proof of known lower bounds for testing related classes such as k-juntas. For some classes, such as the class of monotone functions and the class of s-sparse GF(2) polynomials, we significantly strengthen the best known bounds.",
"We initiate the study of property testing of submodularity on the boolean hypercube. Submodular functions come up in a variety of applications in combinatorial optimization. For a vast range of algorithms, the existence of an oracle to a submodular function is assumed. But how does one check if this oracle indeed represents a submodular function?"
]
}
|
1208.1829
|
11937046
|
In this paper, we attempt to learn a single metric across two heterogeneous domains, where the source domain is fully labeled and has many samples while the target domain has only a few labeled samples but abundant unlabeled samples. To the best of our knowledge, this task is seldom touched. The proposed learning model has a simple underlying motivation: all the samples in both the source and the target domains are mapped into a common space, where both their priors P(sample)s and their posteriors P(label|sample)s are forced to be respectively aligned as much as possible. We show that the two mappings, from both the source domain and the target domain to the common space, can be reparameterized into a single positive semi-definite (PSD) matrix. We then develop an efficient Bregman Projection algorithm to optimize the PSD matrix, over which a LogDet function is used to regularize. Furthermore, we also show that this model can be easily kernelized, and we verify its effectiveness on a cross-language retrieval task and a cross-domain object recognition task.
|
Heterogeneous learning may date back to @cite_0 . The authors used co-occurrence data to estimate the feature-level conditional distribution from source features to target features. Later, many other methods were proposed @cite_2 @cite_7 @cite_3 @cite_19 @cite_21 . A common characteristic of these methods is that they all map the samples in the source and the target domains into a common space for the learning tasks. For example, @cite_3 embedded all the samples from different domains into a common space according to a large manifold structure covering both the within-domain geometrical structure and the between-domain label structure. @cite_2 mapped all the samples into a common space and applied classic linear discriminant analysis (LDA). @cite_19 used a collective matrix factorization model to find the common space. However, that algorithm requires the same number of samples in the source and target domains, a condition that usually cannot be satisfied; thus, before running the algorithm, they had to introduce a sampling procedure. @cite_21 constructed a parameterized augmented space as the common space, motivated by the domain adaptation method proposed in @cite_17 , and the parameters are learned by optimizing a large-margin classification model.
|
{
"cite_N": [
"@cite_7",
"@cite_21",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_17"
],
"mid": [
"2133909527",
"1834646128",
"46086471",
"2156940638",
"2124961556",
"118019982",
"2120354757"
],
"abstract": [
"Traditional ranking mainly focuses on one type of data source, and effective modeling still relies on a sufficiently large number of labeled or supervised examples. However, in many real-world applications, in particular with the rapid growth of the Web 2.0, ranking over multiple interrelated (heterogeneous) domains becomes a common situation, where in some domains we may have a large amount of training data while in some other domains we can only collect very little. One important question is: \"if there is not sufficient supervision in the domain of interest, how could one borrow labeled information from a related but heterogenous domain to build an accurate model?\". This paper explores such an approach by bridging two heterogeneous domains via the latent space. We propose a regularized framework to simultaneously minimize two loss functions corresponding to two related but different information sources, by mapping each domain onto a \"shared latent space\", capturing similar and transferable oncepts. We solve this problem by optimizing the convex upper bound of the non-continuous loss function and derive its generalization bound. Experimental results on three different genres of data sets demonstrate the effectiveness of the proposed approach.",
"We propose a new learning method for heterogeneous domain adaptation (HDA), in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. Using two different projection matrices, we first transform the data from two domains into a common subspace in order to measure the similarity between the data from two domains. We then propose two new feature mapping functions to augment the transformed data with their original features and zeros. The existing learning methods (e.g., SVM and SVR) can be readily incorporated with our newly proposed augmented feature representations to effectively utilize the data from both domains for HDA. Using the hinge loss function in SVM as an example, we introduce the detailed objective function in our method called Heterogeneous Feature Augmentation (HFA) for a linear case and also describe its kernelization in order to efficiently cope with the data with very high dimensions. Moreover, we also develop an alternating optimization algorithm to effectively solve the nontrivial optimization problem in our HFA method. Comprehensive experiments on two benchmark datasets clearly demonstrate that HFA outperforms the existing HDA methods.",
"We propose a manifold alignment based approach for heterogeneous domain adaptation. A key aspect of this approach is to construct mappings to link different feature spaces in order to transfer knowledge across domains. The new approach can reuse labeled data from multiple source domains in a target domain even in the case when the input domains do not share any common features or instances. As a pre-processing step, our approach can also be combined with existing domain adaptation approaches to learn a common feature space for all input domains. This paper extends existing manifold alignment approaches by making use of labels rather than correspondences to align the manifolds. This extension significantly broadens the application scope of manifold alignment, since the correspondence relationship required by existing alignment approaches is hard to obtain in many applications.",
"This paper investigates a new machine learning strategy called translated learning. Unlike many previous learning tasks, we focus on how to use labeled data from one feature space to enhance the classification of other entirely different learning spaces. For example, we might wish to use labeled text data to help learn a model for classifying image data, when the labeled images are difficult to obtain. An important aspect of translated learning is to build a \"bridge\" to link one feature space (known as the \"source space\") to another space (known as the \"target space\") through a translator in order to migrate the knowledge from source to target. The translated learning solution uses a language model to link the class labels to the features in the source spaces, which in turn is translated to the features in the target spaces. Finally, this chain of linkages is completed by tracing back to the instances in the target spaces. We show that this path of linkage can be modeled using a Markov chain and risk minimization. Through experiments on the text-aided image classification and cross-language classification tasks, we demonstrate that our translated learning framework can greatly outperform many state-of-the-art baseline methods.",
"Labeled examples are often expensive and time-consuming to obtain. One practically important problem is: can the labeled data from other related sources help predict the target task, even if they have (a) different feature spaces (e.g., image vs. text data), (b) different data distributions, and (c) different output spaces? This paper proposes a solution and discusses the conditions where this is possible and highly likely to produce better results. It works by first using spectral embedding to unify the different feature spaces of the target and source data sets, even when they have completely different feature spaces. The principle is to cast into an optimization objective that preserves the original structure of the data, while at the same time, maximizes the similarity between the two. Second, a judicious sample selection strategy is applied to select only those related source examples. At last, a Bayesian-based approach is applied to model the relationship between different output spaces. The three steps can bridge related heterogeneous sources in order to learn the target task. Among the 12 experiment data sets, for example, the images with wavelet-transformed-based features are used to predict another set of images whose features are constructed from color-histogram space. By using these extracted examples from heterogeneous sources, the models can reduce the error rate by as much as 50 , compared with the methods using only the examples from the target task.",
"Multi-task learning aims at improving the generalization performance of a learning task with the help of some other related tasks. Although many multi-task learning methods have been proposed, they are all based on the assumption that all tasks share the same data representation. This assumption is too restrictive for general applications. In this paper, we propose a multi-task extension of linear discriminant analysis (LDA), called multi-task discriminant analysis (MTDA), which can deal with learning tasks with different data representations. For each task, MTDA learns a separate transformation which consists of two parts, one specific to the task and one common to all tasks. A by-product of MTDA is that it can alleviate the labeled data deficiency problem of LDA. Moreover, unlike many existing multi-task learning methods, MTDA can handle binary and multi-class problems for each task in a generic way. Experimental results on face recognition show that MTDA consistently outperforms related methods.",
"We describe an approach to domain adaptation that is appropriate exactly in the case when one has enough “target” data to do slightly better than just using only “source” data. Our approach is incredibly simple, easy to implement as a preprocessing step (10 lines of Perl!) and outperforms stateof-the-art approaches on a range of datasets. Moreover, it is trivially extended to a multidomain adaptation problem, where one has data from a variety of different domains."
]
}
|
1208.1829
|
11937046
|
In this paper, we attempt to learn a single metric across two heterogeneous domains, where the source domain is fully labeled and has many samples while the target domain has only a few labeled samples but abundant unlabeled samples. To the best of our knowledge, this task is seldom touched. The proposed learning model has a simple underlying motivation: all the samples in both the source and the target domains are mapped into a common space, where both their priors P(sample)s and their posteriors P(label|sample)s are forced to be respectively aligned as much as possible. We show that the two mappings, from both the source domain and the target domain to the common space, can be reparameterized into a single positive semi-definite (PSD) matrix. We then develop an efficient Bregman Projection algorithm to optimize the PSD matrix, over which a LogDet function is used to regularize. Furthermore, we also show that this model can be easily kernelized, and we verify its effectiveness on a cross-language retrieval task and a cross-domain object recognition task.
|
Although learning among heterogeneous data sources has attracted much attention, work on metric learning across heterogeneous domains is relatively rare. @cite_13 focused on metric learning only for the target domain, not across the source and the target domains, and thus concerns a different setting from ours. To the best of our knowledge, the work of @cite_22 is the closest to ours, although what they learned is, strictly speaking, a similarity function rather than a metric across the source and the target domains. They proposed a Frobenius-norm-regularized large-margin model to learn the (linear) similarity function, which, from this paper's perspective, can be seen as aligning only the posteriors rather than the priors. Thus, they do not exploit the abundant unlabeled samples available in the target domain to leverage the learning.
|
{
"cite_N": [
"@cite_13",
"@cite_22"
],
"mid": [
"2397149706",
"2090923791"
],
"abstract": [
"The problem of transfer learning has recently been of great interest in a variety of machine learning applications. In this paper, we examine a new angle to the transfer learning problem, where we examine the problem of distance function learning. Specifically, we focus on the problem of how our knowledge of distance functions in one domain can be transferred to a new domain. A good semantic understanding of the feature space is critical in providing the domain specific understanding for setting up good distance functions. Unfortunately, not all domains have feature representations which are equally interpretable. For example, in some domains such as text, the semantics of the feature representation are clear, as a result of which it is easy for a domain expert to set up distance functions for specific kinds of semantics. In the case of image data, the features are semantically harder to interpret, and it is harder to set up distance functions, especially for particular semantic criteria. In this paper, we focus on the problem of transfer learning as a way to close the semantic gap between different domains, and show how to use correspondence information between two domains in order to set up distance functions for the semantically more challenging domain.",
"In real-world applications, “what you saw” during training is often not “what you get” during deployment: the distribution and even the type and dimensionality of features can change from one dataset to the next. In this paper, we address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce ARC-t, a flexible model for supervised learning of non-linear transformations between domains. Our method is based on a novel theoretical result demonstrating that such transformations can be learned in kernel space. Unlike existing work, our model is not restricted to symmetric transformations, nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the method can be applied to categories that were not available during training. We demonstrate the ability of our method to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types and codebooks."
]
}
|
1208.0952
|
2950041309
|
With the growing realization that current Internet protocols are reaching the limits of their senescence, a number of on-going research efforts aim to design potential next-generation Internet architectures. Although they vary in maturity and scope, in order to avoid past pitfalls, these efforts seek to treat security and privacy as fundamental requirements. Resilience to Denial-of-Service (DoS) attacks that plague today's Internet is a major issue for any new architecture and deserves full attention. In this paper, we focus on DoS in a specific candidate next-generation Internet architecture called Named-Data Networking (NDN) -- an instantiation of Information-Centric Networking approach. By stressing content dissemination, NDN appears to be attractive and viable approach to many types of current and emerging communication models. It also incorporates some basic security features that mitigate certain attacks. However, NDN's resilience to DoS attacks has not been analyzed to-date. This paper represents the first step towards assessment and possible mitigation of DoS in NDN. After identifying and analyzing several new types of attacks, it investigates their variations, effects and counter-measures. This paper also sheds some light on the long-standing debate about relative virtues of self-certifying, as opposed to human-readable, names.
|
NDN caching performance optimization has recently been investigated with respect to various metrics, including energy impact @cite_34 @cite_31 @cite_28 . To the best of our knowledge, the work of Xie et al. @cite_3 is the first to address cache robustness in NDN. It introduces CacheShield, a mechanism that helps routers avoid caching unpopular content, thereby maximizing the use of the cache for popular content.
|
{
"cite_N": [
"@cite_28",
"@cite_31",
"@cite_34",
"@cite_3"
],
"mid": [
"",
"2068309140",
"1973179992",
"1984778122"
],
"abstract": [
"",
"Many systems employ caches to improve performance. While isolated caches have been studied in-depth, multi-cache systems are not well understood, especially in networks with arbitrary topologies. In order to gain insight into and manage these systems, a low-complexity algorithm for approximating their behavior is required. We propose a new algorithm, termed a-Net, that approximates the behavior of multi-cache networks by leveraging existing approximation algorithms for isolated LRU caches. We demonstrate the utility of a-Net using both per- cache and network-wide performance measures. We also perform factor analysis of the approximation error to identify system parameters that determine the precision of a-Net.",
"A variety of proposals call for a new Internet architecture focused on retrieving content by name, but it has not been clear that any of these approaches are general enough to support Internet applications like real-time streaming or email. We present a detailed description of a prototype implementation of one such application -- Voice over IP (VoIP) -- in a content-based paradigm. This serves as a good example to show how content-based networking can offer advantages for the full range of Internet applications, if the architecture has certain key properties.",
"With the advent of content-centric networking (CCN) where contents can be cached on each CCN router, cache robustness will soon emerge as a serious concern for CCN deployment. Previous studies on cache pollution attacks only focus on a single cache server. The question of how caching will behave over a general caching network such as CCN under cache pollution attacks has never been answered. In this paper, we propose a novel scheme called CacheShield for enhancing cache robustness. CacheShield is simple, easy-to-deploy, and applicable to any popular cache replacement policy. CacheShield can effectively improve cache performance under normal circumstances, and more importantly, shield CCN routers from cache pollution attacks. Extensive simulations including trace-driven simulations demonstrate that CacheShield is effective for both CCN and today's cache servers. We also study the impact of cache pollution attacks on CCN and reveal several new observations on how different attack scenarios can affect cache hit ratios unexpectedly."
]
}
|
1208.0688
|
1776432816
|
Generating keys and keeping them secret is critical in secure communications. Due to the "open-air" nature, key distribution is more susceptible to attacks in wireless communications. An ingenious solution is to generate common secret keys by two communicating parties separately without the need of key exchange or distribution, and regenerate them on needs. Recently, it is promising to extract keys by measuring the random variation in wireless channels, e.g., RSS. In this paper, we propose an efficient Secret Key Extraction protocol without Chasing down Errors, SKECE. It establishes common cryptographic keys for two communicating parties in wireless networks via the realtime measurement of Channel State Information (CSI). It outperforms RSS-based approaches for key generation in terms of multiple subcarriers measurement, perfect symmetry in channel, rapid decorrelation with distance, and high sensitivity towards environments. In the SKECE design, we also propose effective mechanisms such as the adaptive key stream generation, leakage resilient consistence validation, and weighted key recombination, to fully exploit the excellent properties of CSI. We implement SKECE on off-the-shelf 802.11n devices and evaluate its performance via extensive experiments. The results demonstrate that SKECE achieves a more than 3x throughput gain in the key generation from one subcarrier in static scenarios, and due to its high efficiency, a 50 reduction on the communication overhead compared to the state-of-the-art RSS based approaches.
|
Encrypting and authenticating communication between two parties in wireless networks helps protect privacy and sensitive data @cite_19 @cite_20 @cite_9 @cite_8 . Extracting a shared secret key from the observation and processing of radio channel parameters has been proposed to address this problem without resorting to a fixed infrastructure.
|
{
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_20",
"@cite_8"
],
"mid": [
"2158923961",
"2146021366",
"2119407531",
"2005541978"
],
"abstract": [
"Ad hoc networks are a new wireless networking paradigm for mobile hosts. Unlike traditional mobile wireless networks, ad hoc networks do not rely on any fixed infrastructure. Instead, hosts rely on each other to keep the network connected. Military tactical and other security-sensitive operations are still the main applications of ad hoc networks, although there is a trend to adopt ad hoc networks for commercial uses due to their unique properties. One main challenge in the design of these networks is their vulnerability to security attacks. In this article, we study the threats an ad hoc network faces and the security goals to be achieved. We identify the new challenges and opportunities posed by this new networking environment and explore new approaches to secure its communication. In particular, we take advantage of the inherent redundancy in ad hoc networks-multiple routes between nodes-to defend routing against denial-of-service attacks. We also use replication and new cryptographic schemes, such as threshold cryptography, to build a highly secure and highly available key management service, which forms the core of our security framework.",
"Sensor networks are often deployed in unattended environments, thus leaving these networks vulnerable to false data injection attacks in which an adversary injects false data into the network with the goal of deceiving the base station or depleting the resources of the relaying nodes. Standard authentication mechanisms cannot prevent this attack if the adversary has compromised one or a small number of sensor nodes. In this paper, we present an interleaved hop-by-hop authentication scheme that guarantees that the base station will detect any injected false data packets when no more than a certain number t of nodes are compromised. Further, our scheme provides an upper bound B for the number of hops that a false data packet could be forwarded before it is detected and dropped, given that there are up to t colluding compromised nodes. We show that in the worst case B is O(t^2). Through performance analysis, we show that our scheme is efficient with respect to the security it provides, and it also allows a tradeoff between security and performance.",
"A prerequisite for a secure communication between two nodes in an ad hoc network is that the nodes share a key to bootstrap their trust relationship. In this paper, we present a scalable and distributed protocol that enables two nodes to establish a pairwise shared key on the fly, without requiring the use of any on-line key distribution center. The design of our protocol is based on a novel combination of two techniques - probabilistic key sharing and threshold secret sharing. Our protocol is scalable since every node only needs to possess a small number of keys, independent of the network size, and it is computationally efficient because it only relies on symmetric key cryptography based operations. We show that a pairwise key established between two nodes using our protocol is secure against a collusion attack by up to a certain number of compromised nodes. We also show through a set of simulations that our protocol can be parameterized to meet the desired levels of performance, security and storage for the application under consideration.",
"The wireless body area network has emerged as a new technology for e-healthcare that allows the data of a patient's vital body parameters and movements to be collected by small wearable or implantable sensors and communicated using short-range wireless communication techniques. WBAN has shown great potential in improving healthcare quality, and thus has found a wide range of applications from ubiquitous health monitoring and computer assisted rehabilitation to emergency medical response systems. The security and privacy protection of the data collected from a WBAN, either while stored inside the WBAN or during their transmission outside of the WBAN, is a major unsolved concern, with challenges coming from stringent resource constraints of WBAN devices, and the high demand for both security privacy and practicality usability. In this article we look into two important data security issues: secure and dependable distributed data storage, and fine-grained distributed data access control for sensitive and private patient medical data. We discuss various practical issues that need to be taken into account while fulfilling the security and privacy requirements. Relevant solutions in sensor networks and WBANs are surveyed, and their applicability is analyzed."
]
}
|
1208.0688
|
1776432816
|
Generating keys and keeping them secret is critical in secure communications. Due to the "open-air" nature, key distribution is more susceptible to attacks in wireless communications. An ingenious solution is to generate common secret keys by two communicating parties separately without the need of key exchange or distribution, and regenerate them on needs. Recently, it is promising to extract keys by measuring the random variation in wireless channels, e.g., RSS. In this paper, we propose an efficient Secret Key Extraction protocol without Chasing down Errors, SKECE. It establishes common cryptographic keys for two communicating parties in wireless networks via the realtime measurement of Channel State Information (CSI). It outperforms RSS-based approaches for key generation in terms of multiple subcarriers measurement, perfect symmetry in channel, rapid decorrelation with distance, and high sensitivity towards environments. In the SKECE design, we also propose effective mechanisms such as the adaptive key stream generation, leakage resilient consistence validation, and weighted key recombination, to fully exploit the excellent properties of CSI. We implement SKECE on off-the-shelf 802.11n devices and evaluate its performance via extensive experiments. The results demonstrate that SKECE achieves a more than 3x throughput gain in the key generation from one subcarrier in static scenarios, and due to its high efficiency, a 50% reduction in the communication overhead compared to the state-of-the-art RSS based approaches.
|
A typical secret key generation process consists of three phases: randomness exploration, information reconciliation, and privacy amplification @cite_11 . In the randomness exploration phase, quantization converts measurement values into information bits; a good quantizer maximizes the mutual information between Alice and Bob without leaking information, and an algorithm for finding such a quantizer is proposed in @cite_14 . The information reconciliation phase uses either error-correcting codes @cite_7 or interactive reconciliation protocols such as Cascade @cite_17 . Universal hash functions are widely adopted in @cite_6 @cite_1 @cite_5 to enhance privacy and security.
|
{
"cite_N": [
"@cite_14",
"@cite_11",
"@cite_7",
"@cite_1",
"@cite_6",
"@cite_5",
"@cite_17"
],
"mid": [
"2011582944",
"1973956822",
"2570008916",
"2144381048",
"1996403876",
"2064423787",
"1916886752"
],
"abstract": [
"We evaluate the effectiveness of secret key extraction, for private communication between two wireless devices, from the received signal strength (RSS) variations on the wireless channel between the two devices. We use real world measurements of RSS in a variety of environments and settings. Our experimental results show that (i) in certain environments, due to lack of variations in the wireless channel, the extracted bits have very low entropy making these bits unsuitable for a secret key, (ii) an adversary can cause predictable key generation in these static environments, and (iii) in dynamic scenarios where the two devices are mobile, and or where there is a significant movement in the environment, high entropy bits are obtained fairly quickly. Building on the strengths of existing secret key extraction approaches, we develop an environment adaptive secret key generation scheme that uses an adaptive lossy quantizer in conjunction with Cascade-based information reconciliation [7] and privacy amplification [14]. Our measurements show that our scheme, in comparison to the existing ones that we evaluate, performs the best in terms of generating high entropy bits at a high bit rate. The secret key bit streams generated by our scheme also pass the randomness tests of the NIST test suite [21] that we conduct.",
"We design and analyze a method to extract secret keys from the randomness inherent to wireless channels. We study a channel model for a multipath wireless channel and exploit the channel diversity in generating secret key bits. We compare the key extraction methods based both on entire channel state information (CSI) and on single channel parameter such as the received signal strength indicators (RSSI). Due to the reduction in the degree-of-freedom when going from CSI to RSSI, the rate of key extraction based on CSI is far higher than that based on RSSI. This suggests that exploiting channel diversity and making CSI information available to higher layers would greatly benefit the secret key generation. We propose a key generation system based on low-density parity-check (LDPC) codes and describe the design and performance of two systems: one based on binary LDPC codes and the other (useful at higher signal-to-noise ratios) based on four-ary LDPC codes.",
"We provide formal definitions and efficient secure techniques for turning biometric information into keys usable for any cryptographic application, and reliably and securely authenticating biometric data.",
"Recently, several research contributions have justified that wireless communication is not only a security burden. Its unpredictable and erratic nature can also be turned against an adversary and used to augment conventional security protocols, especially key agreement. In this paper, we are inspired by promising studies on such key agreement schemes, yet aim for releasing some of their limiting assumptions. We demonstrate the feasibility of our scheme within performance-limited wireless sensor networks. The central idea is to use the reciprocity of the wireless channel response between two transceivers as a correlated random variable. Doing so over several frequencies results in a random vector from which a shared secret is extracted. By employing error correction techniques, we are able to control the trade-off between the amount of secrecy and the robustness of our key agreement protocol. To evaluate its applicability, the protocol is implemented on MicaZ sensor nodes and analyzed in indoor environments. Further, these experiments provide insights into realistic channel behavior, available information entropy, and show a high rate of successful key agreements, up to 95%.",
"For pt. II see ibid., vol.49, no.4, p.832-38 (2003). Here, we consider the special case where the legitimate partners already share a mutual string which might, however, be partially known to the adversary. The problem of generating a secret key in this case has been well studied in the passive-adversary model - for instance, in the context of quantum key agreement - under the name of privacy amplification. We consider the same problem with respect to an active adversary and propose two protocols, one based on universal hashing and one based on extractors, allowing for privacy amplification secure against an adversary whose knowledge about the initial partially secret string is limited to one third of the length of this string. Our results are based on novel techniques for authentication secure even against adversaries knowing a substantial amount of the \"secret\" key.",
"We show that the existence of one-way functions is necessary and sufficient for the existence of pseudo-random generators in the following sense. Let ƒ be an easily computable function such that when x is chosen randomly: (1) from ƒ( x ) it is hard to recover an x 1 with ƒ( x 1 ) = ƒ( x ) by a small circuit, or; (2) ƒ has small degeneracy and from ƒ( x ) it is hard to recover x by a fast algorithm. From one-way functions of type (1) or (2) we show how to construct pseudo-random generators secure against small circuits or fast algorithms, respectively, and vice-versa. Previous results show how to construct pseudo-random generators from one-way functions that have special properties ([Blum, Micali 82], [Yao 82], [Levin 85], [Goldreich, Krawczyk, Luby 88]). We use the results of [Goldreich, Levin 89] in an essential way.",
"Assuming that Alice and Bob use a secret noisy channel (modelled by a binary symmetric channel) to send a key, reconciliation is the process of correcting errors between Alice's and Bob's version of the key. This is done by public discussion, which leaks some information about the secret key to an eavesdropper. We show how to construct protocols that leak a minimum amount of information. However this construction cannot be implemented efficiently. If Alice and Bob are willing to reveal an arbitrarily small amount of additional information (beyond the minimum) then they can implement polynomial-time protocols. We also present a more efficient protocol, which leaks an amount of information acceptably close to the minimum possible for sufficiently reliable secret channels (those with probability of any symbol being transmitted incorrectly as large as 15 ). This work improves on earlier reconciliation approaches [R, BBR, BBBSS]."
]
}
|
1208.0688
|
1776432816
|
Generating keys and keeping them secret is critical in secure communications. Due to the "open-air" nature, key distribution is more susceptible to attacks in wireless communications. An ingenious solution is to generate common secret keys by two communicating parties separately without the need of key exchange or distribution, and regenerate them on needs. Recently, it is promising to extract keys by measuring the random variation in wireless channels, e.g., RSS. In this paper, we propose an efficient Secret Key Extraction protocol without Chasing down Errors, SKECE. It establishes common cryptographic keys for two communicating parties in wireless networks via the realtime measurement of Channel State Information (CSI). It outperforms RSS-based approaches for key generation in terms of multiple subcarriers measurement, perfect symmetry in channel, rapid decorrelation with distance, and high sensitivity towards environments. In the SKECE design, we also propose effective mechanisms such as the adaptive key stream generation, leakage resilient consistence validation, and weighted key recombination, to fully exploit the excellent properties of CSI. We implement SKECE on off-the-shelf 802.11n devices and evaluate its performance via extensive experiments. The results demonstrate that SKECE achieves a more than 3x throughput gain in the key generation from one subcarrier in static scenarios, and due to its high efficiency, a 50% reduction in the communication overhead compared to the state-of-the-art RSS based approaches.
|
Many other works exploit the randomness of the physical channel to generate secret keys @cite_10 @cite_3 @cite_12 . The authors in @cite_0 discuss the conditions under which secure keys can be generated and propose a solution that extracts a secret key from unauthenticated wireless channels using channel impulse response and amplitude measurements. The authors in @cite_14 summarize the processes needed for key extraction, describe their choice of method for each process, and conduct extensive experiments to characterize the properties of RSS in real environments.
|
{
"cite_N": [
"@cite_14",
"@cite_3",
"@cite_0",
"@cite_10",
"@cite_12"
],
"mid": [
"2011582944",
"2138009247",
"2054496355",
"2131110609",
"2085428487"
],
"abstract": [
"We evaluate the effectiveness of secret key extraction, for private communication between two wireless devices, from the received signal strength (RSS) variations on the wireless channel between the two devices. We use real world measurements of RSS in a variety of environments and settings. Our experimental results show that (i) in certain environments, due to lack of variations in the wireless channel, the extracted bits have very low entropy making these bits unsuitable for a secret key, (ii) an adversary can cause predictable key generation in these static environments, and (iii) in dynamic scenarios where the two devices are mobile, and or where there is a significant movement in the environment, high entropy bits are obtained fairly quickly. Building on the strengths of existing secret key extraction approaches, we develop an environment adaptive secret key generation scheme that uses an adaptive lossy quantizer in conjunction with Cascade-based information reconciliation [7] and privacy amplification [14]. Our measurements show that our scheme, in comparison to the existing ones that we evaluate, performs the best in terms of generating high entropy bits at a high bit rate. The secret key bit streams generated by our scheme also pass the randomness tests of the NIST test suite [21] that we conduct.",
"We present three unconventional approaches to keying variable management. The first approach is based on using a public key cryptosystem (PKC) that is breakable in short, but on average less, time than it takes to set up an ultrawide bandwidth modem that is then used to transport a keying variable for a classical cryptosystem. The second concept proposes using the characteristics of an urban UHF radio channel, determined by mutual sounding, as the cryptovariable. The third concept encourages research into ill-conditioned problems as potentially fruitful ground for PKCs not based on finite field arithmetic.",
"Securing communications requires the establishment of cryptographic keys, which is challenging in mobile scenarios where a key management infrastructure is not always present. In this paper, we present a protocol that allows two users to establish a common cryptographic key by exploiting special properties of the wireless channel: the underlying channel response between any two parties is unique and decorrelates rapidly in space. The established key can then be used to support security services (such as encryption) between two users. Our algorithm uses level-crossings and quantization to extract bits from correlated stochastic processes. The resulting protocol resists cryptanalysis by an eavesdropping adversary and a spoofing attack by an active adversary without requiring an authenticated channel, as is typically assumed in prior information-theoretic key establishment schemes. We evaluate our algorithm through theoretical and numerical studies, and provide validation through two complementary experimental studies. First, we use an 802.11 development platform with customized logic that extracts raw channel impulse response data from the preamble of a format-compliant 802.11a packet. We show that it is possible to practically achieve key establishment rates of 1 bit/sec in a real, indoor wireless environment. To illustrate the generality of our method, we show that our approach is equally applicable to per-packet coarse signal strength measurements using off-the-shelf 802.11 hardware.",
"The multipath-rich wireless environment associated with typical wireless usage scenarios is characterized by a fading channel response that is time-varying, location-sensitive, and uniquely shared by a given transmitter-receiver pair. The complexity associated with a richly scattering environment implies that the short-term fading process is inherently hard to predict and best modeled stochastically, with rapid decorrelation properties in space, time, and frequency. In this paper, we demonstrate how the channel state between a wireless transmitter and receiver can be used as the basis for building practical secret key generation protocols between two entities. We begin by presenting a scheme based on level crossings of the fading process, which is well-suited for the Rayleigh and Rician fading models associated with a richly scattering environment. Our level crossing algorithm is simple, and incorporates a self-authenticating mechanism to prevent adversarial manipulation of message exchanges during the protocol. Since the level crossing algorithm is best suited for fading processes that exhibit symmetry in their underlying distribution, we present a second and more powerful approach that is suited for more general channel state distributions. This second approach is motivated by observations from quantizing jointly Gaussian processes, but exploits empirical measurements to set quantization boundaries and a heuristic log likelihood ratio estimate to achieve an improved secret key generation rate. We validate both proposed protocols through experimentations using a customized 802.11a platform, and show for the typical WiFi channel that reliable secret key establishment can be accomplished at rates on the order of 10 b/s.",
"Abstract Hassan, A. A., Stark, W. E., Hershey, J. E., and Chennakeshu, S., Cryptographic Key Agreement for Mobile Radio, Digital Signal Processing 6 (1996), 207–212. The problem of establishing a mutually held secret cryptographic key using a radio channel is addressed. The performance of a particular key distribution system is evaluated for a practical mobile radio communications system. The performance measure taken is probabilistic, and different from the Shannon measure of perfect secrecy. In particular, it is shown that by using a channel decoder, the probability of two users establishing a secret key is close to one, while the probability of an adversary generating the same key is close to zero. The number of possible keys is large enough that exhaustive search is impractical."
]
}
|
1208.0688
|
1776432816
|
Generating keys and keeping them secret is critical in secure communications. Due to the "open-air" nature, key distribution is more susceptible to attacks in wireless communications. An ingenious solution is to generate common secret keys by two communicating parties separately without the need of key exchange or distribution, and regenerate them on needs. Recently, it is promising to extract keys by measuring the random variation in wireless channels, e.g., RSS. In this paper, we propose an efficient Secret Key Extraction protocol without Chasing down Errors, SKECE. It establishes common cryptographic keys for two communicating parties in wireless networks via the realtime measurement of Channel State Information (CSI). It outperforms RSS-based approaches for key generation in terms of multiple subcarriers measurement, perfect symmetry in channel, rapid decorrelation with distance, and high sensitivity towards environments. In the SKECE design, we also propose effective mechanisms such as the adaptive key stream generation, leakage resilient consistence validation, and weighted key recombination, to fully exploit the excellent properties of CSI. We implement SKECE on off-the-shelf 802.11n devices and evaluate its performance via extensive experiments. The results demonstrate that SKECE achieves a more than 3x throughput gain in the key generation from one subcarrier in static scenarios, and due to its high efficiency, a 50% reduction in the communication overhead compared to the state-of-the-art RSS based approaches.
|
Channel randomness can also be exploited for device pairing @cite_15 and authentication @cite_16 . Extracting secret keys over MIMO channels has been investigated in @cite_2 .
|
{
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_2"
],
"mid": [
"2019258915",
"2115735006",
"2122988249"
],
"abstract": [
"Forming secure associations between wireless devices that do not share a prior trust relationship is an important problem. This paper presents ProxiMate, a system that allows wireless devices in proximity to securely pair with one another autonomously by generating a common cryptographic key directly from their shared time-varying wireless environment. The shared key synthesized by ProxiMate can be used by the devices to authenticate each others' physical proximity and then to communicate confidentially. Unlike traditional pairing approaches such as Diffie-Hellman, ProxiMate is secure against a computationally unbounded adversary and its computational complexity is linear in the size of the key. We evaluate ProxiMate using an experimental prototype built using an open-source software-defined platform and demonstrate its effectiveness in generating common secret bits. We further show that it is possible to speed up secret key synthesis by monitoring multiple RF sources simultaneously or by shaking together the devices that need to be paired. Finally, we show that ProxiMate is resistant to even the most powerful attacker who controls the public RF source used by the legitimate devices for pairing.",
"The wireless medium contains domain-specific information that can be used to complement and enhance traditional security mechanisms. In this paper we propose ways to exploit the spatial variability of the radio channel response in a rich scattering environment, as is typical of indoor environments. Specifically, we describe a physical-layer authentication algorithm that utilizes channel probing and hypothesis testing to determine whether current and prior communication attempts are made by the same transmit terminal. In this way, legitimate users can be reliably authenticated and false users can be reliably detected. We analyze the ability of a receiver to discriminate between transmitters (users) according to their channel frequency responses. This work is based on a generalized channel response with both spatial and temporal variability, and considers correlations among the time, frequency and spatial domains. Simulation results, using the ray-tracing tool WiSE to generate the time-averaged response, verify the efficacy of the approach under realistic channel conditions, as well as its capability to work under unknown channel variations.",
"Information theoretic limits for random key generation in multiple-input multiple-output (MIMO) wireless systems exhibiting a reciprocal channel response are investigated experimentally with a new three-node MIMO measurement campaign. As background, simple expressions are presented for the number of available key bits, as well as the number of bits that are secure from a close eavesdropper. Two methods for generating secret keys are analyzed in the context of MIMO channels and their mismatch rate and efficiency are derived. A new wideband indoor MIMO measurement campaign in the 2.51- to 2.59-GHz band is presented, whose purpose is to study the number of available key bits in both line-of-sight and nonline-of-sight environments. Application of the key generation methods to measured propagation channels indicates key generation rates that can be obtained in practice for four-element arrays."
]
}
|
1208.0505
|
2950920755
|
We study delay tolerant networking (DTN) and in particular, its capacity to store, carry and forward messages so that the messages eventually reach their final destinations. We approach this broad question in the framework of percolation theory. To this end, we assume an elementary mobility model, where nodes arrive to an infinite plane according to a Poisson point process, move a certain distance L, and then depart. In this setting, we characterize the mean density of nodes required to support DTN style networking. In particular, under the given assumptions, we show that DTN is feasible when the mean node degree is greater than 4 e(g), where parameter g = L/d is the ratio of the distance L to the transmission range d, and e(g) is the critical reduced number density of tilted cylinders in a directed continuum percolation model. By means of Monte Carlo simulations, we give numerical values for e(g). The asymptotic behavior of e(g) when g tends to infinity is also derived from a fluid flow analysis.
|
We assume epidemic routing, which was proposed by Vahdat and Becker in @cite_2 . The operational principle is very simple: whenever two nodes meet, they exchange the messages that only one of them has. Performance analyses of epidemic routing often assume exponentially distributed (i.i.d.) inter-meeting times, leading to Markovian models @cite_7 . As the size of the network grows, solving Markov chains becomes infeasible, and the authors of @cite_21 obtained ODEs as a fluid limit of such Markovian models. In the above works, the spatial dimension has been abstracted away.
|
{
"cite_N": [
"@cite_21",
"@cite_7",
"@cite_2"
],
"mid": [
"2109528718",
"2105273407",
"1572481965"
],
"abstract": [
"In this paper, we develop a rigorous, unified framework based on ordinary differential equations (ODEs) to study epidemic routing and its variations. These ODEs can be derived as limits of Markovian models under a natural scaling as the number of nodes increases. While an analytical study of Markovian models is quite complex and numerical solution impractical for large networks, the corresponding ODE models yield closed-form expressions for several performance metrics of interest, and a numerical solution complexity that does not increase with the number of nodes. Using this ODE approach, we investigate how resources such as buffer space and the number of copies made for a packet can be traded for faster delivery, illustrating the differences among various forwarding and recovery schemes considered. We perform model validations through simulation studies. Finally we consider the effect of buffer management by complementing the forwarding models with Markovian and fluid buffer models.",
"A stochastic model is introduced that accurately models the message delay in mobile ad hoc networks where nodes relay messages and the networks are sparsely populated. The model has only two input parameters: the number of nodes and the parameter of an exponential distribution which describes the time until two random mobiles come within communication range of one another. Closed-form expressions are obtained for the Laplace-Stieltjes transform of the message delay, defined as the time needed to transfer a message between a source and a destination. From this we derive both a closed-form expression and an asymptotic approximation (as a function of the number of nodes) of the expected message delay. As an additional result, the probability distribution function is obtained for the number of copies of the message at the time the message is delivered. These calculations are carried out for two protocols: the two-hop multicopy and the unrestricted multicopy protocols. It is shown that despite its simplicity, the model accurately predicts the message delay for both relay strategies for a number of mobility models (the random waypoint, random direction and the random walker mobility models).",
"Mobile ad hoc routing protocols allow nodes with wireless adaptors to communicate with one another without any pre-existing network infrastructure. Existing ad hoc routing protocols, while robust to rapidly changing network topology, assume the presence of a connected path from source to destination. Given power limitations, the advent of short-range wireless networks, and the wide physical conditions over which ad hoc networks must be deployed, in some scenarios it is likely that this assumption is invalid. In this work, we develop techniques to deliver messages in the case where there is never a connected path from source to destination or when a network partition exists at the time a message is originated. To this end, we introduce Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery. The goals of Epidemic Routing are to: i) maximize message delivery rate, ii) minimize message latency, and iii) minimize the total resources consumed in message delivery. Through an implementation in the Monarch simulator, we show that Epidemic Routing achieves eventual delivery of 100% of messages with reasonable aggregate resource consumption in a number of interesting scenarios."
]
}
|
1208.0505
|
2950920755
|
We study delay tolerant networking (DTN) and in particular, its capacity to store, carry and forward messages so that the messages eventually reach their final destinations. We approach this broad question in the framework of percolation theory. To this end, we assume an elementary mobility model, where nodes arrive to an infinite plane according to a Poisson point process, move a certain distance L, and then depart. In this setting, we characterize the mean density of nodes required to support DTN style networking. In particular, under the given assumptions, we show that DTN is feasible when the mean node degree is greater than 4 e(g), where parameter g=L/d is the ratio of the distance L to the transmission range d, and e(g) is the critical reduced number density of tilted cylinders in a directed continuum percolation model. By means of Monte Carlo simulations, we give numerical values for e(g). The asymptotic behavior of e(g) when g tends to infinity is also derived from a fluid flow analysis.
|
@cite_16 consider the propagation speed of information in DTNs for a fixed set of nodes moving in a finite region; the spatial dimension is explicitly present in their formulation. Somewhat related, Grossglauser and Tse @cite_18 studied the performance of ad hoc networks under mobility. They focused on connected networks, where nodes communicate either directly or via multi-hop connections, and showed that mobility increases the overall capacity of the network, provided that users tolerate additional delay.
|
{
"cite_N": [
"@cite_18",
"@cite_16"
],
"mid": [
"2149959815",
"1483953177"
],
"abstract": [
"The capacity of ad hoc wireless networks is constrained by the mutual interference of concurrent transmissions between nodes. We study a model of an ad hoc network where n nodes communicate in random source-destination pairs. These nodes are assumed to be mobile. We examine the per-session throughput for applications with loose delay constraints, such that the topology changes over the time-scale of packet delivery. Under this assumption, the per-user throughput can increase dramatically when nodes are mobile rather than fixed. This improvement can be achieved by exploiting a form of multiuser diversity via packet relaying.",
"The goal of this paper is to increase our understanding of the fundamental performance limits of mobile and Delay Tolerant Networks (DTNs), where end-to-end multihop paths may not exist and communication routes may only be available through time and mobility. We use analytical tools to derive generic theoretical upper bounds for the information propagation speed in large scale mobile and intermittently connected networks. In other words, we upper-bound the optimal performance, in terms of delay, that can be achieved using any routing algorithm. We then show how our analysis can be applied to specific mobility models to obtain specific analytical estimates. In particular, in 2-D networks, when nodes move at a maximum speed v and their density is small (the network is sparse and asymptotically almost surely disconnected), we prove that the information propagation speed is upper bounded by (1 + O(v2))v in random waypoint-like models, while it is upper bounded by O(√vvv) for other mobility models (random walk, Brownian motion). We also present simulations that confirm the validity of the bounds in these scenarios. Finally, we generalize our results to 1-D and 3-D networks."
]
}
|
1208.0505
|
2950920755
|
We study delay tolerant networking (DTN) and in particular, its capacity to store, carry and forward messages so that the messages eventually reach their final destinations. We approach this broad question in the framework of percolation theory. To this end, we assume an elementary mobility model, where nodes arrive to an infinite plane according to a Poisson point process, move a certain distance L, and then depart. In this setting, we characterize the mean density of nodes required to support DTN style networking. In particular, under the given assumptions, we show that DTN is feasible when the mean node degree is greater than 4 e(g), where parameter g=L/d is the ratio of the distance L to the transmission range d, and e(g) is the critical reduced number density of tilted cylinders in a directed continuum percolation model. By means of Monte Carlo simulations, we give numerical values for e(g). The asymptotic behavior of e(g) when g tends to infinity is also derived from a fluid flow analysis.
|
In contrast, in @cite_5 we showed that opportunistic content dissemination schemes, such as floating content, can be analyzed using a three-dimensional continuum percolation model. In this paper, we adopt the same approach, but instead of approximating the process by an undirected percolation model, we characterize the dissemination of messages in a DTN exactly by means of a directed percolation model. Fewer results are available for directed percolation models than for ordinary undirected percolation, and results for directed continuum models are scarcer still. The basic directed cases are the bond and site percolation models, where, e.g., bonds have fixed directions (e.g., up and right in a square lattice) @cite_15 @cite_14 . Directed models have also been studied in the context of scale-free networks @cite_6 , which arise in the Internet, social networks, etc.
|
{
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_14",
"@cite_6"
],
"mid": [
"1978084230",
"2022491887",
"",
"2026313208"
],
"abstract": [
"We consider the critical percolation threshold for aligned cylinders, which provides a lower bound for the required node degree for the permanence of information in opportunistic networking. The height of a cylinder corresponds to the time a node is active in its current location. By means of Monte Carlo simulations, we obtain an accurate numerical estimate for the critical reduced number density, ηc ≈ 0.3312(1) for constant height cylinders. This threshold is the same for all ratios of the height to the diameter of the base, and corresponds to the mean node degree of 1.3248 in opportunistic networking, which is clearly below the percolation threshold of 4.51 above which a gigantic connected component emerges in the network.",
"A method for conversion of directed site animals to their bond counterparts is used to significantly extend the number of such configurations in 2, 3, 4 dimensions, obtain their loop partition and study the singularity structure (for both dominant and subdominant singularities) of the corresponding dilute polymer transitions. A special property of bond directed configurations (equivalence lence of loop and bond perimeter partition) is exploited to obtain longer susceptibility series in 3 and 4 dimensions and the gap exponent estimate is refined into the intervals TANO",
"",
"Many complex networks in nature have directed links, a property that affects the network’s navigability and large-scale topology. Here we study the percolation properties of such directed scale-free networks with correlated in and out degree distributions. We derive a phase diagram that indicates the existence of three regimes, determined by the values of the degree exponents. In the first regime we regain the known directed percolation mean field exponents. In contrast, the second and third regimes are characterized by anomalous exponents, which we calculate analytically. In the third regime the network is resilient to random dilution, i.e., the percolation threshold is pc!1."
]
}
|
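For intuition on the percolation condition in the row above (a giant component emerging once the mean node degree exceeds a critical value), here is a minimal Monte Carlo sketch of *undirected* 2D continuum percolation with disks. This is a simplified analogue for illustration only, not the directed 3D cylinder model analyzed in the paper; all function names and parameter values are assumptions of this sketch.

```python
import random
from collections import Counter

def largest_cluster_fraction(n_nodes, box, radius, seed=0):
    """Drop n_nodes points uniformly in a box x box square, connect every
    pair within `radius` (the transmission range), and return the fraction
    of nodes in the largest connected component.

    The mean node degree is roughly n_nodes * pi * radius**2 / box**2
    (ignoring edge effects); in undirected 2D disk percolation a giant
    component emerges once this degree exceeds ~4.51, the threshold
    quoted in the cited abstract.
    """
    rng = random.Random(seed)
    pts = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(n_nodes)]

    # Union-find over the points, merging pairs that are within range.
    parent = list(range(n_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    r2 = radius * radius
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            if dx * dx + dy * dy <= r2:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    sizes = Counter(find(i) for i in range(n_nodes))
    return max(sizes.values()) / n_nodes
```

With 500 nodes in a 10 x 10 box, radius 1.0 gives mean degree about 15.7 (well above threshold, so nearly all nodes join one cluster), while radius 0.2 gives about 0.63 (far below threshold, so the network stays fragmented).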
1208.0412
|
2951282654
|
Comparing to well protected data frames, Wi-Fi management frames (MFs) are extremely vulnerable to various attacks. Since MFs are transmitted without encryption, attackers can forge them easily. Such attacks can be detected in cooperative environment such as Wireless Intrusion Detection System (WIDS). However, in non-cooperative environment it is difficult for a single station to identify these spoofing attacks using Received Signal Strength (RSS)-based detection, due to the strong correlation of RSS to both the transmission power (Txpower) and the location of the sender. By exploiting some unique characteristics (i.e., rapid spatial decorrelation, independence of Txpower, and much richer dimensions) of the Channel State Information (CSI), a standard feature in 802.11n Specification, we design a prototype, called CSITE, to authenticate the Wi-Fi management frames by a single station without external support. Our design CSITE, built upon off-the-shelf hardware, achieves precise spoofing detection without collaboration and in-advance finger-print. Several novel techniques are designed to address the challenges caused by user mobility and channel dynamics. To verify the performances of our solution, we implement a prototype of our design and conduct extensive evaluations in various scenarios. Our test results show that our design significantly outperforms the RSS-based method in terms of accuracy, robustness, and efficiency: we observe about 8 times improvement by CSITE over RSS-based method on the falsely accepted attacking frames.
|
There is growing interest in authentication, location distinction, and even localization based on physical-layer information. Channel Impulse Response (CIR) has been used to provide robust location distinction in @cite_20 @cite_21 . Several works @cite_26 @cite_23 @cite_3 go further and attempt precise indoor localization, either by identifying the line-of-sight components or by extracting cluster information from CSI.
|
{
"cite_N": [
"@cite_26",
"@cite_21",
"@cite_3",
"@cite_23",
"@cite_20"
],
"mid": [
"2006488982",
"2166735779",
"2183249040",
"2005059864",
"2123541927"
],
"abstract": [
"The rapid growth of location-based applications has spurred extensive research on localization. Nonetheless, indoor localization remains an elusive problem mostly because the accurate techniques come at the expense of cumbersome war-driving or additional infrastructure. Towards a solution that is easier to adopt, we propose SpinLoc that is free from these requirements. Instead, SpinLoc levies a little bit of the localization burden on the humans, expecting them to rotate around once to estimate their locations. Our main observation is that wireless signals attenuate differently, based on how the human body is blocking the signal. We find that this attenuation can reveal the directions of the APs in indoor environments, ultimately leading to localization. This paper studies the feasibility of SpinLoc in real-world indoor environments using off-the-shelf WiFi hardware. Our preliminary evaluation demonstrates accuracies comparable toschemes that rely on expensive war-driving.",
"Location distinction is the ability to determine when a device has changed its position. We explore the opportunity to use sophisticated PHY-layer measurements in wireless networking systems for location distinction. We first compare two existing location distinction methods - one based on channel gains of multi-tonal probes, and another on channel impulse response. Next, we combine the benefits of these two methods to develop a new link measurement that we call the complex temporal signature. We use a 2.4 GHz link measurement data set, obtained from CRAWDAD [10], to evaluate the three location distinction methods. We find that the complex temporal signature method performs significantly better compared to the existing methods. We also perform new measurements to understand and model the temporal behavior of link signatures over time. We integrate our model in our location distinction mechanism and significantly reduce the probability of false alarms due to temporal variations of link signatures.",
"This paper explores the viability of precise indoor localization using physical layer information in WiFi systems. We find evidence that channel responses from multiple OFDM subcarriers can be a promising location signature. While these signatures certainly vary over time and environmental mobility, we notice that their core structure preserves certain properties that are amenable to localization. We attempt to harness these opportunities through a functional system called PinLoc, implemented on off-the-shelf Intel 5300 cards. We evaluate the system in a busy engineering building, a crowded student center, a cafeteria, and at the Duke University museum, and demonstrate localization accuracies in the granularity of 1m x 1m boxes, called “spots”. Results from 100 spots show that PinLoc is able to localize users to the correct spot with 89 mean accuracy, while incurring less than 6 false positives. We believe this is an important step forward, compared to the best indoor localization schemes of today, such as Horus.",
"Indoor positioning systems have received increasing attention for supporting location-based services in indoor environments. WiFi-based indoor localization has been attractive due to its open access and low cost properties. However, the distance estimation based on received signal strength indicator (RSSI) is easily affected by the temporal and spatial variance due to the multipath effect, which contributes to most of the estimation errors in current systems. How to eliminate such effect so as to enhance the indoor localization performance is a big challenge. In this work, we analyze this effect across the physical layer and account for the undesirable RSSI readings being reported. We explore the frequency diversity of the subcarriers in OFDM systems and propose a novel approach called FILA, which leverages the channel state information (CSI) to alleviate multipath effect at the receiver. We implement the FILA system on commercial 802.11 NICs, and then evaluate its performance in different typical indoor scenarios. The experimental results show that the accuracy and latency of distance calculation can be significantly enhanced by using CSI. Moreover, FILA can significantly improve the localization accuracy compared with the corresponding RSSI approach.",
"The ability of a receiver to determine when a transmitter has changed location is important for energy conservation in wireless sensor networks, for physical security of radio-tagged objects, and for wireless network security in detection of replication attacks. In this paper, we propose using a measured temporal link signature to uniquely identify the link between a transmitter and a receiver. When the transmitter changes location, or if an attacker at a different location assumes the identity of the transmitter, the proposed link distinction algorithm reliably detects the change in the physical channel. This detection can be performed at a single receiver or collaboratively by multiple receivers. We record over 9,000 link signatures at different locations and over time to demonstrate that our method significantly increases the detection rate and reduces the false alarm rate, in comparison to existing methods."
]
}
|
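The CSI-based spoofing detection discussed in the rows above can be illustrated with a toy link-signature check. This is a hedged sketch, not CSITE's actual detector: it treats each frame's CSI as a complex vector, measures its normalized correlation against the last authenticated signature, and flags frames that decorrelate; the threshold of 0.9 and all names are illustrative assumptions.

```python
def similarity(csi_a, csi_b):
    """Normalized magnitude of the inner product of two complex CSI
    vectors: 1.0 means identical up to a common phase/scale factor,
    values near 0 mean the channels are unrelated."""
    dot = sum(a * b.conjugate() for a, b in zip(csi_a, csi_b))
    norm_a = sum(abs(a) ** 2 for a in csi_a) ** 0.5
    norm_b = sum(abs(b) ** 2 for b in csi_b) ** 0.5
    return abs(dot) / (norm_a * norm_b)

def is_spoofed(reference_csi, frame_csi, threshold=0.9):
    """Flag a management frame whose CSI decorrelates from the stored
    reference signature (illustrative threshold)."""
    return similarity(reference_csi, frame_csi) < threshold
```

A genuine retransmission from the same spot correlates almost perfectly (a pure Tx-power change only scales the vector, leaving the similarity at 1.0), while a frame from a different location decorrelates rapidly in space, which is the property the row's abstract attributes to CSI.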
1208.0412
|
2951282654
|
Comparing to well protected data frames, Wi-Fi management frames (MFs) are extremely vulnerable to various attacks. Since MFs are transmitted without encryption, attackers can forge them easily. Such attacks can be detected in cooperative environment such as Wireless Intrusion Detection System (WIDS). However, in non-cooperative environment it is difficult for a single station to identify these spoofing attacks using Received Signal Strength (RSS)-based detection, due to the strong correlation of RSS to both the transmission power (Txpower) and the location of the sender. By exploiting some unique characteristics (i.e., rapid spatial decorrelation, independence of Txpower, and much richer dimensions) of the Channel State Information (CSI), a standard feature in 802.11n Specification, we design a prototype, called CSITE, to authenticate the Wi-Fi management frames by a single station without external support. Our design CSITE, built upon off-the-shelf hardware, achieves precise spoofing detection without collaboration and in-advance finger-print. Several novel techniques are designed to address the challenges caused by user mobility and channel dynamics. To verify the performances of our solution, we implement a prototype of our design and conduct extensive evaluations in various scenarios. Our test results show that our design significantly outperforms the RSS-based method in terms of accuracy, robustness, and efficiency: we observe about 8 times improvement by CSITE over RSS-based method on the falsely accepted attacking frames.
|
A new attack against PHY-layer authentication, called the mimicry attack, was identified in @cite_19 . However, such an attack is neither easy to launch nor likely to succeed, owing in particular to the MIMO configuration, which introduces much richer channel information.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"2092266108"
],
"abstract": [
"Wireless link signature is a physical layer authentication mechanism, which uses the unique wireless channel characteristics between a transmitter and a receiver to provide authentication of wireless channels. A vulnerability of existing link signature schemes has been identified by introducing a new attack, called mimicry attack. To defend against the mimicry attack, we propose a novel construction for wireless link signature, called time-synched link signature, by integrating cryptographic protection and time factor into traditional wireless link signatures. We also evaluate the mimicry attacks and the time-synched link signature scheme on the USRP2 platform running GNURadio. The experimental results demonstrate the effectiveness of time-synched link signature."
]
}
|
1208.0396
|
1538935181
|
We present a practical algorithm for the cyclic longest common subsequence (CLCS) problem that runs in O(mn) time, where m and n are the lengths of the two input strings. While this is not necessarily an asymptotic improvement over the existing record, it is far simpler to understand and to implement.
|
The cyclic version of @math has seen more recent improvements. The first major improvement comes from @cite_1 , which uses a divide-and-conquer strategy to solve the problem in @math time. Since then, this approach has been applied to generalizations of the problem, and practical optimizations have been discovered; see @cite_2 for example. We describe the setup used in these papers in our preliminaries, though we build a different approach on top of this setup to arrive at our new algorithm. @cite_0 provides a solution that runs in @math time on arbitrary inputs, with asymptotically better performance on similar inputs; however, that solution requires a considerable amount of machinery both to understand and to implement.
|
{
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_2"
],
"mid": [
"",
"2021202955",
"2122499640"
],
"abstract": [
"",
"We present an O(nm log m) algorithm to solve the string-to-string correction problem for cyclic strings.",
"The matching of planar shapes can be cast as a problem of finding the shortest path through a graph spanned by the two shapes, where the nodes of the graph encode the local similarity of respective points on each contour. While this problem can be solved using dynamic time warping, the complete search over the initial correspondence leads to cubic runtime in the number of sample points. In this paper, we cast the shape matching problem as one of finding the shortest circular path on a torus. We propose an algorithm to determine this shortest cycle which has provably sub-cubic runtime. Numerical experiments demonstrate that the proposed algorithm provides faster shape matching than previous methods. As an application, we show that it allows to efficiently compute a clustering of a shape data base."
]
}
|
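The CLCS discussion in the row above can be made concrete with a short sketch: `lcs` is the textbook O(mn) dynamic program, and `clcs_naive` is the brute-force O(m^2 n) rotation search that the cited sub-cubic algorithms (and the row's O(mn) algorithm) improve upon. It is not any of those algorithms, just the problem's definition in runnable form.

```python
def lcs(a: str, b: str) -> int:
    """Length of the longest common subsequence of a and b,
    via the classic O(m*n) dynamic program."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def clcs_naive(a: str, b: str) -> int:
    """Cyclic LCS by brute force: take the best LCS over every rotation
    of a.  This O(m^2 * n) baseline is what the cited divide-and-conquer
    and O(mn) algorithms improve upon."""
    return max(lcs(a[i:] + a[:i], b) for i in range(len(a)))
```

For example, rotating "aabb" to "bbaa" makes it match "bbaa" entirely, so the cyclic LCS is 4 even though the plain LCS of "aabb" and "bbaa" is only 2.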