| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1401.6399
|
2951870329
|
Sorted lists of integers are commonly used in inverted indexes and database systems. They are often compressed in memory. We can use the SIMD instructions available in common processors to boost the speed of integer compression schemes. Our S4-BP128-D4 scheme uses as little as 0.7 CPU cycles per decoded integer while still providing state-of-the-art compression. However, if the subsequent processing of the integers is slow, the effort spent on optimizing decoding speed can be wasted. To show that it does not have to be so, we (1) vectorize and optimize the intersection of posting lists; (2) introduce the SIMD Galloping algorithm. We exploit the fact that one SIMD instruction can compare 4 pairs of integers at once. We experiment with two TREC text collections, GOV2 and ClueWeb09 (Category B), using logs from the TREC million-query track. We show that using only the SIMD instructions ubiquitous in all modern CPUs, our techniques for conjunctive queries can double the speed of a state-of-the-art approach.
|
Our work is focused on commodity desktop processors. Compression and intersection of integer lists using a graphics processing unit (GPU) has also received attention. @cite_18 improved the intersection speed: essentially, they divide up one list into small blocks and intersect these blocks in parallel with the other array. On conjunctive queries, @cite_18 found their GPU implementation to be only marginally superior to a CPU implementation ( @math 15 faster), even though the data was already loaded in the GPU's global memory. They do, however, get impressive speed gains ( @math ) on disjunctive queries.
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"2170707555"
],
"abstract": [
"Web search engines are facing formidable performance challenges due to data sizes and query loads. The major engines have to process tens of thousands of queries per second over tens of billions of documents. To deal with this heavy workload, such engines employ massively parallel systems consisting of thousands of machines. The significant cost of operating these systems has motivated a lot of recent research into more efficient query processing mechanisms. We investigate a new way to build such high performance IR systems using graphical processing units (GPUs). GPUs were originally designed to accelerate computer graphics applications through massive on-chip parallelism. Recently a number of researchers have studied how to use GPUs for other problem domains such as databases and scientific computing. Our contribution here is to design a basic system architecture for GPU-based high-performance IR, to develop suitable algorithms for subtasks such as inverted list compression, list intersection, and top- @math scoring, and to show how to achieve highly efficient query processing on GPU-based systems. Our experimental results for a prototype GPU-based system on @math million web pages indicate that significant gains in query processing performance can be obtained."
]
}
|
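The SIMD Galloping algorithm described in this row's abstract vectorizes the classic galloping (exponential) search. A scalar Python sketch of the underlying idea for intersecting two sorted posting lists (function names are mine, not from the paper; the real implementation compares 4 pairs of integers per SIMD instruction):

```python
import bisect

def gallop_search(arr, lo, target):
    """Find the leftmost index >= lo where arr[idx] >= target,
    by doubling the probe distance (galloping), then binary search."""
    step = 1
    hi = lo
    while hi < len(arr) and arr[hi] < target:
        lo = hi + 1
        hi += step
        step *= 2  # exponential probing keeps cost logarithmic in the gap
    return bisect.bisect_left(arr, target, lo, min(hi, len(arr)))

def intersect_galloping(small, large):
    """Intersect two sorted integer lists; gallop through the larger one.
    Efficient when one list is much shorter than the other."""
    out, pos = [], 0
    for x in small:
        pos = gallop_search(large, pos, x)
        if pos < len(large) and large[pos] == x:
            out.append(x)
            pos += 1
    return out
```

For example, `intersect_galloping([2, 5, 9], [1, 2, 3, 5, 8, 9, 10])` returns `[2, 5, 9]`, touching only a logarithmic number of entries of the larger list per probe.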
1401.5742
|
2345718780
|
This paper examines the close interplay between cooperation and adaptation for distributed detection schemes over fully decentralized networks. The combined attributes of cooperation and adaptation are necessary to enable networks of detectors to continually learn from streaming data and to continually track drifts in the state of nature when deciding in favor of one hypothesis or another. The results in this paper establish a fundamental scaling law for the steady-state probabilities of miss detection and false alarm in the slow adaptation regime, when the agents interact with each other according to distributed strategies that employ small constant step-sizes. The latter are critical to enable continuous adaptation and learning. This paper establishes three key results. First, it is shown that the output of the collaborative process at each agent has a steady-state distribution. Second, it is shown that this distribution is asymptotically Gaussian in the slow adaptation regime of small step-sizes. Third, by carrying out a detailed large deviations analysis, closed-form expressions are derived for the decaying rates of the false-alarm and miss-detection probabilities. Interesting insights are gained from these expressions. In particular, it is verified that as the step-size @math decreases, the error probabilities are driven to zero exponentially fast as functions of @math , and that the exponents governing the decay increase linearly in the number of agents. It is also verified that the scaling laws governing the errors of detection and the errors of estimation over the network behave very differently, with the former having exponential decay proportional to @math , while the latter scales linearly with decay proportional to @math . Moreover, and interestingly, it is shown that the cooperative strategy allows each agent to reach the same detection performance, in terms of detection error exponents, of a centralized stochastic-gradient solution. 
The results of this paper are illustrated by applying them to canonical distributed detection problems.
|
The literature on distributed detection is rich; see, e.g., @cite_38 @cite_44 @cite_16 @cite_17 @cite_7 @cite_6 @cite_28 @cite_10 as useful entry points on the topic. A distinguishing feature of our approach is its emphasis on adaptive distributed detection techniques that respond to streaming data in real time. We address this challenging problem in the fully decentralized setting, where no fusion center is admitted and the agents cooperate through local interaction and consultation steps.
|
{
"cite_N": [
"@cite_38",
"@cite_7",
"@cite_28",
"@cite_6",
"@cite_44",
"@cite_16",
"@cite_10",
"@cite_17"
],
"mid": [
"2163382065",
"2133380705",
"1982401839",
"1977862970",
"1525038591",
"2115758294",
"2100169223",
"2098188576"
],
"abstract": [
"In a decentralized hypothesis testing network, several peripheral nodes observe an environment and communicate their observations to a central node for the final decision. The presence of capacity constraints introduces theoretical and practical problems. The following problem is addressed: given that the peripheral encoders that satisfy these constraints are scalar quantizers, how should they be designed in order that the central test to be performed on their output indices is most powerful? The scheme is called cooperative design-separate encoding since the quantizers process separate observations but have a common goal; they seek to maximize a system-wide performance measure. The Bhattacharyya distance of the joint index space as such a criterion is suggested, and a design algorithm to optimize arbitrarily many quantizers cyclically is proposed. A simplified version of the algorithm, namely an independent design-separate encoding scheme, where the correlation is either absent or neglected for the sake of simplicity, is outlined. Performances are compared through worked examples.",
"In this paper, we investigate a binary decentralized detection problem in which a network of wireless sensors provides relevant information about the state of nature to a fusion center. Each sensor transmits its data over a multiple access channel. Upon reception of the information, the fusion center attempts to accurately reconstruct the state of nature. We consider the scenario where the sensor network is constrained by the capacity of the wireless channel over which the sensors are transmitting, and we study the structure of an optimal sensor configuration. For the problem of detecting deterministic signals in additive Gaussian noise, we show that having a set of identical binary sensors is asymptotically optimal, as the number of observations per sensor goes to infinity. Thus, the gain offered by having more sensors exceeds the benefits of getting detailed information from each sensor. A thorough analysis of the Gaussian case is presented along with some extensions to other observation distributions.",
"This paper reviews the classical decentralized decision theory in the light of new constraints and requirements. The central theme that transcends various aspects of signal processing design is that an integrated channel-aware approach needs to be taken for optimal detection performance given the available resources.",
"Detection problems provide a productive starting point for the study of more general statistical inference problems in sensor networks. In this article, the classical framework for decentralized detection is reviewed and argued that, while this framework provides a useful basis for developing a theory for detection in sensor networks, it has serious limitations. The classical framework does not adequately take into account important features of sensor technology and of the communication link between the sensors and the fusion center. An alternative framework for detection in sensor networks that has emerged over the last few years is discussed. Several design and optimization strategies may be gleaned from this new framework",
"1 Introduction.- 1.1 Distributed Detection Systems.- 1.2 Outline of the Book.- 2 Elements of Detection Theory.- 2.1 Introduction.- 2.2 Bayesian Detection Theory.- 2.3 Minimax Detection.- 2.4 Neyman-Pearson Test.- 2.5 Sequential Detection.- 2.6 Constant False Alarm Rate (CFAR) Detection.- 2.7 Locally Optimum Detection.- 3 Distributed Bayesian Detection: Parallel Fusion Network.- 3.1 Introduction.- 3.2 Distributed Detection Without Fusion.- 3.3 Design of Fusion Rules.- 3.4 Detection with Parallel Fusion Network.- 4 Distributed Bayesian Detection: Other Network Topologies.- 4.1 Introduction.- 4.2 The Serial Network.- 4.3 Tree Networks.- 4.4 Detection Networks with Feedback.- 4.5 Generalized Formulation for Detection Networks.- 5 Distributed Detection with False Alarm Rate Constraints.- 5.1 Introduction.- 5.2 Distributed Neyman-Pearson Detection.- 5.3 Distributed CFAR Detection.- 5.4 Distributed Detection of Weak Signals.- 6 Distributed Sequential Detection.- 6.1 Introduction.- 6.2 Sequential Test Performed at the Sensors.- 6.3 Sequential Test Performed at the Fusion Center.- 7 Information Theory and Distributed Hypothesis Testing.- 7.1 Introduction.- 7.2 Distributed Detection Based on Information Theoretic Criterion.- 7.3 Multiterminal Detection with Data Compression.- Selected Bibliography.",
"In this paper basic results on distributed detection are reviewed. In particular we consider the parallel and the serial architectures in some detail and discuss the decision rules obtained from their optimization based an the Neyman-Pearson (NP) criterion and the Bayes formulation. For conditionally independent sensor observations, the optimality of the likelihood ratio test (LRT) at the sensors is established. General comments on several important issues are made including the computational complexity of obtaining the optimal solutions the design of detection networks with more general topologies, and applications to different areas.",
"We consider the problem of classifying among a set of M hypotheses via distributed noisy sensors. The sensors can collaborate over a communication network and the task is to arrive at a consensus about the event after exchanging messages. We apply a variant of belief propagation as a strategy for collaboration to arrive at a solution to the distributed classification problem. We show that the message evolution can be reformulated as the evolution of a linear dynamical system, which is primarily characterized by network connectivity. We show that a consensus to the centralized maximum a posteriori (MAP) estimate can almost always reached by the sensors for any arbitrary network. We then extend these results in several directions. First, we demonstrate that these results continue to hold with quantization of the messages, which is appealing from the point of view of finite bit rates supportable between links. We then demonstrate robustness against packet losses, which implies that optimal decisions can be achieved with asynchronous transmissions as well. Next, we present an account of energy requirements for distributed detection and demonstrate significant improvement over conventional decentralized detection. Finally, extensions to distributed estimation are described",
"Following the foundational work that established basic ideas for optimum distributed defection schemes using multiple sensors (as reviewed in Part I of this two-part review), further work on distributed detection has developed many useful and interesting extensions of the basic concepts. These more recent developments parallel those that arose from the early work on centralized, classical signal detection, resulting in new ideas of asymptotically optimum nonparametric, robust, and sequential centralized detection. Recent developments on these topics in the setting of distributed signal detection are reviewed in the present paper. Results in these directions are important in practice because they allow cases of modeling uncertainty to be addressed, and they provide more efficient detection schemes by optimizing more general performance criteria."
]
}
|
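As an illustration of the adapt-then-combine cooperation described in this row, here is a minimal simulation sketch (the ring network, uniform weights, and threshold are my assumptions, not the paper's model): each agent smooths its streaming observations with a small constant step-size mu, then averages with its ring neighbours, and the steady-state statistic concentrates around the true signal mean.

```python
import random

def diffuse_detect(n_agents, mu, n_steps, signal, noise_std, seed=0):
    """Adapt-then-combine diffusion over a ring network:
    each agent smooths its noisy observations with step-size mu
    (adaptation), then averages with its two ring neighbours
    (combination)."""
    rng = random.Random(seed)
    y = [0.0] * n_agents
    for _ in range(n_steps):
        # adaptation: small constant step-size tracks the streaming data
        y = [(1 - mu) * yk + mu * (signal + rng.gauss(0, noise_std))
             for yk in y]
        # combination: uniform averaging with ring neighbours
        # (y[k - 1] wraps around via Python's negative indexing)
        y = [(y[k - 1] + y[k] + y[(k + 1) % n_agents]) / 3
             for k in range(n_agents)]
    return y

# decide H1 (signal present) when the steady-state statistic exceeds
# a threshold of 0.5 (hypothetical choice for this sketch)
stats = diffuse_detect(n_agents=10, mu=0.05, n_steps=2000,
                       signal=1.0, noise_std=2.0)
decisions = [s > 0.5 for s in stats]
```

Shrinking mu reduces the steady-state fluctuation around the signal mean (at the price of slower adaptation), which is the regime in which the paper's error exponents are derived.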
1401.5522
|
2952178437
|
This paper introduces hybrid LU-QR algorithms for solving dense linear systems of the form Ax = b. Throughout a matrix factorization, these algorithms dynamically alternate LU with local pivoting and QR elimination steps, based upon some robustness criterion. LU elimination steps can be very efficiently parallelized, and are twice as cheap, in terms of floating-point operations, as QR steps. However, LU steps are not necessarily stable, while QR steps are always stable. The hybrid algorithms execute a QR step when a robustness criterion detects some risk for instability, and they execute an LU step otherwise. Ideally, the choice between LU and QR steps must have a small computational overhead and must provide a satisfactory level of stability with as few QR steps as possible. In this paper, we introduce several robustness criteria and we establish upper bounds on the growth factor of the norm of the updated matrix incurred by each of these criteria. In addition, we describe the implementation of the hybrid algorithms through an extension of the PaRSEC software to allow for dynamic choices during execution. Finally, we analyze both stability and performance results compared to state-of-the-art linear solvers on parallel distributed multicore platforms.
|
State-of-the-art QR factorizations use multiple eliminators per panel, in order to dramatically reduce the critical path of the algorithm. These algorithms are unconditionally stable, and their parallelization has been fairly well studied on shared memory systems @cite_10 @cite_1 @cite_18 and on parallel distributed systems @cite_4 .
|
{
"cite_N": [
"@cite_1",
"@cite_18",
"@cite_10",
"@cite_4"
],
"mid": [
"1986834688",
"2952906961",
"2095597068",
"1999664467"
],
"abstract": [
"With the emergence of thread-level parallelism as the primary means for continued performance improvement, the programmability issue has reemerged as an obstacle to the use of architectural advances. We argue that evolving legacy libraries for dense and banded linear algebra is not a viable solution due to constraints imposed by early design decisions. We propose a philosophy of abstraction and separation of concerns that provides a promising solution in this problem domain. The first abstraction, FLASH, allows algorithms to express computation with matrices consisting of contiguous blocks, facilitating algorithms-by-blocks. Operand descriptions are registered for a particular operation a priori by the library implementor. A runtime system, SuperMatrix, uses this information to identify data dependencies between suboperations, allowing them to be scheduled to threads out-of-order and executed in parallel. But not all classical algorithms in linear algebra lend themselves to conversion to algorithms-by-blocks. We show how our recently proposed LU factorization with incremental pivoting and a closely related algorithm-by-blocks for the QR factorization, both originally designed for out-of-core computation, overcome this difficulty. Anecdotal evidence regarding the development of routines with a core functionality demonstrates how the methodology supports high productivity while experimental results suggest that high performance is abundantly achievable.",
"This work revisits existing algorithms for the QR factorization of rectangular matrices composed of p-by-q tiles, where p >= q. Within this framework, we study the critical paths and performance of algorithms such as Sameh and Kuck, Modi and Clarke, Greedy, and those found within PLASMA. Although neither Modi and Clarke nor Greedy is optimal, both are shown to be asymptotically optimal for all matrices of size p = q^2 f(q), where f is any function such that lim f = 0. This novel and important complexity result applies to all matrices where p and q are proportional, p = lambda q, with lambda >= 1, thereby encompassing many important situations in practice (least squares). We provide an extensive set of experiments that show the superiority of the new algorithms for tall matrices.",
"As multicore systems continue to gain ground in the high-performance computing world, linear algebra algorithms have to be reformulated or new algorithms have to be developed in order to take advantage of the architectural features on these new processors. Fine-grain parallelism becomes a major requirement and introduces the necessity of loose synchronization in the parallel execution of an operation. This paper presents an algorithm for the QR factorization where the operations can be represented as a sequence of small tasks that operate on square blocks of data (referred to as ‘tiles’). These tasks can be dynamically scheduled for execution based on the dependencies among them and on the availability of computational resources. This may result in an out-of-order execution of the tasks that will completely hide the presence of intrinsically sequential tasks in the factorization. Performance comparisons are presented with the LAPACK algorithm for QR factorization where parallelism can be exploited only at the level of the BLAS operations and with vendor implementations. Copyright © 2008 John Wiley & Sons, Ltd.",
"This paper describes a new QR factorization algorithm which is especially designed for massively parallel platforms combining parallel distributed nodes, where a node is a multi-core processor. These platforms represent the present and the foreseeable future of high-performance computing. Our new QR factorization algorithm falls in the category of the tile algorithms which naturally enables good data locality for the sequential kernels executed by the cores (high sequential performance), low number of messages in a parallel distributed setting (small latency term), and fine granularity (high parallelism). Each tile algorithm is uniquely characterized by its sequence of reduction trees. In the context of a cluster of nodes, in order to minimize the number of inter-processor communications (aka, ''communication-avoiding''), it is natural to consider hierarchical trees composed of an ''inter-node'' tree which acts on top of ''intra-node'' trees. At the intra-node level, we propose a hierarchical tree made of three levels: (0) ''TS level'' for cache-friendliness, (1) ''low-level'' for decoupled highly parallel inter-node reductions, (2) ''domino level'' to efficiently resolve interactions between local reductions and global reductions. Our hierarchical algorithm and its implementation are flexible and modular, and can accommodate several kernel types, different distribution layouts, and a variety of reduction trees at all levels, both inter-node and intra-node. Numerical experiments on a cluster of multi-core nodes (i) confirm that each of the four levels of our hierarchical tree contributes to build up performance and (ii) build insights on how these levels influence performance and interact within each other. 
Our implementation of the new algorithm with the DAGuE scheduling tool significantly outperforms currently available QR factorization software for all matrix shapes, thereby bringing a new advance in numerical linear algebra for petascale and exascale platforms."
]
}
|
1401.5522
|
2952178437
|
This paper introduces hybrid LU-QR algorithms for solving dense linear systems of the form Ax = b. Throughout a matrix factorization, these algorithms dynamically alternate LU with local pivoting and QR elimination steps, based upon some robustness criterion. LU elimination steps can be very efficiently parallelized, and are twice as cheap, in terms of floating-point operations, as QR steps. However, LU steps are not necessarily stable, while QR steps are always stable. The hybrid algorithms execute a QR step when a robustness criterion detects some risk for instability, and they execute an LU step otherwise. Ideally, the choice between LU and QR steps must have a small computational overhead and must provide a satisfactory level of stability with as few QR steps as possible. In this paper, we introduce several robustness criteria and we establish upper bounds on the growth factor of the norm of the updated matrix incurred by each of these criteria. In addition, we describe the implementation of the hybrid algorithms through an extension of the PaRSEC software to allow for dynamic choices during execution. Finally, we analyze both stability and performance results compared to state-of-the-art linear solvers on parallel distributed multicore platforms.
|
The reason for using LU kernels instead of QR kernels is performance: (i) LU performs half the number of floating-point operations of QR; (ii) LU kernels rely on GEMM kernels, which are very efficient, while QR kernels are more complex and much less tuned, hence not as efficient; and (iii) the LU update is much more parallel than the QR update. All in all, LU is much faster than QR (as observed in the performance results). Because of the large number of communications and synchronizations induced by pivoting in the reference LUPP algorithm, variants of LUPP have been introduced @cite_12 , but they have proven much more challenging to design because of stability issues. In the following, we review several approaches:
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2157237396"
],
"abstract": [
"We present parallel and sequential dense QR factorization algorithms that are both optimal (up to polylogarithmic factors) in the amount of communication they perform and just as stable as Householder QR. We prove optimality by deriving new lower bounds for the number of multiplications done by “non-Strassen-like” QR, and using these in known communication lower bounds that are proportional to the number of multiplications. We not only show that our QR algorithms attain these lower bounds (up to polylogarithmic factors), but that existing LAPACK and ScaLAPACK algorithms perform asymptotically more communication. We derive analogous communication lower bounds for LU factorization and point out recent LU algorithms in the literature that attain at least some of these lower bounds. The sequential and parallel QR algorithms for tall and skinny matrices lead to significant speedups in practice over some of the existing algorithms, including LAPACK and ScaLAPACK, for example, up to 6.7 times over ScaLAPACK. A performance model for the parallel algorithm for general rectangular matrices predicts significant speedups over ScaLAPACK."
]
}
|
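The half-the-work claim for LU versus QR can be made concrete with the standard leading-order flop counts for an n-by-n matrix: LU costs about 2n^3/3 flops, while Householder QR costs about 4n^3/3. A trivial sketch (leading-order terms only, lower-order terms dropped):

```python
def lu_flops(n):
    """~(2/3) n^3 flops for LU factorization (leading order)."""
    return 2 * n**3 / 3

def qr_flops(n):
    """~(4/3) n^3 flops for Householder QR (leading order)."""
    return 4 * n**3 / 3

# the ratio is 0.5 regardless of n: LU does half the arithmetic of QR
ratio = lu_flops(2048) / qr_flops(2048)
```

This is only the arithmetic side of the argument; points (ii) and (iii) above (GEMM-rich kernels and a more parallel update) are about how efficiently those flops execute, which flop counts alone do not capture.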
1401.5522
|
2952178437
|
This paper introduces hybrid LU-QR algorithms for solving dense linear systems of the form Ax = b. Throughout a matrix factorization, these algorithms dynamically alternate LU with local pivoting and QR elimination steps, based upon some robustness criterion. LU elimination steps can be very efficiently parallelized, and are twice as cheap, in terms of floating-point operations, as QR steps. However, LU steps are not necessarily stable, while QR steps are always stable. The hybrid algorithms execute a QR step when a robustness criterion detects some risk for instability, and they execute an LU step otherwise. Ideally, the choice between LU and QR steps must have a small computational overhead and must provide a satisfactory level of stability with as few QR steps as possible. In this paper, we introduce several robustness criteria and we establish upper bounds on the growth factor of the norm of the updated matrix incurred by each of these criteria. In addition, we describe the implementation of the hybrid algorithms through an extension of the PaRSEC software to allow for dynamic choices during execution. Finally, we analyze both stability and performance results compared to state-of-the-art linear solvers on parallel distributed multicore platforms.
|
@cite_6 propose applying a random transformation to the initial matrix so that LU NoPiv can be used while maintaining stability. This approach achieves about the same performance as LU NoPiv, since the preprocessing and postprocessing costs are negligible. It is hard to be fully satisfied with this approach @cite_6 because, for any matrix that the transformation renders stable (i.e., for which LU NoPiv is stable), there exists a matrix that it renders unstable. Nevertheless, in practice, this proves to be a valid approach.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2060236641"
],
"abstract": [
"We illustrate how linear algebra calculations can be enhanced by statistical techniques in the case of a square linear system Ax = b. We study a random transformation of A that enables us to avoid pivoting and then to reduce the amount of communication. Numerical experiments show that this randomization can be performed at a very affordable computational price while providing us with a satisfying accuracy when compared to partial pivoting. This random transformation called Partial Random Butterfly Transformation (PRBT) is optimized in terms of data storage and flops count. We propose a solver where PRBT and the LU factorization with no pivoting take advantage of the current hybrid multicore GPU machines and we compare its Gflop s performance with a solver implemented in a current parallel library."
]
}
|
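To see why LU without pivoting needs such a preconditioning step at all, a tiny pure-Python sketch (function name mine, not from the PRBT paper) exhibits the element growth caused by a small pivot, and how the row swap that partial pivoting would perform avoids it:

```python
def lu_nopiv(A):
    """Gaussian elimination without pivoting; returns (L, U).
    Breaks down (huge element growth) when a pivot is tiny."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]   # enormous if U[k][k] is tiny
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

# tiny pivot => multiplier ~1e18 => enormous growth in U
A = [[1e-18, 1.0],
     [1.0,   1.0]]
L, U = lu_nopiv(A)
growth = max(abs(x) for row in U for x in row)  # ~1e18

# the swap that partial pivoting would do first keeps growth ~1
A_swapped = [A[1], A[0]]
_, U2 = lu_nopiv(A_swapped)
growth2 = max(abs(x) for row in U2 for x in row)  # ~1.0
```

The random butterfly transformation of @cite_6 aims to make matrices like `A` above statistically unlikely to reach `lu_nopiv` in this dangerous form, without paying the communication cost of pivoting.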
1401.5522
|
2952178437
|
This paper introduces hybrid LU-QR algorithms for solving dense linear systems of the form Ax = b. Throughout a matrix factorization, these algorithms dynamically alternate LU with local pivoting and QR elimination steps, based upon some robustness criterion. LU elimination steps can be very efficiently parallelized, and are twice as cheap, in terms of floating-point operations, as QR steps. However, LU steps are not necessarily stable, while QR steps are always stable. The hybrid algorithms execute a QR step when a robustness criterion detects some risk for instability, and they execute an LU step otherwise. Ideally, the choice between LU and QR steps must have a small computational overhead and must provide a satisfactory level of stability with as few QR steps as possible. In this paper, we introduce several robustness criteria and we establish upper bounds on the growth factor of the norm of the updated matrix incurred by each of these criteria. In addition, we describe the implementation of the hybrid algorithms through an extension of the PaRSEC software to allow for dynamic choices during execution. Finally, we analyze both stability and performance results compared to state-of-the-art linear solvers on parallel distributed multicore platforms.
|
LU IncPiv is another communication-avoiding LU algorithm @cite_14 @cite_1 . Incremental pivoting is also called pairwise pivoting. The stability of the algorithm @cite_14 is not sufficient and degrades as the number of tiles in the matrix increases (see our experimental results on random matrices). The method also suffers from some of the same performance degradations as QR factorizations with multiple eliminators per panel, namely low-performing kernels and some dependencies in the update phase.
|
{
"cite_N": [
"@cite_14",
"@cite_1"
],
"mid": [
"2168612748",
"1986834688"
],
"abstract": [
"We present a method for developing dense linear algebra algorithms that seamlessly scales to thousands of cores. It can be done with our project called DPLASMA (Distributed PLASMA) that uses a novel generic distributed Direct Acyclic Graph Engine (DAGuE). The engine has been designed for high performance computing and thus it enables scaling of tile algorithms, originating in PLASMA, on large distributed memory systems. The underlying DAGuE framework has many appealing features when considering distributed-memory platforms with heterogeneous multicore nodes: DAG representation that is independent of the problem-size, automatic extraction of the communication from the dependencies, overlapping of communication and computation, task prioritization, and architecture-aware scheduling and management of tasks. The originality of this engine lies in its capacity to translate a sequential code with nested-loops into a concise and synthetic format which can then be interpreted and executed in a distributed environment. We present three common dense linear algebra algorithms from PLASMA (Parallel Linear Algebra for Scalable Multi-core Architectures), namely: Cholesky, LU, and QR factorizations, to investigate their data driven expression and execution in a distributed system. We demonstrate through experimental results on the Cray XT5 Kraken system that our DAG-based approach has the potential to achieve sizable fraction of peak performance which is characteristic of the state-of-the-art distributed numerical software on current and emerging architectures.",
"With the emergence of thread-level parallelism as the primary means for continued performance improvement, the programmability issue has reemerged as an obstacle to the use of architectural advances. We argue that evolving legacy libraries for dense and banded linear algebra is not a viable solution due to constraints imposed by early design decisions. We propose a philosophy of abstraction and separation of concerns that provides a promising solution in this problem domain. The first abstraction, FLASH, allows algorithms to express computation with matrices consisting of contiguous blocks, facilitating algorithms-by-blocks. Operand descriptions are registered for a particular operation a priori by the library implementor. A runtime system, SuperMatrix, uses this information to identify data dependencies between suboperations, allowing them to be scheduled to threads out-of-order and executed in parallel. But not all classical algorithms in linear algebra lend themselves to conversion to algorithms-by-blocks. We show how our recently proposed LU factorization with incremental pivoting and a closely related algorithm-by-blocks for the QR factorization, both originally designed for out-of-core computation, overcome this difficulty. Anecdotal evidence regarding the development of routines with a core functionality demonstrates how the methodology supports high productivity while experimental results suggest that high performance is abundantly achievable."
]
}
|
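A toy sketch of the pairwise-pivoting idea behind incremental pivoting (one column only; the function name and scalar row-at-a-time structure are mine, not the tiled algorithm of @cite_14): each subdiagonal row is eliminated against the row directly above it, with pivoting restricted to that pair of rows, so no global column search is needed.

```python
def pairwise_pivot_eliminate(A):
    """One sweep of pairwise (incremental) pivoting on column 0:
    each row below the diagonal is eliminated against the row above,
    with a local swap when the lower entry is larger in magnitude.
    Only adjacent row pairs interact, so no global pivot search."""
    n = len(A)
    M = [row[:] for row in A]
    for i in range(n - 1, 0, -1):
        # local pivoting: compare only rows i-1 and i
        if abs(M[i][0]) > abs(M[i - 1][0]):
            M[i - 1], M[i] = M[i], M[i - 1]
        if M[i - 1][0] != 0:
            m = M[i][0] / M[i - 1][0]   # multiplier bounded by 1 locally
            M[i] = [a - m * b for a, b in zip(M[i], M[i - 1])]
    return M
```

Because each elimination only guarantees a multiplier bounded by 1 relative to its *pair*, growth can compound across successive pairs; this is the mechanism behind the stability degradation with the number of tiles noted above.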
1401.5367
|
1624672621
|
As Software Product Lines (SPLs) are becoming a more pervasive development practice, their effective testing is becoming a more important concern. In the past few years many SPL testing approaches have been proposed; among them are those that support Combinatorial Interaction Testing (CIT), whose premise is to select a group of products where faults, due to feature interactions, are more likely to occur. Many CIT techniques for SPL testing have been put forward; however, no systematic and comprehensive comparison among them has been performed. To achieve such a goal two items are important: a common benchmark of feature models, and an adequate comparison framework. In this research-in-progress paper, we propose 19 feature models as the base of a benchmark, which we apply to three different techniques in order to analyze a previously proposed comparison framework. We identify the shortcomings of this framework and elaborate alternatives for further study.
|
There exists substantial literature on SPL testing @cite_8 @cite_32 @cite_10 @cite_27 . However, to the best of our knowledge there are neither benchmarks nor frameworks for comparing approaches. In the area of Search-Based Software Engineering, a major research focus has been software testing @cite_17 @cite_18 , and there is a plethora of articles that compare testing algorithms using different metrics. For example, Mansour @cite_11 compares five algorithms for regression testing using eight different metrics (including quantitative and qualitative criteria). Similarly, @cite_28 compares different metrics implemented as fitness functions to solve the problem of test input generation. To the best of our knowledge, the literature on test case generation offers no well-known comparison framework for the research and practitioner community to use. Researchers usually apply their methods to open source programs and directly compute metrics such as success rate, number of test cases, and performance. The closest to a common comparison framework we could trace is the work of Rothermel and Harrold @cite_6 , who propose a framework for regression testing.
|
{
"cite_N": [
"@cite_18",
"@cite_11",
"@cite_8",
"@cite_28",
"@cite_32",
"@cite_6",
"@cite_27",
"@cite_10",
"@cite_17"
],
"mid": [
"",
"2025003613",
"2105438268",
"2039380967",
"",
"2107932342",
"1974481336",
"",
"1977321274"
],
"abstract": [
"",
"In the maintenance phase, the regression test selection problem refers to selecting test cases from the initial suite of test cases used in the development phase. In this paper, we empirically compare five representative regression test selection algorithms, which include: Simulated Annealing, Reduction, Slicing, Dataflow, and Firewall algorithms. The comparison is based on eight quantitative and qualitative criteria. These criteria are: number of selected test cases, execution time, precision, inclusiveness, preprocessing requirements, type of maintenance, level of testing, and type of approach. The empirical results show that the five algorithms can be used for different requirements of regression testing. For example the Simulated Annealing algorithm can be used for emergency non-safety-critical maintenance situations with a large number of small modifications.",
"Context: Software product lines (SPL) are used in industry to achieve more efficient software development. However, the testing side of SPL is underdeveloped. Objective: This study aims at surveying existing research on SPL testing in order to identify useful approaches and needs for future research. Method: A systematic mapping study is launched to find as much literature as possible, and the 64 papers found are classified with respect to focus, research type and contribution type. Results: A majority of the papers are of proposal research types (64%). System testing is the largest group with respect to research focus (40%), followed by management (23%). Method contributions are in majority. Conclusions: More validation and evaluation research is needed to provide a better foundation for SPL testing.",
"Evolutionary algorithms are among the metaheuristic search methods that have been applied to the structural test data generation problem. Fitness evaluation methods play an important role in the performance of evolutionary algorithms and various methods have been devised for this problem. In this paper, we propose a new fitness evaluation method based on pairwise sequence comparison also used in bioinformatics. Our preliminary study shows that this method is easy to implement and produces promising results.",
"",
"Regression testing is a necessary but expensive activity aimed at showing that code has not been adversely affected by changes. A selective approach to regression testing attempts to reuse tests from an existing test suite to test a modified program. This paper outlines issues relevant to selective retest approaches, and presents a framework within which such approaches can be evaluated. This framework is then used to evaluate and compare existing selective retest algorithms. The evaluation reveals strengths and weaknesses of existing methods, and highlights problems that future work in this area should address.",
"The software product line engineering strategy enables the achievement of significant improvements in quality through reuse of carefully crafted software assets across multiple products. However, high levels of quality in the software product line assets, which are used to create products, must be accompanied by effective and efficient test strategies for the products in the software product line. The goal of this study is to understand which strategies for testing products in software product lines have been reported in the literature, enabling discussions on the significant issues, and also pointing out further research directions. A systematic literature review was carried out that identified two hundred seventy-three papers, published from the years 1998 and early in 2012. From such a set of papers, a systematic selection resulted in forty-one relevant papers. The analysis of the reported strategies comprised two important aspects: the selection of products for testing, and the actual test of products. The findings showed a range of strategies, dealing with both aspects, but few empirical evaluations of their effectiveness have been performed, which limits the inferences that can be drawn.",
"",
"In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive automated and semiautomated solutions in situations typified by large complex problem spaces with multiple competing and conflicting objectives. This article1 provides a review and classification of literature on SBSE. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied and highlights gaps in the literature and avenues for further research."
]
}
|
1401.5688
|
2052184969
|
Combining an information-theoretic approach to fingerprinting with a more constructive, statistical approach, we derive new results on the fingerprinting capacities for various informed settings, as well as new log-likelihood decoders with provable code lengths that asymptotically match these capacities. The simple decoder built against the interleaving attack is further shown to achieve the simple capacity for unknown attacks, and is argued to be an improved version of the recently proposed decoder of With this new universal decoder, cut-offs on the bias distribution function can finally be dismissed. Besides the application of these results to fingerprinting, a direct consequence of our results to group testing is that (i) a simple decoder asymptotically requires a factor 1.44 more tests to find defectives than a joint decoder, and (ii) the simple decoder presented in this paper provably achieves this bound.
|
Work on the above bias-based fingerprinting game started in 2003, when Tardos proved that any fingerprinting scheme must satisfy @math , and that a bias-based scheme is able to achieve this optimal scaling in @math @cite_7 . He proved the latter by providing a simple and explicit construction with a code length of @math , which is known in the literature as the Tardos scheme.
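For concreteness, the quoted code length can be sketched in a few lines. The constant 100 below is the one from Tardos' original analysis (the later optimized schemes cited elsewhere in this entry use far smaller constants), and the function name is ours:

```python
import math

def tardos_code_length(c, n, eps, const=100):
    """Length m = const * c^2 * ceil(ln(n / eps)) of a bias-based
    (Tardos) fingerprinting code for n users, at most c colluders,
    and false-accusation probability eps.  const=100 follows Tardos'
    original analysis; optimized constructions reduce this constant."""
    return const * c * c * math.ceil(math.log(n / eps))

# e.g. 10 colluders among one million users, eps = 1%:
m = tardos_code_length(c=10, n=10**6, eps=0.01)  # 190000 symbols
```

This makes the @math scaling concrete: doubling the coalition size quadruples the code length, while the number of users enters only logarithmically.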
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2031722321"
],
"abstract": [
"We construct binary codes for fingerprinting digital documents. Our codes for n users that are e-secure against c pirates have length O(c2log(n e)). This improves the codes proposed by Boneh and Shaw l1998r whose length is approximately the square of this length. The improvement carries over to works using the Boneh--Shaw code as a primitive, for example, to the dynamic traitor tracing scheme of Tassa l2005r. By proving matching lower bounds we establish that the length of our codes is best within a constant factor for reasonable error probabilities. This lower bound generalizes the bound found independently by l2003r that applies to a limited class of codes. Our results also imply that randomized fingerprint codes over a binary alphabet are as powerful as over an arbitrary alphabet and the equal strength of two distinct models for fingerprinting."
]
}
|
1401.5688
|
2052184969
|
Combining an information-theoretic approach to fingerprinting with a more constructive, statistical approach, we derive new results on the fingerprinting capacities for various informed settings, as well as new log-likelihood decoders with provable code lengths that asymptotically match these capacities. The simple decoder built against the interleaving attack is further shown to achieve the simple capacity for unknown attacks, and is argued to be an improved version of the recently proposed decoder of With this new universal decoder, cut-offs on the bias distribution function can finally be dismissed. Besides the application of these results to fingerprinting, a direct consequence of our results to group testing is that (i) a simple decoder asymptotically requires a factor 1.44 more tests to find defectives than a joint decoder, and (ii) the simple decoder presented in this paper provably achieves this bound.
|
Later work on the constructive side of fingerprinting focused on improving upon Tardos' result by sharpening the bounds @cite_10 @cite_16 , optimizing the distribution functions @cite_12 , improving the score function @cite_27 , tightening the bounds again with this improved score function @cite_4 @cite_26 @cite_17 @cite_40 @cite_20 @cite_43 , optimizing the score function @cite_39 , and again tightening the bounds with this optimized score function @cite_23 @cite_22 to finally end up with a sufficient asymptotic code length of @math for large @math . This construction can be extended to larger alphabets, in which case the code length scales as @math . Other work on practical constructions focused on joint decoders, which are computationally more involved but may work with shorter codes @cite_29 @cite_6 @cite_42 , and side-informed fingerprinting games @cite_31 @cite_45 @cite_32 @cite_0 , where estimating the collusion channel @math was considered to obtain improved performance.
|
{
"cite_N": [
"@cite_22",
"@cite_29",
"@cite_42",
"@cite_43",
"@cite_10",
"@cite_20",
"@cite_4",
"@cite_39",
"@cite_23",
"@cite_17",
"@cite_26",
"@cite_32",
"@cite_6",
"@cite_27",
"@cite_40",
"@cite_16",
"@cite_12",
"@cite_0",
"@cite_45",
"@cite_31"
],
"mid": [
"",
"2092811336",
"",
"",
"2087879325",
"",
"2168868689",
"2129115163",
"1969848549",
"",
"",
"",
"",
"1977832049",
"",
"",
"1541113101",
"",
"",
"2010291444"
],
"abstract": [
"",
"The class of joint decoder in fingerprinting codes is of utmost importance in theoretical papers to establish the concept of fingerprint capacity. However, no implementation supporting a large user base is known to date. This paper presents an iterative decoder which is the first attempt toward practical large-scale joint decoding. The discriminative feature of the scores benefits on one hand from the side-information of previously found users, and on the other hand, from recently introduced universal linear decoders for compound channels. Neither the code construction nor the decoder makes assumptions about the collusion size and strategy, provided it is a memoryless and fair attack. The extension to incorporate soft outputs from the watermarking layer is straightforward. An extensive experimental work benchmarks the very good performance and offers a clear comparison with previous state-of-the-art decoders.",
"",
"",
"We study the Tardos' probabilistic fingerprinting scheme and show that its codeword length may be shortened by a factor of approximately 4. We achieve this by retracing Tardos' analysis of the scheme and extracting from it all constants that were arbitrarily selected. We replace those constants with parameters and derive a set of inequalities that those parameters must satisfy so that the desired security properties of the scheme still hold. Then we look for a solution of those inequalities in which the parameter that governs the codeword length is minimal. A further reduction in the codeword length is achieved by decoupling the error probability of falsely accusing innocent users from the error probability of missing all colluding pirates. Finally, we simulate the Tardos scheme and show that, in practice, one may use codewords that are shorter than those in the original Tardos scheme by a factor of at least 16.",
"",
"The Tardos scheme is a well-known traitor tracing scheme to protect copyrighted content against collusion attacks. The original scheme contained some suboptimal design choices, such as the score function and the distribution function used for generating the biases. previously showed that a symbol-symmetric score function leads to shorter codes, while obtained the optimal distribution functions for arbitrary coalition sizes. Later, showed that combining these results leads to even shorter codes when the coalition size is small. We extend their analysis to the case of large coalitions and prove that these optimal distributions converge to the arcsine distribution, thus showing that the arcsine distribution is asymptotically optimal in the symmetric Tardos scheme. We also present a new, practical alternative to the discrete distributions of and give a comparison of the estimated lengths of the fingerprinting codes for each of these distributions.",
"We investigate alternative suspicion functions for Tardos traitor tracing schemes. In the simple decoder approach (computation of a score for every user independently) we derive suspicion functions that optimize a performance indicator related to the sufficient code length l in the limit of large coalition size c. Our results hold for the Restricted-Digit Model as well as the Combined-Digit Model. The scores depend on information that is usually not available to the tracer -- the attack strategy or the tallies of the symbols received by the colluders. We discuss how such results can be used in realistic contexts. We study several combinations of coalition attack strategy versus suspicion function optimized against some attack (another attack or the same). In many of these combinations the usual scaling l ∝ c^2 is replaced by a lower power of c, e.g. c^{3/2}. We find that the interleaving strategy is an especially powerful attack, and the suspicion function tailored against interleaving is effective against all considered attacks.",
"We study the asymptotic-capacity-achieving score function that was recently proposed by for bias-based traitor tracing codes. For the bias function, we choose the Dirichlet distribution with a cutoff. Using Bernstein's inequality and Bennett's inequality, we upper bound the false-positive and false-negative error probabilities. From these bounds we derive sufficient conditions for the scheme parameters. We solve these conditions in the limit of large coalition size c_0 and obtain asymptotic solutions for the cutoff, the sufficient code length, and the corresponding accusation threshold. We find that the code length converges to its asymptote approximately as c_0^{-1/2} @math , which is faster than the c_0^{-1/3} @math of Tardos' score function. MSC:94B60",
"",
"",
"",
"",
"Fingerprinting provides a means of tracing unauthorized redistribution of digital data by individually marking each authorized copy with a personalized serial number. In order to prevent a group of users from collectively escaping identification, collusion-secure fingerprinting codes have been proposed. In this paper, we introduce a new construction of a collusion-secure fingerprinting code which is similar to a recent construction by Tardos but achieves shorter code lengths and allows for codes over arbitrary alphabets. We present results for 'symmetric' coalition strategies. For binary alphabets and a false accusation probability @math , a code length of @math symbols is provably sufficient, for large c_0, to withstand collusion attacks of up to c_0 colluders. This improves Tardos' construction by a factor of 10. Furthermore, invoking the Central Limit Theorem in the case of sufficiently large c_0, we show that even a code length of @math is adequate. Assuming the restricted digit model, the code length can be further reduced by moving from a binary alphabet to a q-ary alphabet. Numerical results show that a reduction of 35% is achievable for q = 3 and 80% for q = 10.",
"",
"",
"It is known that Tardos's collusion-secure probabilistic fingerprinting code (Tardos code) has length of theoretically minimal order. However, the Tardos code uses a certain continuous probability distribution, which causes a huge amount of extra memory to be required in practical use. An essential solution is to replace the continuous distributions with finite discrete ones, preserving the security. In this paper, we determine the optimal finite distribution for the purpose of reducing memory amount; the required extra memory is reduced to less than 1/32 of the original in some practical setting. Moreover, the code length is also reduced (to, asymptotically, about 20.6% of the Tardos code), and some further practical problems such as approximation errors are also considered.",
"",
"",
"This paper presents our recent work on multimedia fingerprinting, which consists of improving both the fingerprinting code and the watermarking scheme. The fingerprinting code is the well-known Tardos code. Our contributions only focus on deriving a better accusation process. It appears that Tardos' original decoding is very conservative: its performance is guaranteed whatever the collusion strategy. Indeed, major improvements stem from the knowledge of the collusion strategy. Therefore, this paper investigates whether it is possible to learn and adapt to the collusion strategy. This is done with an iterative algorithm à la EM, where a better estimation of their strategy yields a better tracing of the colluders, which in turn yields a better estimation of their strategy, etc. The second part focuses on the multimedia watermarking scheme. In a previous paper, we already used the 'Broken Arrows' technique as the watermarking layer for multimedia fingerprinting. However, a recent paper from A. Westfeld discloses a flaw in this technique. We present a counter-measure which blocks this security hole while preserving the robustness of the original technique."
]
}
|
1401.5688
|
2052184969
|
Combining an information-theoretic approach to fingerprinting with a more constructive, statistical approach, we derive new results on the fingerprinting capacities for various informed settings, as well as new log-likelihood decoders with provable code lengths that asymptotically match these capacities. The simple decoder built against the interleaving attack is further shown to achieve the simple capacity for unknown attacks, and is argued to be an improved version of the recently proposed decoder of With this new universal decoder, cut-offs on the bias distribution function can finally be dismissed. Besides the application of these results to fingerprinting, a direct consequence of our results to group testing is that (i) a simple decoder asymptotically requires a factor 1.44 more tests to find defectives than a joint decoder, and (ii) the simple decoder presented in this paper provably achieves this bound.
|
Recently Abbe and Zheng @cite_18 showed that, in the context of fingerprinting @cite_29 , if the set of allowed collusion channels satisfies a certain one-sidedness condition, then a decoder that achieves capacity against the information-theoretic worst-case attack is a universal decoder achieving capacity against arbitrary attacks. The main drawback of using this result is that the worst-case attack is hard to compute, but it does offer more insight into why, e.g., @cite_37 obtained a universal decoder by considering the decoder against the 'interleaving attack', which is known to be the asymptotic worst-case attack.
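As a minimal illustration (variable names are ours), the interleaving attack can be stated in two lines: in every code position the coalition forwards the symbol of a uniformly random colluder, so each symbol is output with probability equal to its tally fraction among the colluders:

```python
import random

def interleaving_attack(colluder_rows, rng=random):
    """colluder_rows: one tuple of colluder symbols per code position.
    Returns the pirated sequence: per position, the symbol of a
    uniformly chosen colluder (the asymptotic worst-case attack)."""
    return [rng.choice(row) for row in colluder_rows]

# Positions where all colluders agree are forced; mixed positions are random.
pirate = interleaving_attack([(0, 0, 0), (1, 1, 1), (0, 1, 1)])
```

Note that the output is always a symbol some colluder received, so the attack respects the marking condition assumed in these models.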
|
{
"cite_N": [
"@cite_18",
"@cite_29",
"@cite_37"
],
"mid": [
"2134864073",
"2092811336",
"1986979839"
],
"abstract": [
"Over discrete memoryless channels (DMC), linear decoders (maximizing additive metrics) afford several nice properties. In particular, if suitable encoders are employed, the use of decoding algorithms with manageable complexities is permitted. For a compound DMC, decoders that perform well without the channel's knowledge are required in order to achieve capacity. Several such decoders have been studied in the literature, however, there is no such known decoder which is linear. Hence, the problem of finding linear decoders achieving capacity for compound DMC is addressed, and it is shown that under minor concessions, such decoders exist and can be constructed. A geometric method based on the very noisy transformation is developed and used to solve this problem.",
"The class of joint decoder in fingerprinting codes is of utmost importance in theoretical papers to establish the concept of fingerprint capacity. However, no implementation supporting a large user base is known to date. This paper presents an iterative decoder which is the first attempt toward practical large-scale joint decoding. The discriminative feature of the scores benefits on one hand from the side-information of previously found users, and on the other hand, from recently introduced universal linear decoders for compound channels. Neither the code construction nor the decoder makes assumptions about the collusion size and strategy, provided it is a memoryless and fair attack. The extension to incorporate soft outputs from the watermarking layer is straightforward. An extensive experimental work benchmarks the very good performance and offers a clear comparison with previous state-of-the-art decoders.",
"We investigate alternative suspicion functions for bias-based traitor tracing schemes, and present a practical construction of a simple decoder that attains capacity in the limit of large coalition size @math . We derive optimal suspicion functions in both the restricted-digit model and the combined-digit model. These functions depend on information that is usually not available to the tracer—the attack strategy or the tallies of the symbols received by the colluders. We discuss how such results can be used in realistic contexts. We study several combinations of coalition attack strategy versus suspicion function optimized against some attack (another attack or the same). In many of these combinations, the usual codelength scaling @math changes to a lower power of @math , e.g., @math . We find that the interleaving strategy is an especially powerful attack. The suspicion function tailored against interleaving is the key ingredient of the capacity-achieving construction."
]
}
|
1401.5688
|
2052184969
|
Combining an information-theoretic approach to fingerprinting with a more constructive, statistical approach, we derive new results on the fingerprinting capacities for various informed settings, as well as new log-likelihood decoders with provable code lengths that asymptotically match these capacities. The simple decoder built against the interleaving attack is further shown to achieve the simple capacity for unknown attacks, and is argued to be an improved version of the recently proposed decoder of With this new universal decoder, cut-offs on the bias distribution function can finally be dismissed. Besides the application of these results to fingerprinting, a direct consequence of our results to group testing is that (i) a simple decoder asymptotically requires a factor 1.44 more tests to find defectives than a joint decoder, and (ii) the simple decoder presented in this paper provably achieves this bound.
|
Finally, a different area of research closely related to fingerprinting is that of group testing, where the set of @math users corresponds to a set of @math items, the set of @math colluders corresponds to a subset of @math defective items, and where the aim of the distributor is to find all defective items by performing group tests. This game corresponds to a special case of the fingerprinting game, where the pirate attack is fixed in advance (and possibly known to the distributor) to (a variant of) the 'all- @math attack'. In this game it is significantly easier to find all pirates (defectives); it is known that a joint decoder asymptotically requires only @math tests @cite_1 , while simple decoders exist requiring as few as @math tests to find all defectives @cite_21 . Recent work has shown that applying results from fingerprinting to group testing may lead to improved results compared to what is known in the group testing literature @cite_30 @cite_14 .
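The factor 1.44 separating simple from joint decoding in the abstract is 1/ln 2 ≈ 1.4427; combined with the roughly 2K ln N tests sufficing for a simple decoder per @cite_30, a quick numeric check (illustrative only, constants as stated in the cited abstracts):

```python
import math

# Asymptotic ratio of simple-decoder to joint-decoder test counts
# in probabilistic group testing: 1/ln 2, the "factor 1.44" above.
ratio = 1 / math.log(2)

# Simple-decoder budget T ~ 2 K ln N from the cited construction,
# e.g. K = 10 defectives among N = 10**6 items:
simple_tests = 2 * 10 * math.log(10**6)  # about 276 tests
```

So for a million items and ten defectives, a simple decoder needs on the order of a few hundred tests, and a joint decoder about 1.44 times fewer.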
|
{
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_21",
"@cite_1"
],
"mid": [
"1996446882",
"",
"2155658392",
"2159716457"
],
"abstract": [
"Inspired by recent results from collusion-resistant traitor tracing, we provide a framework for constructing efficient probabilistic group testing schemes. In the traditional group testing model, our scheme asymptotically requires T ≈ 2K ln N tests to find (with high probability) the correct set of K defectives out of N items. The framework is also applied to several noisy group testing and threshold group testing models, often leading to improvements over previously known results, but we emphasize that this framework can be applied to other variants of the classical model as well, both in adaptive and in non-adaptive settings.",
"",
"We present computationally efficient and provably correct algorithms with near-optimal sample-complexity for noisy non-adaptive group testing. Group testing involves grouping arbitrary subsets of items into pools. Each pool is then tested to identify the defective items, which are usually assumed to be sparsely distributed. We consider random non-adaptive pooling where pools are selected randomly and independently of the test outcomes. Our noisy scenario accounts for both false negatives and false positives for the test outcomes. Inspired by compressive sensing algorithms we introduce four novel computationally efficient decoding algorithms for group testing, CBP via Linear Programming (CBP-LP), NCBP-LP (Noisy CBP-LP), and the two related algorithms NCBP-SLP+ and NCBP-SLP- (“Simple” NCBP-LP). The first of these algorithms deals with the noiseless measurement scenario, and the next three with the noisy measurement scenario. We derive explicit sample-complexity bounds — with all constants made explicit — for these algorithms as a function of the desired error probability; the noise parameters; the number of items; and the size of the defective set (or an upper bound on it). We show that the sample-complexities of our algorithms are near-optimal with respect to known information-theoretic bounds.",
"The paper is concerned with static search on a finite set. An unknown subset of cardinality k of the finite set is to be found by testing its subsets. We investigate two problems: in the first, the number of common elements of the tested and the unknown subset is given; in the second, only the information whether the tested and the unknown subset are disjoint or not is given. Both problems correspond to problems on false coins. If the unknown subset is taken from the family of k-element sets with uniform distribution, we determine the minimum of the lengths of the strategies that find the unknown element with small error probability. The strategies are constructed by probabilistic means."
]
}
|
1401.4716
|
2399386836
|
With the rapid development of Cloud computing technologies and wide adoption of Cloud services and applications, QoS provisioning in Clouds becomes an important research topic. In this paper, we propose an admission control mechanism for Cloud computing. In particular we consider the high volume of simultaneous requests for Cloud services and develop admission control for aggregated traffic flows to address this challenge. By employing network calculus, we determine effective bandwidth for the aggregate flow, which is used for making admission control decisions. In order to improve network resource allocation while achieving Cloud service QoS, we investigate the relationship between effective bandwidth and equivalent capacity. We have also conducted extensive experiments to evaluate performance of the proposed admission control mechanism. 1 Introduction Recently the emerging Cloud computing has been developing very quickly [1, 2, 3]. With the rapid development of Cloud computing technologies and wide adoption of Cloud-based applications, the huge amount of traffic generated by a large number of users accessing Cloud services brings a series of challenges to the Internet. The best-effort service model in the current Internet cannot meet users' requirements for Quality of Service (QoS). Call Admission Control (CAC) offers an effective approach to controlling network traffic and avoiding
|
Network calculus was originally developed by Chang @cite_8 and Cruz @cite_5 @cite_13 and further extended by other researchers (e.g., @cite_15 @cite_0 @cite_16 @cite_3 ) into an effective quantitative tool for analyzing network performance. Network calculus uses arrival curves and service curves to determine key QoS factors of networking systems such as delay and backlog @cite_14 @cite_11 . Compared to traditional queueing analysis methods, network calculus provides worst-case performance bounds for networking systems, which allows strict QoS guarantees @cite_9 @cite_17 . Network calculus has been widely applied in network performance evaluation, through which tight performance bounds can be obtained for making admission decisions @cite_7 @cite_4 .
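The delay and backlog bounds mentioned above have a standard closed form in the simplest setting. A sketch under textbook assumptions (token-bucket arrival curve α(t) = b + r·t, rate-latency service curve β(t) = R·(t − T)⁺ with r ≤ R), not taken from the cited papers:

```python
def delay_bound(b, r, R, T):
    """Worst-case delay for token-bucket arrivals (burst b, rate r)
    served by a rate-latency node (rate R, latency T): T + b/R."""
    assert r <= R, "system is unstable if arrival rate exceeds service rate"
    return T + b / R

def backlog_bound(b, r, R, T):
    """Worst-case backlog under the same assumptions: b + r*T."""
    assert r <= R
    return b + r * T

# Example: 4 kb burst, 1 Mb/s arrivals into a 2 Mb/s node with 5 ms latency
d = delay_bound(b=4000, r=1e6, R=2e6, T=0.005)    # 0.005 + 4000/2e6 = 0.007 s
q = backlog_bound(b=4000, r=1e6, R=2e6, T=0.005)  # about 9000 bits
```

These closed forms are what makes admission control decisions cheap: a new flow is admitted only if the recomputed bounds still satisfy the QoS targets.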
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_17",
"@cite_3",
"@cite_0",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"2020634547",
"",
"2145806741",
"1978905175",
"2000292643",
"",
"",
"",
"2155303087",
"2176566884",
"",
"",
""
],
"abstract": [
"ACM Sigcomm 2006 published a paper [26] which was perceived to unify the deterministic and stochastic branches of the network calculus (abbreviated throughout as DNC and SNC) [39]. Unfortunately, this seemingly fundamental unification---which has raised the hope of a straightforward transfer of all results from DNC to SNC---is invalid. To substantiate this claim, we demonstrate that for the class of stationary and ergodic processes, which is prevalent in traffic modelling, the probabilistic arrival model from [26] is quasi-deterministic, i.e., the underlying probabilities are either zero or one. Thus, the probabilistic framework from [26] is unable to account for statistical multiplexing gain, which is in fact the raison d'etre of packet-switched networks. Other previous formulations of SNC can capture statistical multiplexing gain, yet require additional assumptions [12], [22] or are more involved [14], [9] [28], and do not allow for a straightforward transfer of results from DNC. So, in essence, there is no free lunch in this endeavor. Our intention in this paper is to go beyond presenting a negative result by providing a comprehensive perspective on network calculus. To that end, we attempt to illustrate the fundamental concepts and features of network calculus in a systematic way, and also to rigorously clarify some key facts as well as misconceptions. We touch in particular on the relationship between linear systems, classical queueing theory, and network calculus, and on the lingering issue of tightness of network calculus bounds. We give a rigorous result illustrating that the statistical multiplexing gain scales as Ω(√N), as long as some small violations of system performance constraints are tolerable. This demonstrates that the network calculus can capture actual system behavior tightly when applied carefully. Thus, we positively conclude that it still holds promise as a valuable systematic methodology for the performance analysis of computer and communication systems, though the unification of DNC and SNC remains an open, yet quite elusive task.",
"",
"We present an admission control scheme for real-time VBR traffic. The deterministic guarantee on the end-to-end delay bound is adopted as the measure of quality-of-service (QoS). The network environment of our admission control algorithm is the connection-oriented network, in which every network node uses the rate-controlled service (RCS) discipline based on the earliest deadline first (EDF) scheduling policy. Unlike the previous studies, this paper focuses on designing an efficient bandwidth allocation scheme, in which the service curve and the spare capacity curve are two important terminologies employed. Experiments using the parameters derived from some real video traces show that our algorithm performs better than previous ones in terms of both the network bandwidth utilization and the computational time needed.",
"From the Publisher: Providing performance guarantees is one of the most important issues for future telecommunication networks. This book describes theoretical developments in performance guarantees for telecommunication networks from the last decade. Written for the benefit of graduate students and scientists interested in telecommunications-network performance this book consists of two parts.",
"This paper addresses the problem of computing end-to-end delay bounds for a traffic flow traversing a tandem of FIFO multiplexing network nodes using Network Calculus. Numerical solution methods are required, as closed-form delay bound expressions are unknown except for few specific cases. For the methodology called the Least Upper Delay Bound, the most accurate among those based on Network Calculus, exact and approximate solution algorithms are presented, and their accuracy and computation cost are discussed. The algorithms are inherently exponential, yet affordable for tandems of up to few tens of nodes, and amenable to online execution in cases of practical significance. This complexity is, however, required to compute accurate bounds. As the LUDB may actually be larger than the worst-case delay, we assess how close the former is to the latter by computing lower bounds on the worst-case delay and measuring the gap between the lower and upper bound.",
"",
"",
"",
"A calculus is developed for obtaining bounds on delay and buffering requirements in a communication network operating in a packet switched mode under a fixed routing strategy. The theory developed is different from traditional approaches to analyzing delay because the model used to describe the entry of data into the network is nonprobabilistic. It is supposed that the data stream entered into the network by any given user satisfies burstiness constraints. A data stream is said to satisfy a burstiness constraint if the quantity of data from the stream contained in any interval of time is less than a value that depends on the length of the interval. Several network elements are defined that can be used as building blocks to model a wide variety of communication networks. Each type of network element is analyzed by assuming that the traffic entering it satisfies bursting constraints. Under this assumption, bounds are obtained on delay and buffering requirements for the network element; burstiness constraints satisfied by the traffic that exits the element are derived. >",
"Worst-case bounds on delay and backlog are derived for leaky bucket constrained sessions in arbitrary topology networks of generalized processor sharing (GPS) servers. The inherent flexibility of the service discipline is exploited to analyze broad classes of networks. When only a subset of the sessions are leaky bucket constrained, we give succinct per-session bounds that are independent of the behavior of the other sessions and also of the network topology. However, these bounds are only shown to hold for each session that is guaranteed a backlog clearing rate that exceeds the token arrival rate of its leaky bucket. A much broader class of networks, called consistent relative session treatment (CRST) networks is analyzed for the case in which all of the sessions are leaky bucket constrained. First, an algorithm is presented that characterizes the internal traffic in terms of average rate and burstiness, and it is shown that all CRST networks are stable. Next, a method is presented that yields bounds on session delay and backlog given this internal traffic characterization. The links of a route are treated collectively, yielding tighter bounds than those that result from adding the worst-case delays (backlogs) at each of the links in the route. The bounds on delay and backlog for each session are efficiently computed from a universal service curve, and it is shown that these bounds are achieved by \"staggered\" greedy regimes when an independent sessions relaxation holds. Propagation delay is also incorporated into the model. Finally, the analysis of arbitrary topology GPS networks is related to Packet GPS networks (PGPS). The PGPS scheme was first proposed by Demers, Shenker and Keshav (1991) under the name of weighted fair queueing. For small packet sizes, the behavior of the two schemes is seen to be virtually identical, and the effectiveness of PGPS in guaranteeing worst-case session delay is demonstrated under certain assignments. >",
"",
"",
""
]
}
|
1401.4716
|
2399386836
|
With the rapid development of Cloud computing technologies and wide adoption of Cloud services and applications, QoS provisioning in Clouds becomes an important research topic. In this paper, we propose an admission control mechanism for Cloud computing. In particular we consider the high volume of simultaneous requests for Cloud services and develop admission control for aggregated traffic flows to address this challenge. By employing network calculus, we determine effective bandwidth for the aggregate flow, which is used for making admission control decisions. In order to improve network resource allocation while achieving Cloud service QoS, we investigate the relationship between effective bandwidth and equivalent capacity. We have also conducted extensive experiments to evaluate performance of the proposed admission control mechanism. 1 Introduction Recently the emerging Cloud computing has been developing very quickly [1, 2, 3]. With the rapid development of Cloud computing technologies and wide adoption of Cloud-based applications, the huge amount of traffic generated by a large number of users for accessing Cloud services brings a series of challenges to the Internet. The best-effort service model in the current Internet cannot meet users' requirements for Quality of Service (QoS). Call Admission Control (CAC) offers an effective approach to controlling network traffic and avoiding
|
Cloud admission control has started attracting more attention from the research community @cite_6 @cite_10 . @cite_6 and his colleagues proposed a single-flow-based admission control method for Cloud services. However, with the rapid development of Cloud computing, a large number of users may send service requests in parallel. Therefore, single-flow-based admission control limits the scalability of Cloud service provisioning. Le @cite_2 proposed the concepts of delay-based Effective Bandwidth (EB) and backlog-based Equivalent Capacity (EC), which can be used for network call admission control. However, the application of EB and EC in Cloud admission control is still an open issue.
|
{
"cite_N": [
"@cite_10",
"@cite_6",
"@cite_2"
],
"mid": [
"",
"2038154299",
"2147206873"
],
"abstract": [
"",
"This paper presents a novel approach for stream-based admission control and job scheduling for video transcoding called SBACS (Stream-Based Admission Control and Scheduling). SBACS uses queue waiting time of transcoding servers to make admission control decisions for incoming video streams. It implements stream-based admission control with per stream admission. To ensure efficient utilization of the transcoding servers, video streams are segmented at the Group of Pictures level. In addition to the traditional rejection policy, SBACS also provides a stream deferment policy, which exploits cloud elasticity to allow temporary deferment of the incoming video streams. In other words, the admission controller can decide to admit, defer, or reject an incoming stream and hence reduce rejection rate. In order to prevent transcoding jitters in the admitted streams, we introduce a job scheduling mechanism, which drops a small proportion of video frames from a video segment to ensure continued delivery of video contents to the user. The approach is demonstrated in a discrete-event simulation with a series of experiments involving different load patterns and stream arrival rates.",
"Network Calculus.- Application of Network Calculus to the Internet.- Basic Min-plus and Max-plus Calculus.- Min-plus and Max-plus System Theory.- Optimal Multimedia Smoothing.- FIFO Systems and Aggregate Scheduling.- Adaptive and Packet Scale Rate Guarantees.- Time Varying Shapers.- Systems with Losses."
]
}
|
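The row above refers to delay-based effective bandwidth from network calculus. As a minimal illustration (my own sketch, not the paper's algorithm): for a flow constrained by a token-bucket arrival curve α(t) = σ + ρt, a constant-rate server with C ≥ ρ bounds delay by σ/C, so the smallest rate meeting a delay target D is max(ρ, σ/D); an aggregate of such flows is itself token-bucket constrained by the sums of the parameters. The flow parameters below are made-up numbers.

```python
def effective_bandwidth(sigma: float, rho: float, delay_bound: float) -> float:
    """Smallest constant service rate C guaranteeing the delay bound for a
    (sigma, rho) token-bucket constrained flow. Standard network-calculus
    bound: with C >= rho, the worst-case delay is sigma / C."""
    return max(rho, sigma / delay_bound)

def aggregate_effective_bandwidth(flows, delay_bound):
    """An aggregate of token-bucket flows is token-bucket constrained by
    the sums of the individual burst and rate parameters."""
    sigma = sum(s for s, _ in flows)
    rho = sum(r for _, r in flows)
    return effective_bandwidth(sigma, rho, delay_bound)

if __name__ == "__main__":
    flows = [(1000.0, 200.0), (500.0, 100.0)]  # (burst in bytes, rate in B/s)
    # max(300, 1500 / 5) = 300.0
    print(aggregate_effective_bandwidth(flows, delay_bound=5.0))
```

This is the per-aggregate (rather than per-flow) view that motivates the admission control scheme in the row above: one bandwidth figure is computed for the whole aggregate and compared against available capacity.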
1401.5292
|
1864471247
|
We introduce a fully automated static analysis that takes a sequential Java bytecode program P as input and attempts to prove that there exists an infinite execution of P. The technique consists in compiling P into a constraint logic program P_CLP and in proving non-termination of P_CLP; when P consists of instructions that are exactly compiled into constraints, the non-termination of P_CLP entails that of P. Our approach can handle method calls; to the best of our knowledge, it is the first static approach for Java bytecode able to prove the existence of infinite recursions. We have implemented our technique inside the Julia analyser. We have compared the results of Julia on a set of 113 programs with those provided by AProVE and Invel, the only freely usable non-termination analysers comparable to ours that we are aware of. Only Julia could detect non-termination due to infinite recursion.
|
To the best of our knowledge, only @cite_1 @cite_20 @cite_7 introduce methods and implementations that are directly comparable to the results of this paper.
|
{
"cite_N": [
"@cite_1",
"@cite_7",
"@cite_20"
],
"mid": [
"2147263168",
"1503406871",
""
],
"abstract": [
"Recently, we developed an approach for automated termination proofs of Java Bytecode (JBC), which is based on constructing and analyzing termination graphs. These graphs represent all possible program executions in a finite way. In this paper, we show that this approach can also be used to detect non-termination or NullPointerExceptions. Our approach automatically generates witnesses, i.e., calling the program with these witness arguments indeed leads to non-termination resp. to a NullPointerException. Thus, we never obtain \"false positives\". We implemented our results in the termination prover AProVE and provide experimental evidence for the power of our approach.",
"While termination checking tailored to real-world library code or frameworks has received ever-increasing attention during the last years, the complementary question of disproving termination properties as a means of debugging has largely been ignored so far. We present an approach to automatic non-termination checking that relates to termination checking in the same way as symbolic testing does to program verification. Our method is based on the automated generation of invariants that show that terminating states of a program are unreachable from certain initial states. Such initial states are identified using constraint-solving techniques. The method is fully implemented on top of a program verification system and available for download. We give an empirical evaluation of the approach using a collection of non-terminating example programs.",
""
]
}
|
1401.5292
|
1864471247
|
We introduce a fully automated static analysis that takes a sequential Java bytecode program P as input and attempts to prove that there exists an infinite execution of P. The technique consists in compiling P into a constraint logic program P_CLP and in proving non-termination of P_CLP; when P consists of instructions that are exactly compiled into constraints, the non-termination of P_CLP entails that of P. Our approach can handle method calls; to the best of our knowledge, it is the first static approach for Java bytecode able to prove the existence of infinite recursions. We have implemented our technique inside the Julia analyser. We have compared the results of Julia on a set of 113 programs with those provided by AProVE and Invel, the only freely usable non-termination analysers comparable to ours that we are aware of. Only Julia could detect non-termination due to infinite recursion.
|
In @cite_7 , the authors consider a simple while-language that is used to describe programs as logical formulae. The non-termination of the program @math under analysis is expressed as a logical formula involving the description of @math . The method then consists in proving that the non-termination formula is true by constructing a proof tree using a Gentzen-style sequent calculus. The rule of the sequent calculus corresponding to the while instruction uses invariants that have to be generated by an external method. Hence, @cite_7 introduces several techniques for creating invariants and for scoring them according to their probable usefulness; useless invariants are discarded (invariant filtering). The generated invariants are stored inside a queue ordered by the scores. The algorithms described in @cite_7 have been implemented inside the Invel non-termination analyser for Java programs @cite_16 . Invel uses the KeY @cite_33 theorem prover for constructing proof trees. As far as we know, it was the first tool for automatically proving non-termination of imperative programs.
|
{
"cite_N": [
"@cite_16",
"@cite_33",
"@cite_7"
],
"mid": [
"",
"1537084112",
"1503406871"
],
"abstract": [
"",
"The ultimate goal of program verification is not the theory behind the tools or the tools themselves, but the application of the theory and tools in the software engineering process. Our society relies on the correctness of a vast and growing amount of software. Improving the software engineering process is an important, long-term goal with many steps. Two of those steps are the KeY tool and this KeY book. The material is presented on an advanced level suitable for graduate courses and, of course, active researchers with an interest in verification. The underlying verification paradigm is deductive verification in an expressive program logic. The logic used for reasoning about programs is not a minimalist version suitable for theoretical investigations, but an industrial-strength version. The first-order part is equipped with a type system for modelling of object hierarchies, with underspecification, and with various built-in theories. The program logic covers full Java Card (plus a bit more such as multi-dimensional arrays, characters, and long integers). A lot of emphasis is thereby put on specification, including two widely-used object-oriented specification languages (OCL and JML) and even an interface to natural language generation. The generation of proof obligations from specified code is discussed at length. The book is rounded off by two substantial case studies that are included and presented in detail.",
"While termination checking tailored to real-world library code or frameworks has received ever-increasing attention during the last years, the complementary question of disproving termination properties as a means of debugging has largely been ignored so far. We present an approach to automatic non-termination checking that relates to termination checking in the same way as symbolic testing does to program verification. Our method is based on the automated generation of invariants that show that terminating states of a program are unreachable from certain initial states. Such initial states are identified using constraint-solving techniques. The method is fully implemented on top of a program verification system and available for download. We give an empirical evaluation of the approach using a collection of non-terminating example programs."
]
}
|
1401.5292
|
1864471247
|
We introduce a fully automated static analysis that takes a sequential Java bytecode program P as input and attempts to prove that there exists an infinite execution of P. The technique consists in compiling P into a constraint logic program P_CLP and in proving non-termination of P_CLP; when P consists of instructions that are exactly compiled into constraints, the non-termination of P_CLP entails that of P. Our approach can handle method calls; to the best of our knowledge, it is the first static approach for Java bytecode able to prove the existence of infinite recursions. We have implemented our technique inside the Julia analyser. We have compared the results of Julia on a set of 113 programs with those provided by AProVE and Invel, the only freely usable non-termination analysers comparable to ours that we are aware of. Only Julia could detect non-termination due to infinite recursion.
|
Finally, a major difference between our approach and that of @cite_1 @cite_7 is that we are able to detect non-termination due to infinite recursion, whereas @cite_1 @cite_7 are not. Our experiments illustrate this consideration very clearly. Note that the approach in @cite_20 can deal with non-terminating recursion.
|
{
"cite_N": [
"@cite_20",
"@cite_1",
"@cite_7"
],
"mid": [
"",
"2147263168",
"1503406871"
],
"abstract": [
"",
"Recently, we developed an approach for automated termination proofs of Java Bytecode (JBC), which is based on constructing and analyzing termination graphs. These graphs represent all possible program executions in a finite way. In this paper, we show that this approach can also be used to detect non-termination or NullPointerExceptions. Our approach automatically generates witnesses, i.e., calling the program with these witness arguments indeed leads to non-termination resp. to a NullPointerException. Thus, we never obtain \"false positives\". We implemented our results in the termination prover AProVE and provide experimental evidence for the power of our approach.",
"While termination checking tailored to real-world library code or frameworks has received ever-increasing attention during the last years, the complementary question of disproving termination properties as a means of debugging has largely been ignored so far. We present an approach to automatic non-termination checking that relates to termination checking in the same way as symbolic testing does to program verification. Our method is based on the automated generation of invariants that show that terminating states of a program are unreachable from certain initial states. Such initial states are identified using constraint-solving techniques. The method is fully implemented on top of a program verification system and available for download. We give an empirical evaluation of the approach using a collection of non-terminating example programs."
]
}
|
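The non-termination analyses discussed in the rows above all rest on the same core idea: exhibit a set of states S that entails the loop guard and is closed under the loop body, so that any execution entering S never leaves the loop. A toy refutation-style checker (my own sketch; the cited analysers establish these inclusions symbolically, e.g. over CLP constraints or with a theorem prover, rather than by sampling):

```python
def sampled_recurrent_set_check(member, guard, update, samples):
    """Check the two recurrent-set conditions on sample states: every
    state in the candidate set S must satisfy the loop guard, and the
    loop body must map S back into S. Passing on samples does NOT prove
    non-termination; a failing sample, however, refutes the candidate."""
    for x in samples:
        if not member(x):
            continue                      # state outside the candidate set
        if not guard(x) or not member(update(x)):
            return False                  # S not in the guard / not closed
    return True

# Loop:  while (x >= 0) x = x + 1;   candidate S = {x | x >= 0}
ok = sampled_recurrent_set_check(
    member=lambda x: x >= 0,
    guard=lambda x: x >= 0,
    update=lambda x: x + 1,
    samples=range(-10, 100),
)
print(ok)  # True: S survives sampling as a non-termination witness
```

Swapping the update for `x = x - 1` makes the check fail at `x = 0`, correctly refuting the candidate set for a terminating loop.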
1401.5174
|
2949568263
|
In conventional HTTP-based adaptive streaming (HAS), a video source is encoded at multiple levels of constant bitrate representations, and a client makes its representation selections according to the measured network bandwidth. While greatly simplifying adaptation to the varying network conditions, this strategy is not the best for optimizing the video quality experienced by end users. Quality fluctuation can be reduced if the natural variability of video content is taken into consideration. In this work, we study the design of a client rate adaptation algorithm to yield consistent video quality. We assume that clients have visibility into incoming video within a finite horizon. We also take advantage of the client-side video buffer, by using it as a breathing room for not only network bandwidth variability, but also video bitrate variability. The challenge, however, lies in how to balance these two variabilities to yield consistent video quality without risking a buffer underrun. We propose an optimization solution that uses an online algorithm to adapt the video bitrate step-by-step, while applying dynamic programming at each step. We incorporate our solution into PANDA -- a practical rate adaptation algorithm designed for HAS deployment at scale.
|
The literature on video streaming techniques with quality optimization can be roughly categorized into two eras -- the pre-HAS era and the post-HAS era. Early works (e.g., @cite_16 ) on video streaming assume a generic lossy transmission channel. For video streaming over packetized (e.g., IP) networks, before the emergence of HAS, the common wisdom was to lay it on top of lossy RTP/UDP to take advantage of the error-resilient nature of video (e.g., @cite_10 ) and apply error control as necessary. Thus, a common theme in these works is dealing with quality degradation caused by packet losses.
|
{
"cite_N": [
"@cite_16",
"@cite_10"
],
"mid": [
"2130660429",
"2166615620"
],
"abstract": [
"A theoretical analysis of the overall mean squared error (MSE) in hybrid video coding is presented for the case of error prone transmission. Our model covers the complete transmission system including the rate-distortion performance of the video encoder, forward error correction, interleaving, and the effect of error concealment and interframe error propagation at the video decoder. The channel model used is a 2-state Markov model describing burst errors on the symbol level. Reed-Solomon codes are used for forward error correction. Extensive simulation results using an H.263 video codec are provided for verification. Using the model, the optimal tradeoff between INTRA and INTER coding as well as the optimal channel code rate can be determined for given channel parameters by minimizing the expected MSE at the decoder. The main focus of this paper is to show the accuracy of the derived analytical model and its applicability to the analysis and optimization of an entire video transmission system.",
"This paper addresses the problem of streaming packetized media over a lossy packet network in a rate-distortion optimized way. We show that although the data units in a media presentation generally depend on each other according to a directed acyclic graph, the problem of rate-distortion optimized streaming of an entire presentation can be reduced to the problem of error-cost optimized transmission of an isolated data unit. We show how to solve the latter problem in a variety of scenarios, including the important common scenario of sender-driven streaming with feedback over a best-effort network, which we couch in the framework of Markov decision processes. We derive a fast practical algorithm for nearly optimal streaming in this scenario, and we derive a general purpose iterative descent algorithm for locally optimal streaming in arbitrary scenarios. Experimental results show that systems based on our algorithms have steady-state gains of 2-6 dB or more over systems that are not rate-distortion optimized. Furthermore, our systems essentially achieve the best possible performance: the operational distortion-rate function of the source at the capacity of the packet erasure channel."
]
}
|
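The abstract above describes step-by-step look-ahead rate adaptation that trades buffer headroom against quality fluctuation. A toy planner in that spirit (my own sketch with made-up numbers; it uses exhaustive enumeration over a short horizon where the actual algorithm applies dynamic programming, and a deliberately simplified buffer model):

```python
from itertools import product

def plan_levels(bitrates, seg_dur, bandwidth, buf0, horizon, buf_min=2.0):
    """Among all representation-level sequences over the horizon that keep
    the playout buffer above buf_min (given a known bandwidth), pick the
    one with the smallest bitrate spread (a crude stand-in for quality
    fluctuation), breaking ties toward higher average bitrate."""
    best = None
    for seq in product(range(len(bitrates)), repeat=horizon):
        buf, feasible = buf0, True
        for lvl in seq:
            download = seg_dur * bitrates[lvl] / bandwidth  # fetch time (s)
            buf = buf - download + seg_dur                  # drain + refill
            if buf < buf_min:
                feasible = False
                break
        if not feasible:
            continue
        rates = [bitrates[l] for l in seq]
        key = (max(rates) - min(rates), -sum(rates))
        if best is None or key < best[0]:
            best = (key, seq)
    return list(best[1]) if best else None

levels = [500, 1200, 2500, 5000]  # kbps representations
# With a full buffer the planner can hold the top level throughout:
print(plan_levels(levels, seg_dur=2.0, bandwidth=3000, buf0=10.0, horizon=4))
```

With `buf0=6.0` the same call instead settles on the sustainable 2500 kbps level for all four segments: the buffer is too shallow to absorb the deficit of streaming above the available bandwidth, which is exactly the balancing act the abstract describes.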
1401.5174
|
2949568263
|
In conventional HTTP-based adaptive streaming (HAS), a video source is encoded at multiple levels of constant bitrate representations, and a client makes its representation selections according to the measured network bandwidth. While greatly simplifying adaptation to the varying network conditions, this strategy is not the best for optimizing the video quality experienced by end users. Quality fluctuation can be reduced if the natural variability of video content is taken into consideration. In this work, we study the design of a client rate adaptation algorithm to yield consistent video quality. We assume that clients have visibility into incoming video within a finite horizon. We also take advantage of the client-side video buffer, by using it as a breathing room for not only network bandwidth variability, but also video bitrate variability. The challenge, however, lies in how to balance these two variabilities to yield consistent video quality without risking a buffer underrun. We propose an optimization solution that uses an online algorithm to adapt the video bitrate step-by-step, while applying dynamic programming at each step. We incorporate our solution into PANDA -- a practical rate adaptation algorithm designed for HAS deployment at scale.
|
With the emergence of HAS, which rides on top of TCP, packet loss is no longer a concern. Instead, the main source of quality degradation becomes compression and downsampling artifacts. There have been several on-going efforts trying to tackle the video quality optimization problem for HAS, all from different perspectives. Mehrotra and Zhao consider an approach based on rate-distortion optimization and scalable video coding (SVC) @cite_7 . They formulate the problem with the buffer constraint in a way similar to ours, and obtain a sub-optimal solution based on a Lagrangian multiplier. When attempting to extend their solution from SVC to redundantly encoded multiple rate levels, they noted that it yields an incorrect answer, as the rate-distortion curve is no longer necessarily convex. In contrast, our dynamic programming solution does not require convexity in the rate-quality relationship.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2123293736"
],
"abstract": [
"Media streaming over unreliable networks such as the Internet is growing in popularity, but presents unique challenges when trying to get the user experience to be on par with classical mediums such as cable television. These networks have variable network conditions which not only vary between a set of points in the network, but also change over time. In this paper, we present a rate-distortion (R-D) optimized algorithm for adapting the bitrate of streaming media from a chunked encoding, where each chunk is available at multiple bitrates (or quality levels) or is scalable coded. The optimization takes into account the desired startup latency, the desired client buffer size, the current client buffer size, and the estimated network bandwidth. The problem is formulated as a distortion minimization problem subject to multiple rate constraints. For video content, the solution gives a gain of up to 3–4dB in PSNR in difficult portions of the video when compared to commonly used adaptation techniques and can achieve an arbitrarily small desired startup latency."
]
}
|
1401.5174
|
2949568263
|
In conventional HTTP-based adaptive streaming (HAS), a video source is encoded at multiple levels of constant bitrate representations, and a client makes its representation selections according to the measured network bandwidth. While greatly simplifying adaptation to the varying network conditions, this strategy is not the best for optimizing the video quality experienced by end users. Quality fluctuation can be reduced if the natural variability of video content is taken into consideration. In this work, we study the design of a client rate adaptation algorithm to yield consistent video quality. We assume that clients have visibility into incoming video within a finite horizon. We also take advantage of the client-side video buffer, by using it as a breathing room for not only network bandwidth variability, but also video bitrate variability. The challenge, however, lies in how to balance these two variabilities to yield consistent video quality without risking a buffer underrun. We propose an optimization solution that uses an online algorithm to adapt the video bitrate step-by-step, while applying dynamic programming at each step. We incorporate our solution into PANDA -- a practical rate adaptation algorithm designed for HAS deployment at scale.
|
On the study of temporal pooling of video quality, a recent work @cite_3 has shown that the overall impression of a viewer towards a video is greatly influenced by the single most severe event while the duration is neglected, which corroborates our choice of the optimization objective. A more recent study @cite_19 dedicated to temporal pooling for HAS proposes a more complicated linear dynamic system model with the intent to capture the hysteresis effect in human visual response. Joseph and de Veciana @cite_6 use the difference between mean quality and quality variability as the pooling metric.
|
{
"cite_N": [
"@cite_19",
"@cite_6",
"@cite_3"
],
"mid": [
"2069532321",
"",
"2074883210"
],
"abstract": [
"Newly developed HTTP-based video streaming technology enables flexible rate-adaptation in varying channel conditions. The users' Quality of Experience (QoE) of rate-adaptive HTTP video streams, however, is not well understood. Therefore, designing QoE-optimized rate-adaptive video streaming algorithms remains a challenging task. An important aspect of understanding and modeling QoE is to be able to predict the up-to-the-moment subjective quality of video as it is played. We propose a dynamic system model to predict the time-varying subjective quality (TVSQ) of rate-adaptive videos that is transported over HTTP. For this purpose, we built a video database and measured TVSQ via a subjective study. A dynamic system model is developed using the database and the measured human data. We show that the proposed model can effectively predict the TVSQ of rate-adaptive videos in an online manner, which is necessary to be able to conduct QoE-optimized online rate-adaptation for HTTP-based video streaming.",
"",
"It is generally recognized that severe video distortions that are transient in space and or time have a large effect on overall perceived video quality. In order to understand this phenomena, we study the distribution of spatio-temporally local quality scores obtained from several video quality assessment (VQA) algorithms on videos suffering from compression and lossy transmission over communication channels. We propose a content adaptive spatial and temporal pooling strategy based on the observed distribution. Our method adaptively emphasizes “worst” scores along both the spatial and temporal dimensions of a video sequence and also considers the perceptual effect of large-area cohesive motion flow such as egomotion. We demonstrate the efficacy of the method by testing it using three different VQA algorithms on the LIVE Video Quality database and the EPFL-PoliMI video quality database."
]
}
|
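The two temporal-pooling strategies contrasted in the row above are easy to make concrete. A small sketch (my own illustration with made-up per-segment scores; `lam` is an assumed variability weight, not a value from the cited papers):

```python
from statistics import mean, pstdev

def pool_worst(q):
    """Worst-case pooling: the overall impression is dominated by the
    single most severe quality event, as reported in the @cite_3 study."""
    return min(q)

def pool_mean_minus_var(q, lam=1.0):
    """Mean-minus-variability pooling in the style attributed above to
    Joseph and de Veciana: mean quality penalized by its fluctuation."""
    return mean(q) - lam * pstdev(q)

steady = [40, 40, 40, 40, 40]  # consistent per-segment quality scores
spiky  = [45, 45, 20, 45, 45]  # same rough average, one severe drop

print(pool_worst(steady), pool_worst(spiky))                    # 40 20
print(pool_mean_minus_var(steady), pool_mean_minus_var(spiky))  # 40.0 30.0
```

Both metrics rank the steady stream above the spiky one even though the spiky stream has the higher mean (42 vs 40), which is the motivation for optimizing quality consistency rather than average quality.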
1401.5099
|
2077195786
|
Most peer-to-peer content distribution systems require the peers to privilege the welfare of the overall system over greedily maximizing their own utility. When downloading a file broken up into multiple pieces, peers are often asked to pass on some possible download opportunities of common pieces in order to favor rare pieces. This is to avoid the missing piece syndrome, which throttles the download rate of the peer-to-peer system to that of downloading the file straight from the server. In other situations, peers are asked to stay in the system even though they have collected all the file's pieces and have an incentive to leave right away. We propose a mechanism which allows peers to act greedily and yet stabilizes the peer-to-peer content sharing system. Our mechanism combines a fountain code at the server to generate innovative new pieces, and a prioritization for the server to deliver pieces only to new peers. While by itself, neither the fountain code nor the prioritization of new peers alone stabilizes the system, we demonstrate that their combination does, through both analytical and numerical evaluation. I. INTRODUCTION
|
The missing chunk syndrome @cite_8 @cite_7 arises when a disproportionate number of peers in the system all have the same chunks except one. These peers are denoted the 'one-club' peers.
|
{
"cite_N": [
"@cite_7",
"@cite_8"
],
"mid": [
"2097832541",
"2095766758"
],
"abstract": [
"Peer-to-peer networks provide better scalability for the filesharing applications they underlie. Unlike traditional server-based approach such as FTP, maintaining a constant QoS with a fixed number of servers seems feasible, whatever the number of peers involved. However, a P2P filesharing network sometimes happens to saturate, notably in a semi-P2P filesharing architecture or during flashcrowds phase, and scalability may fail. Even \"smart\" networks can encounter the whole file but one piece downloaded case, which we call starvation. We suggest a simple and versatile filesharing model. It applies to all pieces-oriented filesharing protocols used in softwares such as MlDonkey or BitTorrent. Simulations of this model show that starvation may occur even during flashcrowds. We propose a theoretical explanation for the so-called starvation phenomenum.",
"Typical protocols for peer-to-peer file sharing over the Internet divide files to be shared into pieces. New peers strive to obtain a complete collection of pieces from other peers and from a seed. In this paper we identify a problem that can occur if the seeding rate is not large enough. The problem is that, even if the statistics of the system are symmetric in the pieces, there can be symmetry breaking, with one piece becoming very rare. If peers depart after obtaining a complete collection, they can tend to leave before helping other peers receive the rare piece. 1"
]
}
|
1401.5099
|
2077195786
|
Most peer-to-peer content distribution systems require the peers to privilege the welfare of the overall system over greedily maximizing their own utility. When downloading a file broken up into multiple pieces, peers are often asked to pass on some possible download opportunities of common pieces in order to favor rare pieces. This is to avoid the missing piece syndrome, which throttles the download rate of the peer-to-peer system to that of downloading the file straight from the server. In other situations, peers are asked to stay in the system even though they have collected all the file's pieces and have an incentive to leave right away. We propose a mechanism which allows peers to act greedily and yet stabilizes the peer-to-peer content sharing system. Our mechanism combines a fountain code at the server to generate innovative new pieces, and a prioritization for the server to deliver pieces only to new peers. While neither the fountain code nor the prioritization of new peers alone stabilizes the system, we demonstrate that their combination does, through both analytical and numerical evaluation.
|
@cite_10 proves the stability of the P2P system when peers are asked to stay in the system, after completing their download, for a time that depends on the peer arrival process and the download rate. As in the previous system, peers are asked to stay longer, either explicitly or by withholding download, in order to stabilize the system.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2015690067"
],
"abstract": [
"Peer-to-peer (P2P) communication in networks for file distribution and other applications is a powerful multiplier of network utility, due to its ability to exploit parallelism in a distributed way. As new variations are engineered, to provide less impact on service providers and to provide better quality of service, it is important to have a theoretical underpinning, to weigh the effectiveness of various methods for enhancing the service. This paper focuses on the stationary portion of file download in an unstructured P2P network, which typically follows for many hours after a flash crowd initiation. The contribution of the paper is to identify how much help is needed from the seeds, either fixed seeds or peers dwelling in the system after obtaining the complete file, to stabilize the system. It is shown that dominant cause for instability is the missing piece syndrome, whereby one piece becomes very rare in the network. It is shown that very little dwell time is necessary--even if there is very little help from a fixed seed, peers need to dwell on average no longer than it takes to upload one additional piece, after they have obtained a complete collection."
]
}
|
1401.5099
|
2077195786
|
Most peer-to-peer content distribution systems require the peers to privilege the welfare of the overall system over greedily maximizing their own utility. When downloading a file broken up into multiple pieces, peers are often asked to pass on some possible download opportunities of common pieces in order to favor rare pieces. This is to avoid the missing piece syndrome, which throttles the download rate of the peer-to-peer system to that of downloading the file straight from the server. In other situations, peers are asked to stay in the system even though they have collected all the file's pieces and have an incentive to leave right away. We propose a mechanism which allows peers to act greedily and yet stabilizes the peer-to-peer content sharing system. Our mechanism combines a fountain code at the server to generate innovative new pieces, and a prioritization for the server to deliver pieces only to new peers. While neither the fountain code nor the prioritization of new peers alone stabilizes the system, we demonstrate that their combination does, through both analytical and numerical evaluation.
|
@cite_4 models a BitTorrent P2P network and studies its scalability using a fluid model. It also models the peer selection mechanism and shows its convergence to a Nash equilibrium under some incentive structure. The paper only analyzes the BitTorrent protocol and does not propose a novel mechanism. The fluid model was also studied in @cite_27 .
|
{
"cite_N": [
"@cite_27",
"@cite_4"
],
"mid": [
"2102366001",
"2166245380"
],
"abstract": [
"Peer-to-peer (P2P) systems in general, and BitTorrent (BT) specifically, have been of significant interest to researchers and Internet users alike. Existing models of BT abstract away certain characteristics of the protocol that are important, which we address in this work. We present a simple yet accurate and easily extensible model of BT. The model's accuracy is validated through a rigorous simulation-based study and its extensibility is illustrated by incorporating recently proposed approaches to protocol changes in BT.",
"In this paper, we develop simple models to study the performance of BitTorrent, a second generation peer-to-peer (P2P) application. We first present a simple fluid model and study the scalability, performance and efficiency of such a file-sharing mechanism. We then consider the built-in incentive mechanism of BitTorrent and study its effect on network performance. We also provide numerical results based on both simulations and real traces obtained from the Internet."
]
}
|
1401.5099
|
2077195786
|
Most peer-to-peer content distribution systems require the peers to privilege the welfare of the overall system over greedily maximizing their own utility. When downloading a file broken up into multiple pieces, peers are often asked to pass on some possible download opportunities of common pieces in order to favor rare pieces. This is to avoid the missing piece syndrome, which throttles the download rate of the peer-to-peer system to that of downloading the file straight from the server. In other situations, peers are asked to stay in the system even though they have collected all the file's pieces and have an incentive to leave right away. We propose a mechanism which allows peers to act greedily and yet stabilizes the peer-to-peer content sharing system. Our mechanism combines a fountain code at the server to generate innovative new pieces, and a prioritization for the server to deliver pieces only to new peers. While neither the fountain code nor the prioritization of new peers alone stabilizes the system, we demonstrate that their combination does, through both analytical and numerical evaluation.
|
Yang @cite_30 @cite_29 also analyzes the service capacity of peer-to-peer networks in both a transient and a stationary regime, and demonstrates that in both regimes the system is scalable. However, it does not consider the missing chunk syndrome, only an aggregate capacity over multiple files.
|
{
"cite_N": [
"@cite_30",
"@cite_29"
],
"mid": [
"2117047663",
"2099376907"
],
"abstract": [
"We study the 'service capacity' of peer to peer (P2P) file sharing applications. We begin by considering a transient regime which is key to capturing the ability of such systems to handle bursty traffic, e.g., flash crowds. In this context our models, based on age dependent branching processes, exhibit exponential growth in service capacity, and permit the study of sensitivity of this growth to system policies and parameters. Then we consider a model for such systems in steady state and show how the average delay seen by peers would scale in the offered load and rate at which peers exit the system. We find that the average delays scale well in the offered load. In particular the delays are upper bounded by some constant given any offered load and even decrease in the offered load if peers exit the system slowly. We validate many of our findings by analyzing traces obtained from a second generation P2P application called BitTorrent.",
"In this paper we model and study the performance of peer-to-peer (P2P) file sharing systems in terms of their 'service capacity'. We identify two regimes of interest: the transient and stationary regimes. We show that in both regimes, the performance of P2P systems exhibits a favorable scaling with the offered load. P2P systems achieve this by efficiently leveraging the service capacity of other peers, who possibly are concurrently downloading the same file. Therefore to improve the performance, it is important to design mechanisms to give peers incentives for sharing cooperation. One approach is to introduce mechanisms for resource allocation that are 'fair', such that a peer's performance improves with his contributions. We find that some intuitive 'fairness' notions may unexpectedly lead to 'unfair' allocations, which do not provide the right incentives for peers. Thus, implementation of P2P systems may want to compromise the degree of 'fairness' in favor of maintaining system robustness and reducing overheads."
]
}
|
1401.5099
|
2077195786
|
Most peer-to-peer content distribution systems require the peers to privilege the welfare of the overall system over greedily maximizing their own utility. When downloading a file broken up into multiple pieces, peers are often asked to pass on some possible download opportunities of common pieces in order to favor rare pieces. This is to avoid the missing piece syndrome, which throttles the download rate of the peer-to-peer system to that of downloading the file straight from the server. In other situations, peers are asked to stay in the system even though they have collected all the file's pieces and have an incentive to leave right away. We propose a mechanism which allows peers to act greedily and yet stabilizes the peer-to-peer content sharing system. Our mechanism combines a fountain code at the server to generate innovative new pieces, and a prioritization for the server to deliver pieces only to new peers. While neither the fountain code nor the prioritization of new peers alone stabilizes the system, we demonstrate that their combination does, through both analytical and numerical evaluation.
|
@cite_25 generalizes the peer-to-peer setup to coupon collection, and studies the asymptotic behavior of such systems with respect to the sojourn time in the system under different types of encounters, including random encounters (which is what we consider in this paper). This work also considers the missing chunk syndrome but under a closed system.
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2010859647"
],
"abstract": [
"Motivated by the study of peer-to-peer file swarming systems a la BitTorrent, we introduce a probabilistic model of coupon replication systems. These systems consist of users, aiming to complete a collection of distinct coupons. Users are characterised by their current collection of coupons, and leave the system once they complete their coupon collection. The system evolution is then specified by describing how users of distinct types meet, and which coupons get replicated upon such encounters.For open systems, with exogenous user arrivals, we derive necessary and sufficient stability conditions in a layered scenario, where encounters are between users holding the same number of coupons. We also consider a system where encounters are between users chosen uniformly at random from the whole population. We show that performance, captured by sojourn time, is asymptotically optimal in both systems as the number of coupon types becomes large.We also consider closed systems with no exogenous user arrivals. In a special scenario where users have only one missing coupon, we evaluate the size of the population ultimately remaining in the system, as the initial number of users, N, goes to infinity. We show that this decreases geometrically with the number of coupons, K. In particular, when the ratio K log(N) is above a critical threshold, we prove that this number of left-overs is of order log(log(N)).These results suggest that performance of file swarming systems does not depend critically on either altruistic user behavior, or on load balancing strategies such as rarest first."
]
}
|
1401.5099
|
2077195786
|
Most peer-to-peer content distribution systems require the peers to privilege the welfare of the overall system over greedily maximizing their own utility. When downloading a file broken up into multiple pieces, peers are often asked to pass on some possible download opportunities of common pieces in order to favor rare pieces. This is to avoid the missing piece syndrome, which throttles the download rate of the peer-to-peer system to that of downloading the file straight from the server. In other situations, peers are asked to stay in the system even though they have collected all the file's pieces and have an incentive to leave right away. We propose a mechanism which allows peers to act greedily and yet stabilizes the peer-to-peer content sharing system. Our mechanism combines a fountain code at the server to generate innovative new pieces, and a prioritization for the server to deliver pieces only to new peers. While neither the fountain code nor the prioritization of new peers alone stabilizes the system, we demonstrate that their combination does, through both analytical and numerical evaluation.
|
@cite_18 studies stability in the case of two-chunk systems. This is complementary to our work, as the benefits of our approach depend on the file being decomposed into a large number of chunks (namely, the available bandwidth scales with the number of chunks @math of a file).
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"2080384304"
],
"abstract": [
"We consider five different peer-to-peer file-sharing systems with two chunks, assuming non-altruistic peers who leave the system immediately after downloading the second chunk. Our aim is to find chunk selection algorithms that have provably stable performance with any input rate. We show that many algorithms that first looked promising lead to unstable or oscillating behaviour. However, we end up with a system with desirable properties. Most of our rigorous results concern the corresponding deterministic large system limits, but in the two simplest cases we provide proofs for the stochastic systems also."
]
}
|
1401.5099
|
2077195786
|
Most peer-to-peer content distribution systems require the peers to privilege the welfare of the overall system over greedily maximizing their own utility. When downloading a file broken up into multiple pieces, peers are often asked to pass on some possible download opportunities of common pieces in order to favor rare pieces. This is to avoid the missing piece syndrome, which throttles the download rate of the peer-to-peer system to that of downloading the file straight from the server. In other situations, peers are asked to stay in the system even though they have collected all the file's pieces and have an incentive to leave right away. We propose a mechanism which allows peers to act greedily and yet stabilizes the peer-to-peer content sharing system. Our mechanism combines a fountain code at the server to generate innovative new pieces, and a prioritization for the server to deliver pieces only to new peers. While neither the fountain code nor the prioritization of new peers alone stabilizes the system, we demonstrate that their combination does, through both analytical and numerical evaluation.
|
@cite_24 observes that missing chunk syndrome leads to a bandwidth bottleneck at the seed that can lead to the underutilization of the aggregate capacity, and proposes to share this capacity across the download of multiple files.
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"2117391106"
],
"abstract": [
"Recent work on BitTorrent swarms has demonstrated that a bandwidth bottleneck at the seed can lead to the underutilization of the aggregate swarm capacity. Bandwidth underutilization also occurs naturally in mobile peer-to-peer swarms, as a mobile peer may not always be within the range of peers storing the content it desires. We argue in this paper that, in both cases, idle bandwidth can be exploited to allow content sharing across multiple swarms, thereby forming a universal swarm system. We propose a model for universal swarms that applies to a variety of peer-to-peer environments, both mobile and online. Through a fluid limit analysis, we demonstrate that universal swarms have significantly improved stability properties compared to individually autonomous swarms. In addition, by studying a swarm's stationary behavior, we identify content replication ratios across different swarms that minimize the average sojourn time in the system. We then propose a content exchange scheme between peers that leads to these optimal replication ratios, and study its convergence numerically."
]
}
|
1401.5099
|
2077195786
|
Most peer-to-peer content distribution systems require the peers to privilege the welfare of the overall system over greedily maximizing their own utility. When downloading a file broken up into multiple pieces, peers are often asked to pass on some possible download opportunities of common pieces in order to favor rare pieces. This is to avoid the missing piece syndrome, which throttles the download rate of the peer-to-peer system to that of downloading the file straight from the server. In other situations, peers are asked to stay in the system even though they have collected all the file's pieces and have an incentive to leave right away. We propose a mechanism which allows peers to act greedily and yet stabilizes the peer-to-peer content sharing system. Our mechanism combines a fountain code at the server to generate innovative new pieces, and a prioritization for the server to deliver pieces only to new peers. While neither the fountain code nor the prioritization of new peers alone stabilizes the system, we demonstrate that their combination does, through both analytical and numerical evaluation.
|
@cite_0 studies the stability of P2P systems, but takes into account the locality of the peers and the RTT between them, observing that P2P systems might exhibit what the authors denote as super-scalability, namely a reduction of the delays as the number of peers grows.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2150381607"
],
"abstract": [
"We propose a new model for peer-to-peer networking which takes the network bottlenecks into account beyond the access. This model can cope with key features of P2P networking like degree or locality constraints together with the fact that distant peers often have a smaller rate than nearby peers. Using a network model based on rate functions, we give a closed form expression of peers download performance in the system's fluid limit, as well as approximations for the other cases. Our results show the existence of realistic settings for which the average download time is a decreasing function of the load, a phenomenon that we call super-scalability."
]
}
|
1401.5151
|
2058324238
|
A signal recovery scheme is developed for linear observation systems based on expectation consistent (EC) mean field approximation. Approximate message passing (AMP) is known to be consistent with the results obtained using the replica theory, which is supposed to be exact in the large system limit, when each entry of the observation matrix is independently generated from an identical distribution. However, this is not necessarily the case for general matrices. We show that EC recovery exhibits consistency with the replica theory for a wider class of random observation matrices. This is numerically confirmed by experiments for the Bayesian optimal signal recovery of compressed sensing using random row-orthogonal matrices.
|
Reference @cite_17 used the replica method to find a decoupled formulation for the input-output statistics of a CS system whose measurement matrix is composed of independently and identically distributed (i.i.d.) entries. As a corollary, this leads to a computationally feasible characterization of the MMSE as well. The MMSE of a similar i.i.d. setup was later evaluated directly in @cite_19 by using mathematically rigorous methods. Numerical results therein verified the accuracy of the earlier replica analysis. Finally, non-i.i.d. sensing matrices were considered in @cite_9 , where the replica method was used to find the support recovery performance of a class of CS systems.
|
{
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_17"
],
"mid": [
"2949267240",
"2160051394",
"2550925785"
],
"abstract": [
"Compressed sensing is a signal processing technique in which data is acquired directly in a compressed form. There are two modeling approaches that can be considered: the worst-case (Hamming) approach and a statistical mechanism, in which the signals are modeled as random processes rather than as individual sequences. In this paper, the second approach is studied. In particular, we consider a model of the form @math , where each component of @math is given by @math , where @math are i.i.d. Gaussian random variables, and @math are binary random variables independent of @math , and not necessarily independent and identically distributed (i.i.d.), @math is a random matrix with i.i.d. entries, and @math is white Gaussian noise. Using a direct relationship between optimum estimation and certain partition functions, and by invoking methods from statistical mechanics and from random matrix theory (RMT), we derive an asymptotic formula for the minimum mean-square error (MMSE) of estimating the input vector @math given @math and @math , as @math , keeping the measurement rate, @math , fixed. In contrast to previous derivations, which are based on the replica method, the analysis carried out in this paper is rigorous.",
"Consider a Bernoulli-Gaussian complex n-vector whose components are Vi = XiBi, with Xi ∼ CN(0, Px) and binary Bi mutually independent and iid across i. This random q-sparse vector is multiplied by a square random matrix U, and a randomly chosen subset, of average size n p, p ∈ [0,1], of the resulting vector components is then observed in additive Gaussian noise. We extend the scope of conventional noisy compressive sampling models where U is typically a matrix with iid components, to allow U satisfying a certain freeness condition. This class of matrices encompasses Haar matrices and other unitarily invariant matrices. We use the replica method and the decoupling principle of Guo and Verdu, as well as a number of information-theoretic bounds, to study the input-output mutual information and the support recovery error rate in the limit of n → ∞. We also extend the scope of the large deviation approach of Rangan and characterize the performance of a class of estimators encompassing thresholded linear MMSE and l1 relaxation.",
"Compressed sensing deals with the reconstruction of a high-dimensional signal from far fewer linear measurements, where the signal is known to admit a sparse representation in a certain linear space. The asymptotic scaling of the number of measurements needed for reconstruction as the dimension of the signal increases has been studied extensively. This work takes a fundamental perspective on the problem of inferring about individual elements of the sparse signal given the measurements, where the dimensions of the system become increasingly large. Using the replica method, the outcome of inferring about any fixed collection of signal elements is shown to be asymptotically decoupled, i.e., those elements become independent conditioned on the measurements. Furthermore, the problem of inferring about each signal element admits a single-letter characterization in the sense that the posterior distribution of the element, which is a sufficient statistic, becomes asymptotically identical to the posterior of inferring about the same element in scalar Gaussian noise. The result leads to simple characterization of all other elemental metrics of the compressed sensing problem, such as the mean squared error and the error probability for reconstructing the support set of the sparse signal. Finally, the single-letter characterization is rigorously justified in the special case of sparse measurement matrices where belief propagation becomes asymptotically optimal."
]
}
|
1401.5151
|
2058324238
|
A signal recovery scheme is developed for linear observation systems based on expectation consistent (EC) mean field approximation. Approximate message passing (AMP) is known to be consistent with the results obtained using the replica theory, which is supposed to be exact in the large system limit, when each entry of the observation matrix is independently generated from an identical distribution. However, this is not necessarily the case for general matrices. We show that EC recovery exhibits consistency with the replica theory for a wider class of random observation matrices. This is numerically confirmed by experiments for the Bayesian optimal signal recovery of compressed sensing using random row-orthogonal matrices.
|
To the best of our knowledge, computationally feasible algorithms that approximately perform the Bayesian recovery were initially developed for a simple perceptron (linear classifier) @cite_13 and later for CDMA @cite_20 @cite_32 . Recently, a similar idea was applied to CS @cite_26 @cite_29 @cite_3 as approximate message passing (AMP), and was summarized in a general formulation termed generalized approximate message passing (GAMP) @cite_1 . However, these studies rely on the assumption that each entry of @math , @math , is i.i.d., and their applicability to other matrix ensembles is not guaranteed. In fact, the necessity of accounting for a certain characteristic feature of @math when constructing the approximation was pointed out in @cite_30 , and its significance was tested for the simple perceptron @cite_28 , CDMA @cite_4 , and MIMO @cite_27 . Here, we show how this approach is employed for the signal recovery of linear observations and examine its significance for an example of CS.
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_4",
"@cite_28",
"@cite_29",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_27",
"@cite_13",
"@cite_20"
],
"mid": [
"2073521724",
"2082029531",
"2081785908",
"1977099572",
"",
"2951090971",
"2160968730",
"",
"1556236064",
"",
"1984830458"
],
"abstract": [
"We develop a generalization of the TAP mean field approach of disorder physics which makes the method applicable to the computation of approximate averages in probabilistic models for real data. In contrast to the conventional TAP approach, where the knowledge of the distribution of couplings between the random variables is required, our method adapts to the concrete set of couplings. We show the significance of approach in two ways: Our approach reproduces replica symmetric results for a wide class of toy models (assuming a nonglassy phase) with given disorder distributions in the thermodynamic limit. On the other hand, simulations on a real data model demonstrate that the method achieves more accurate predictions as compared to conventional TAP approaches.",
"Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity–undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity–undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity–undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity–undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.",
"An approach to analyze the performance of the code division multiple access (CDMA) scheme, which is a core technology used in modern wireless communication systems, is provided. The approach characterizes the objective system by the eigenvalue spectrum of a cross-correlation matrix composed of signature sequences used in CDMA communication, which enable us to handle a wider class of CDMA systems beyond the basic model reported by Tanaka in Europhys. Lett., 54 (2001) 540. The utility of the scheme is shown by analyzing a system in which the generation of signature sequences is designed for enhancing the orthogonality.",
"Learning behavior of simple perceptrons is analyzed for a teacher–student scenario in which output labels are provided by a teacher network for a set of possibly correlated input patterns, and such that the teacher and student networks are of the same type. Our main concern is the effect of statistical correlations among the input patterns on learning performance. For this purpose, we extend to the teacher–student scenario a methodology for analyzing randomly labeled patterns recently developed in Shinzato and Kabashima 2008 J. Phys. A: Math. Theor. 41 324013. This methodology is used for analyzing situations in which orthogonality of the input patterns is enhanced in order to optimize the learning performance.",
"",
"We consider the estimation of an i.i.d. random vector observed through a linear transform followed by a componentwise, probabilistic (possibly nonlinear) measurement channel. A novel algorithm, called generalized approximate message passing (GAMP), is presented that provides computationally efficient approximate implementations of max-sum and sum-problem loopy belief propagation for such problems. The algorithm extends earlier approximate message passing methods to incorporate arbitrary distributions on both the input and output of the transform and can be applied to a wide range of problems in nonlinear compressed sensing and learning. Extending an analysis by Bayati and Montanari, we argue that the asymptotic componentwise behavior of the GAMP method under large, i.i.d. Gaussian transforms is described by a simple set of state evolution (SE) equations. From the SE equations, one can predict the asymptotic value of virtually any componentwise performance metric including mean-squared error or detection accuracy. Moreover, the analysis is valid for arbitrary input and output distributions, even when the corresponding optimization problems are non-convex. The results match predictions by Guo and Wang for relaxed belief propagation on large sparse matrices and, in certain instances, also agree with the optimal performance predicted by the replica method. The GAMP methodology thus provides a computationally efficient methodology, applicable to a large class of non-Gaussian estimation problems with precise asymptotic performance guarantees.",
"We present a theory to analyze the performance of the parallel interference canceller (PIC) for code-division multiple-access (CDMA) multiuser detection, applied to a randomly spread, fully synchronous baseband uncoded CDMA channel model with additive white Gaussian noise under perfect power control in the large-system limit. We reformulate PIC as an approximation to the belief propagation algorithm for the detection problem. We then apply the density evolution framework to analyze its detection dynamics. It turns out that density evolution for PIC is essentially the same as statistical neurodynamics, a theory to describe dynamics of a certain type of neural network model. Adopting this correspondence, we develop the density evolution framework for PIC using statistical neurodynamics. The resulting formulas, however, are only approximately correct for describing detection dynamics of PIC even in the large-system limit, because we ignore the Onsager reaction terms in the derivation. We then propose a modified PIC algorithm, in which we subtract the Onsager reaction terms algorithmically, for which the density evolution formulas give a correct description of the detection dynamics in the large-system limit.",
"",
"The Kronecker channel model of wireless communication is analyzed using statistical mechanics methods. In the model, spatial proximities among transmission reception antennas are taken into account as certain correlation matrices, which generally yield nontrivial dependence among symbols to be estimated. This prevents accurate assessment of the communication performance by naively using a previously developed analytical scheme based on a matrix integration formula. In order to resolve this difficulty, we develop a formalism that can formally handle the correlations in Kronecker models based on the known scheme. Unfortunately, direct application of the developed scheme is, in general, practically difficult. However, the formalism is still useful, indicating that the effect of the correlations generally increase after the fourth order with respect to correlation strength. Therefore, the known analytical scheme offers a good approximation in performance evaluation when the correlation strength is sufficiently small. For a class of specific correlation, we show that the performance analysis can be mapped to the problem of one-dimensional spin systems in random fields, which can be investigated without approximation by the belief propagation algorithm.",
"",
"An iterative algorithm for the multiuser detection problem that arises in code division multiple access (CDMA) systems is developed on the basis of Pearl's belief propagation (BP). We show that the BP-based algorithm exhibits nearly optimal performance in a practical time scale by utilizing the central limit theorem and self-averaging property appropriately, whereas direct application of BP to the detection problem is computationally difficult and far from practical. We further present close relationships of the proposed algorithm to the Thouless–Anderson–Palmer approach and replica analysis known in spin-glass research."
]
}
|
1401.5051
|
16289923
|
In this paper, we present a novel approach called WaterFowl for the storage of RDF triples that addresses some key issues in the contexts of big data and the Semantic Web. The architecture of our prototype, largely based on the use of succinct data structures, enables the representation of triples in a self-indexed, compact manner without requiring decompression at query answering time. Moreover, it is adapted to efficiently support RDF and RDFS entailment regimes thanks to an optimized encoding of ontology concepts and properties that does not require a complete inference materialization or extensive query rewriting algorithms. This approach implies making a distinction between the terminological and the assertional components of the knowledge base early in the process of data preparation, i.e., preprocessing the data before storing it in our structures. The paper describes the complete architecture of this system and presents some preliminary results obtained from evaluations conducted on our first prototype.
|
Although the first RDF stores appeared in 2002, @math , @cite_15 , this research field only became really active in 2007, starting with the publication of @cite_22 . Before that paper, most systems were storing their triples in a relational database management system used as the backend storage, following different approaches, @math , a triple table ( @math , a single table with 3 columns for s, p and o) or variants such as the clustered property table or the property-class table. Abadi's paper motivated the development of systems that did not use a relational database management system as a storage layer and that considered indexes with more attention than previous solutions. Hence, solutions such as Hexastore @cite_17 and RDF-3X @cite_8 were designed using multiple indexes, respectively 6 and 15, which had a direct impact on query answering performance but also on the memory footprint of the databases. Matrix Bit loaded @cite_3 is another multiple-index solution, which stores its data in bit matrices. Compared to these systems, our approach proposes a single structure that enables indexed access on the three components of the triples.
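The multiple-index idea above — one sorted permutation of (s, p, o) per access pattern, so any triple pattern with a bound prefix becomes a binary-search range scan — can be sketched in a few lines. This is a toy illustration only, not Hexastore's or RDF-3X's actual code; the class name and the choice of all six orderings are ours:

```python
from bisect import bisect_left
from itertools import permutations

class MultiIndexTripleStore:
    """Toy sketch of the multiple-index idea: one sorted list per
    permutation of (s, p, o); a pattern whose bound components form a
    prefix of some ordering is answered by a binary-search range scan."""

    ORDERS = list(permutations((0, 1, 2)))  # the 6 component orderings

    def __init__(self, triples):
        self.indexes = {
            order: sorted(tuple(t[i] for i in order) for t in triples)
            for order in self.ORDERS
        }

    def match(self, s=None, p=None, o=None):
        pattern = (s, p, o)

        # length of the leading run of bound components under an ordering
        def bound_prefix_len(order):
            return next((k for k, i in enumerate(order)
                         if pattern[i] is None), 3)

        # pick the ordering with the longest bound prefix
        order = max(self.ORDERS, key=bound_prefix_len)
        prefix = tuple(pattern[i] for i in order[:bound_prefix_len(order)])
        index = self.indexes[order]
        out = []
        for row in index[bisect_left(index, prefix):]:
            if row[:len(prefix)] != prefix:
                break  # left the matching range
            # map the permuted row back to (s, p, o) order
            out.append(tuple(row[order.index(i)] for i in range(3)))
        return out
```

A real engine stores compressed, disk-resident versions of these permutations (RDF-3X additionally keeps aggregated projections, hence its 15 indexes), but the lookup principle is the same.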
|
{
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_3",
"@cite_15",
"@cite_17"
],
"mid": [
"2140613126",
"2000656232",
"2161584750",
"",
"2135577024"
],
"abstract": [
"Efficient management of RDF data is an important factor in realizing the Semantic Web vision. Performance and scalability issues are becoming increasingly pressing as Semantic Web technology is applied to real-world applications. In this paper, we examine the reasons why current data management solutions for RDF data scale poorly, and explore the fundamental scalability limitations of these approaches. We review the state of the art for improving performance for RDF databases and consider a recent suggestion, \"property tables.\" We then discuss practically and empirically why this solution has undesirable features. As an improvement, we propose an alternative solution: vertically partitioning the RDF data. We compare the performance of vertical partitioning with prior art on queries generated by a Web-based RDF browser over a large-scale (more than 50 million triples) catalog of library data. Our results show that a vertical partitioned schema achieves similar performance to the property table technique while being much simpler to design. Further, if a column-oriented DBMS (a database architected specially for the vertically partitioned case) is used instead of a row-oriented DBMS, another order of magnitude performance improvement is observed, with query times dropping from minutes to several seconds.",
"RDF is a data model for schema-free structured information that is gaining momentum in the context of Semantic-Web data, life sciences, and also Web 2.0 platforms. The \"pay-as-you-go\" nature of RDF and the flexible pattern-matching capabilities of its query language SPARQL entail efficiency and scalability challenges for complex queries including long join paths. This paper presents the RDF-3X engine, an implementation of SPARQL that achieves excellent performance by pursuing a RISC-style architecture with streamlined indexing and query processing. The physical design is identical for all RDF-3X databases regardless of their workloads, and completely eliminates the need for index tuning by exhaustive indexes for all permutations of subject-property-object triples and their binary and unary projections. These indexes are highly compressed, and the query processor can aggressively leverage fast merge joins with excellent performance of processor caches. The query optimizer is able to choose optimal join orders even for complex queries, with a cost model that includes statistical synopses for entire join paths. Although RDF-3X is optimized for queries, it also provides good support for efficient online updates by means of a staging architecture: direct updates to the main database indexes are deferred, and instead applied to compact differential indexes which are later merged into the main indexes in a batched manner. Experimental studies with several large-scale datasets with more than 50 million RDF triples and benchmark queries that include pattern matching, manyway star-joins, and long path-joins demonstrate that RDF-3X can outperform the previously best alternatives by one or two orders of magnitude.",
"The Semantic Web community, until now, has used traditional database systems for the storage and querying of RDF data. The SPARQL query language also closely follows SQL syntax. As a natural consequence, most of the SPARQL query processing techniques are based on database query processing and optimization techniques. For SPARQL join query optimization, previous works like RDF-3X and Hexastore have proposed to use 6-way indexes on the RDF data. Although these indexes speed up merge-joins by orders of magnitude, for complex join queries generating large intermediate join results, the scalability of the query processor still remains a challenge. In this paper, we introduce (i) BitMat - a compressed bit-matrix structure for storing huge RDF graphs, and (ii) a novel, light-weight SPARQL join query processing method that employs an initial pruning technique, followed by a variable-binding-matching algorithm on BitMats to produce the final results. Our query processing method does not build intermediate join tables and works directly on the compressed data. We have demonstrated our method against RDF graphs of upto 1.33 billion triples - the largest among results published until now (single-node, non-parallel systems), and have compared our method with the state-of-the-art RDF stores - RDF-3X and MonetDB. Our results show that the competing methods are most effective with highly selective queries. On the other hand, BitMat can deliver 2-3 orders of magnitude better performance on complex, low-selectivity queries over massive data.",
"",
"Despite the intense interest towards realizing the Semantic Web vision, most existing RDF data management schemes are constrained in terms of efficiency and scalability. Still, the growing popularity of the RDF format arguably calls for an effort to offset these drawbacks. Viewed from a relational-database perspective, these constraints are derived from the very nature of the RDF data model, which is based on a triple format. Recent research has attempted to address these constraints using a vertical-partitioning approach, in which separate two-column tables are constructed for each property. However, as we show, this approach suffers from similar scalability drawbacks on queries that are not bound by RDF property value. In this paper, we propose an RDF storage scheme that uses the triple nature of RDF as an asset. This scheme enhances the vertical partitioning idea and takes it to its logical conclusion. RDF data is indexed in six possible ways, one for each possible ordering of the three RDF elements. Each instance of an RDF element is associated with two vectors; each such vector gathers elements of one of the other types, along with lists of the third-type resources attached to each vector element. Hence, a sextuple-indexing scheme emerges. This format allows for quick and scalable general-purpose query processing; it confers significant advantages (up to five orders of magnitude) compared to previous approaches for RDF data management, at the price of a worst-case five-fold increase in index space. We experimentally document the advantages of our approach on real-world and synthetic data sets with practical queries."
]
}
|
1401.5051
|
16289923
|
In this paper, we present a novel approach called WaterFowl for the storage of RDF triples that addresses some key issues in the contexts of big data and the Semantic Web. The architecture of our prototype, largely based on the use of succinct data structures, enables the representation of triples in a self-indexed, compact manner without requiring decompression at query answering time. Moreover, it is adapted to efficiently support RDF and RDFS entailment regimes thanks to an optimized encoding of ontology concepts and properties that does not require a complete inference materialization or extensive query rewriting algorithms. This approach implies making a distinction between the terminological and the assertional components of the knowledge base early in the process of data preparation, i.e., preprocessing the data before storing it in our structures. The paper describes the complete architecture of this system and presents some preliminary results obtained from evaluations conducted on our first prototype.
|
Our approach is inspired by the HDT @cite_6 solution, which mainly focuses on data exchange (and thus on data compression). Its original motivation was to support the exchange of large data sets highly compressed using SDS. Later, @cite_0 presented HDT FoQ, an extension of the HDT structure that enables some simple data retrieval operations. Nevertheless, this last contribution did not allow any form of reasoning, nor did it provide methods to query the data sets. In fact, WaterFowl takes the HDT FoQ approach further to its logical conclusion by using a pair of wavelet trees in the object layer (HDT FoQ uses an adjacency list for this layer) and by integrating a complete query processing solution with complete RDFS reasoning (handling any inference within RDFS expressiveness). This is made possible by an adaptation of both the dictionary and the triple structures. Note that this adaptation retains the nice compression properties of HDT FoQ (see Section 5).
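A wavelet tree of the kind used in the object layer answers access and rank queries without ever decompressing the sequence. The following is a minimal, unoptimized sketch (pointer-based, with linear-time bit counting in place of the constant-time succinct rank structures a real SDS implementation would use):

```python
class WaveletTree:
    """Minimal binary wavelet tree over an integer alphabet [lo, hi),
    supporting access(i) and rank(c, i). Educational sketch only."""

    def __init__(self, seq, lo=None, hi=None):
        if lo is None:                      # root call: derive the alphabet
            lo, hi = 0, max(seq) + 1
        self.lo, self.hi = lo, hi
        if hi - lo > 1 and seq:
            mid = (lo + hi) // 2
            # bitmap: 1 if the symbol goes to the upper half
            self.bits = [1 if c >= mid else 0 for c in seq]
            self.left = WaveletTree([c for c in seq if c < mid], lo, mid)
            self.right = WaveletTree([c for c in seq if c >= mid], mid, hi)
        else:
            self.bits = None                # leaf: a single symbol

    def access(self, i):
        """Symbol at position i of the original sequence."""
        node = self
        while node.bits is not None:
            b = node.bits[i]
            i = node.bits[:i].count(b)      # position in the chosen child
            node = node.right if b else node.left
        return node.lo

    def rank(self, c, i):
        """Number of occurrences of symbol c in seq[:i]."""
        node = self
        while node.bits is not None:
            mid = (node.lo + node.hi) // 2
            ones = node.bits[:i].count(1)
            if c >= mid:
                i = ones
                node = node.right
            else:
                i -= ones
                node = node.left
        return i
```

Both operations descend one root-to-leaf path, so they cost O(log sigma) rank queries on bitmaps — which is what makes self-indexed triple traversal possible without unpacking the data.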
|
{
"cite_N": [
"@cite_0",
"@cite_6"
],
"mid": [
"51213517",
"1599916961"
],
"abstract": [
"Huge RDF datasets are currently exchanged on textual RDF formats, hence consumers need to post-process them using RDF stores for local consumption, such as indexing and SPARQL query. This results in a painful task requiring a great effort in terms of time and computational resources. A first approach to lightweight data exchange is a compact (binary) RDF serialization format called HDT . In this paper, we show how to enhance the exchanged HDT with additional structures to support some basic forms of SPARQL query resolution without the need of \"unpacking\" the data. Experiments show that i) with an exchanging efficiency that outperforms universal compression, ii) post-processing now becomes a fast process which iii) provides competitive query performance at consumption.",
"Increasingly huge RDF data sets are being published on the Web. Currently, they use different syntaxes of RDF, contain high levels of redundancy and have a plain indivisible structure. All this leads to fuzzy publications, inefficient management, complex processing and lack of scalability. This paper presents a novel RDF representation (HDT) which takes advantage of the structural properties of RDF graphs for splitting and representing, efficiently, three components of RDF data: Header, Dictionary and Triples structure. On-demand management operations can be implemented on top of HDT representation. Experiments show that data sets can be compacted in HDT by more than fifteen times the current naive representation, improving parsing and processing while keeping a consistent publication scheme. For exchanging, specific compression techniques over HDT improve current compression solutions."
]
}
|
1401.5051
|
16289923
|
In this paper, we present a novel approach called WaterFowl for the storage of RDF triples that addresses some key issues in the contexts of big data and the Semantic Web. The architecture of our prototype, largely based on the use of succinct data structures, enables the representation of triples in a self-indexed, compact manner without requiring decompression at query answering time. Moreover, it is adapted to efficiently support RDF and RDFS entailment regimes thanks to an optimized encoding of ontology concepts and properties that does not require a complete inference materialization or extensive query rewriting algorithms. This approach implies making a distinction between the terminological and the assertional components of the knowledge base early in the process of data preparation, i.e., preprocessing the data before storing it in our structures. The paper describes the complete architecture of this system and presents some preliminary results obtained from evaluations conducted on our first prototype.
|
Concerning query processing in the presence of inferences, several approaches have been proposed. Among them, the materialization of all inferences within the data storage solution is a popular one; it is generally performed using an off-line forward chaining approach. This avoids query reformulation at run time but is associated with an expansion of the memory footprint. Sesame http://www.openrdf.org is a well-known system adopting inference materialization. Another approach consists in performing query rewriting at run time. It guarantees a light memory footprint but is associated with the possible generation of an exponential number of queries. Presto @cite_14 and Requiem @cite_18 are systems adopting this approach with different algorithms. By rewriting into nonrecursive Datalog, Presto manages to perform this operation in non-exponential time. The technique proposed in @cite_23 produces worst-case polynomial rewritings, but the complex structure it is based on makes its evaluation complex to perform.
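The materialization approach can be illustrated by a naive forward-chaining fixpoint over two RDFS entailment rules (a toy sketch; a real reasoner covers the full RDFS rule set and works incrementally rather than recomputing from scratch):

```python
def rdfs_closure(triples):
    """Naive forward chaining to a fixpoint over two RDFS rules:
      rdfs11: (A subClassOf B), (B subClassOf C) -> (A subClassOf C)
      rdfs9:  (x type A), (A subClassOf B)       -> (x type B)
    Returns the materialized set of triples."""
    SUB, TYPE = "rdfs:subClassOf", "rdf:type"
    closure = set(triples)
    changed = True
    while changed:
        changed = False
        subs = [(s, o) for s, p, o in closure if p == SUB]
        derived = set()
        # rdfs11: transitivity of subClassOf
        for a, b in subs:
            for c, d in subs:
                if b == c:
                    derived.add((a, SUB, d))
        # rdfs9: type propagation along subClassOf
        for s, p, o in closure:
            if p == TYPE:
                for a, b in subs:
                    if o == a:
                        derived.add((s, TYPE, b))
        new = derived - closure
        if new:
            closure |= new
            changed = True
    return closure
```

The memory-footprint cost mentioned above is visible directly: every entailed triple is stored explicitly, so deep class hierarchies with many typed instances inflate the store quadratically in the worst case.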
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_23"
],
"mid": [
"1756261662",
"1606607399",
"1784178627"
],
"abstract": [
"The QL profile of OWL 2 has been designed so that it is possible to use database technology for query answering via query rewriting. We present a comparison of our resolution based rewriting algorithm with the standard algorithm proposed by , implementing both and conducting an empirical evaluation using ontologies and queries derived from realistic applications. The results indicate that our algorithm produces significantly smaller rewritings in most cases, which could be important for practicality in realistic applications.",
"The DL-Lite family of Description Logics has been designed with the specific goal of allowing for answering complex queries (in particular, conjunctive queries) over ontologies with very large instance sets (ABoxes). So far, in DL-Lite systems, this goal has been actually achieved only for relatively simple (short) conjunctive queries. In this paper we present Presto, a new query answering technique for DL-Lite ontologies, and an experimental comparison of Presto with the main previous approaches to query answering in DL-Lite. In practice, our experiments show that, in real ontologies, current techniques are only able to answer conjunctive queries of less than 7-10 atoms (depending on the complexity of the TBox), while Presto is actually able to handle conjunctive queries of up to 30 atoms. Furthermore, in the cases that are already successfully handled by previous approaches, Presto is significantly more efficient.",
"We consider the setting of ontological database access, where an A-box is given in form of a relational database D and where a Boolean conjunctive query q has to be evaluated against D modulo a T-box Σ formulated in DL-Lite or Linear Datalog±. It is well-known that (Σ, q) can be rewritten into an equivalent nonrecursive Datalog program P that can be directly evaluated over D. However, for Linear Datalog± or for DL-Lite versions that allow for role inclusion, the rewriting methods described so far result in a nonrecursive Datalog program P of size exponential in the joint size of Σ and q. This gives rise to the interesting question of whether such a rewriting necessarily needs to be of exponential size. In this paper we show that it is actually possible to translate (Σ, q) into a polynomially sized equivalent nonrecursive Datalog program P."
]
}
|
1401.5051
|
16289923
|
In this paper, we present a novel approach called WaterFowl for the storage of RDF triples that addresses some key issues in the contexts of big data and the Semantic Web. The architecture of our prototype, largely based on the use of succinct data structures, enables the representation of triples in a self-indexed, compact manner without requiring decompression at query answering time. Moreover, it is adapted to efficiently support RDF and RDFS entailment regimes thanks to an optimized encoding of ontology concepts and properties that does not require a complete inference materialization or extensive query rewriting algorithms. This approach implies making a distinction between the terminological and the assertional components of the knowledge base early in the process of data preparation, i.e., preprocessing the data before storing it in our structures. The paper describes the complete architecture of this system and presents some preliminary results obtained from evaluations conducted on our first prototype.
|
The encoding of ontology elements, @math , concepts and properties, used in our system is related to a third approach, which consists in encoding elements in a clever way that retains the subsumption hierarchy. This is the approach presented in @cite_9 and implemented in the Quest system (a relational database management system). The work of Rodriguez-Muro @math @cite_9 relies on integer identifiers modeling the subsumption relationships, which are used to rewrite SQL queries ranging over identifier intervals, @math , specifying boundaries over indexed fields in the WHERE clause of an SQL query. In comparison, our work tackles the encoding at the bit level and focuses on sharing common prefixes in the encoding of the identifiers (see Section 4.1). This approach allows us to rewrite the queries in terms of rank and select operations, @math , searching for a pattern corresponding to some of the most significant bits of a concept or property identifier. Furthermore, it allows a high compression rate and does not require extra specific indexing processes.
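The prefix-sharing idea can be sketched as follows (a hypothetical encoding for illustration, not WaterFowl's actual scheme): give each concept a bit string that extends its parent's code, so "C is subsumed by D" holds exactly when code(C) starts with code(D), and a subsumption check becomes a prefix match on the most significant bits:

```python
def assign_prefix_codes(children, root):
    """Assign each concept a bit-string identifier extending its
    parent's code. `children` maps a concept to its direct subconcepts.
    Toy scheme: each level uses just enough bits for its fan-out."""
    codes = {}

    def visit(node, code):
        codes[node] = code
        kids = children.get(node, [])
        width = max(1, (len(kids) - 1).bit_length())  # bits per child slot
        for k, child in enumerate(kids):
            visit(child, code + format(k, f"0{width}b"))

    visit(root, "")
    return codes

def is_subsumed_by(codes, c, d):
    """Subsumption test = prefix match on the most significant bits."""
    return codes[c].startswith(codes[d])
```

With fixed-width integer identifiers, the same test can be done with a single bitwise mask-and-compare, which is what makes subsumption-aware query rewriting cheap: no interval tables and no extra index are needed.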
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"1804717298"
],
"abstract": [
"Current techniques for query answering over DL-Lite ontologies have severe limitations in practice, since they either produce complex queries that are inefficient during execution, or require expensive data pre-processing. In light of this, we present two complementary sets of results that aim at improving the overall peformance of query answering systems. We show how to create ABox repositories that are complete w.r.t. a significant portion of DL-Lite TBoxes, including those expressed in RDFS, but where the data is not explicitly expanded. Second, we show how to characterize ABox completeness by means of dependencies, and how to use these and equivalence to optimize DL-Lite TBoxes. These results allow us to reduce the cost of query rewriting, often dramatically, and to generate highly efficient queries. We have implemented a novel system for query answering over DL-Lite ontologies that incorporates these techniques, and we present a series of data-intensive evaluations that show their effectiveness."
]
}
|
1401.5051
|
16289923
|
In this paper, we present a novel approach called WaterFowl for the storage of RDF triples that addresses some key issues in the contexts of big data and the Semantic Web. The architecture of our prototype, largely based on the use of succinct data structures, enables the representation of triples in a self-indexed, compact manner without requiring decompression at query answering time. Moreover, it is adapted to efficiently support RDF and RDFS entailment regimes thanks to an optimized encoding of ontology concepts and properties that does not require a complete inference materialization or extensive query rewriting algorithms. This approach implies making a distinction between the terminological and the assertional components of the knowledge base early in the process of data preparation, i.e., preprocessing the data before storing it in our structures. The paper describes the complete architecture of this system and presents some preliminary results obtained from evaluations conducted on our first prototype.
|
Finally, our solution focuses on the processing of SPARQL queries. It aims to minimize the memory footprint required during query execution and to perform optimizations in terms of the complexities of SDS operations such as rank, select, and access. Adapting some of the heuristics presented in @cite_21 and the optimization approaches of @cite_19 and @cite_16 , the system optimizes the execution of SDS operations. The ordering of basic graph pattern execution also takes into account simple statistics computed when generating the dictionaries (see Section ). BigOWLIM http://www.ontotext.com/owlim , like most existing RDF database systems, @math , RDF-3X and Jena TDB http://jena.apache.org/documentation/tdb , takes advantage of data statistics to organize the order of BGPs and thus optimize queries.
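A selectivity-based BGP ordering of the kind these heuristics describe can be sketched as a greedy sort on cardinality estimates. This is a crude illustration; the per-predicate counts and the fixed discount factors are assumptions of ours, not the cited systems' actual statistics:

```python
def order_bgp(patterns, stats):
    """Order triple patterns (s, p, o) most-selective-first.
    `stats` maps a predicate to its triple count (the kind of simple
    statistics gathered while building the dictionaries); None marks
    an unbound component. Heuristic estimate only."""
    total = sum(stats.values())

    def estimate(pattern):
        s, pred, o = pattern
        card = stats.get(pred, total)  # unknown/unbound predicate: worst case
        if s is not None:
            card /= 10.0               # crude: a bound subject is selective
        if o is not None:
            card /= 10.0               # likewise for a bound object
        return card

    return sorted(patterns, key=estimate)
```

Executing the cheapest pattern first keeps intermediate bindings small, which is exactly what the variable-counting and summary-statistics heuristics in the cited work aim at.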
|
{
"cite_N": [
"@cite_19",
"@cite_21",
"@cite_16"
],
"mid": [
"1993786250",
"2133479441",
"2132612426"
],
"abstract": [
"With the proliferation of the RDF data format, engines for RDF query processing are faced with very large graphs that contain hundreds of millions of RDF triples. This paper addresses the resulting scalability problems. Recent prior work along these lines has focused on indexing and other physical-design issues. The current paper focuses on join processing, as the fine-grained and schema-relaxed use of RDF often entails star- and chain-shaped join queries with many input streams from index scans. We present two contributions for scalable join processing. First, we develop very light-weight methods for sideways information passing between separate joins at query run-time, to provide highly effective filters on the input streams of joins. Second, we improve previously proposed algorithms for join-order optimization by more accurate selectivity estimations for very large RDF graphs. Experimental studies with several RDF datasets, including the UniProt collection, demonstrate the performance gains of our approach, outperforming the previously fastest systems by more than an order of magnitude.",
"Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web scale setting such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.",
"In this paper, we formalize the problem of Basic Graph Pattern (BGP) optimization for SPARQL queries and main memory graph implementations of RDF data. We define and analyze the characteristics of heuristics for selectivity-based static BGP optimization. The heuristics range from simple triple pattern variable counting to more sophisticated selectivity estimation techniques. Customized summary statistics for RDF data enable the selectivity estimation of joined triple patterns and the development of efficient heuristics. Using the Lehigh University Benchmark (LUBM), we evaluate the performance of the heuristics for the queries provided by the LUBM and discuss some of them in more details."
]
}
|
1401.4220
|
2474669789
|
We present a proximal quasi-Newton method in which the approximation of the Hessian has the special format of “identity minus rank one” (IMRO) in each iteration. The proposed structure enables us to effectively recover the proximal point. The algorithm is applied to @math -regularized least squares problems arising in many applications including sparse recovery in compressive sensing, machine learning, and statistics. Our numerical experiment suggests that the proposed technique competes favorably with other state-of-the-art solvers for this class of problems. We also provide a complexity analysis for variants of IMRO, showing that it matches known best bounds.
|
Algorithms that rely solely on the function value and the gradient at each iterate are referred to as first-order methods. Due to the large size of the problems arising in compressive sensing, first-order methods are particularly desirable for sparse recovery. Numerous gradient-based first-order algorithms have been proposed for sparse recovery; see for example @cite_27 @cite_5 @cite_36 @cite_49 @cite_33 @cite_19 @cite_53 @cite_9 .
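These cited solvers share a common template; the simplest member is proximal gradient (ISTA) for the l1-regularized least-squares problem min_x 0.5*||Ax-b||^2 + lam*||x||_1. A minimal NumPy sketch (an illustration of the template, not any of the cited solvers):

```python
import numpy as np

def ista(A, b, lam, steps=500):
    """Proximal-gradient (ISTA) sketch for
        min_x 0.5*||A x - b||^2 + lam*||x||_1.
    Each iteration needs only products with A and A^T plus a
    componentwise soft-thresholding — no Hessian, no matrix factorization."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)              # gradient of the smooth part
        z = x - g / L                      # forward (gradient) step
        # backward step: prox of (lam/L)*||.||_1 is soft-thresholding
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

The per-iteration cost is dominated by the two matrix-vector products, which is why such methods scale to the problem sizes compressive sensing produces; the cited algorithms differ mainly in step-size rules, continuation strategies, and acceleration.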
|
{
"cite_N": [
"@cite_33",
"@cite_36",
"@cite_53",
"@cite_9",
"@cite_19",
"@cite_27",
"@cite_49",
"@cite_5"
],
"mid": [
"1996287810",
"2009702064",
"",
"",
"2110505738",
"2109449402",
"2083042020",
"2028349405"
],
"abstract": [
"We propose simple and extremely efficient methods for solving the basis pursuit problem @math which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem @math for given matrix @math and vector @math . We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving @math and @math can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.",
"We present a framework for solving the large-scale @math -regularized convex minimization problem: [ |x |_1+ f(x). ] Our approach is based on two powerful algorithmic ideas: operator-splitting and continuation. Operator-splitting results in a fixed-point algorithm for any given scalar @math ; continuation refers to approximately following the path traced by the optimal value of @math as @math increases. In this paper, we study the structure of optimal solution sets, prove finite convergence for important quantities, and establish @math -linear convergence rates for the fixed-point algorithm applied to problems with @math convex, but not necessarily strictly convex. The continuation framework, motivated by our convergence results, is demonstrated to facilitate the construction of practical algorithms.",
"",
"",
"This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low-complexity, expressed in the wavelet coefficients, taking advantage of the well known sparsity of wavelet representations. Previous works have investigated wavelet-based restoration but, except for certain special cases, the resulting criteria are solved approximately or require demanding optimization methods. The EM algorithm herein proposed combines the efficient image representation offered by the discrete wavelet transform (DWT) with the diagonalization of the convolution operator obtained in the Fourier domain. Thus, it is a general-purpose approach to wavelet-based image restoration with computational complexity comparable to that of standard wavelet denoising schemes or of frequency domain deconvolution methods. The algorithm alternates between an E-step based on the fast Fourier transform (FFT) and a DWT-based M-step, resulting in an efficient iterative process requiring O(NlogN) operations per iteration. The convergence behavior of the algorithm is investigated, and it is shown that under mild conditions the algorithm converges to a globally optimal restoration. Moreover, our new approach performs competitively with, in some cases better than, the best existing methods in benchmark tests.",
"Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a sparseness-inducing regularization term. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution, and compressed sensing are a few well-known examples of this approach. This paper proposes gradient projection (GP) algorithms for the bound-constrained quadratic programming (BCQP) formulation of these problems. We test variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai-Borwein method. Computational experiments show that these GP approaches perform well in a wide range of applications, often being significantly faster (in terms of computation time) than competing methods. Although the performance of GP methods tends to degrade as the regularization term is de-emphasized, we show how they can be embedded in a continuation scheme to recover their efficient practical performance.",
"The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems.",
"Iterative shrinkage thresholding (IST) algorithms have been recently proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). It happens that the convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step IST (TwIST) algorithms, exhibiting much faster convergence rate than IST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (ℓp norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function, for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples."
]
}
|
1401.4220
|
2474669789
|
We present a proximal quasi-Newton method in which the approximation of the Hessian has the special format of “identity minus rank one” (IMRO) in each iteration. The proposed structure enables us to effectively recover the proximal point. The algorithm is applied to @math -regularized least squares problems arising in many applications including sparse recovery in compressive sensing, machine learning, and statistics. Our numerical experiment suggests that the proposed technique competes favorably with other state-of-the-art solvers for this class of problems. We also provide a complexity analysis for variants of IMRO, showing that it matches known best bounds.
|
In @cite_49 @cite_32 an efficient root-finding procedure is employed to find the solution of @math through solving a sequence of @math problems. In other words, a sequence of @math problems for different values of @math is solved using a spectral projected gradient method @cite_3 ; and as @math , the solution of the @math problem coincides with the solution of @math . In @cite_33 , the solution of the @math problem is recovered through solving a sequence of @math problems with an updated observation vector @math . GPSR @cite_27 is a gradient projection technique for solving the bound-constrained QP reformulation of @math .
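The subproblems in this root-finding framework are typically handled by spectral projected gradient iterations, whose core primitive is Euclidean projection onto an @math -ball. Below is a minimal sketch of that projection using the standard sort-and-threshold algorithm; the function name and interface are illustrative, not taken from the cited solvers.

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto {x : ||x||_1 <= tau} via sorting."""
    if np.abs(v).sum() <= tau:
        return v.copy()                           # already feasible
    u = np.sort(np.abs(v))[::-1]                  # magnitudes, descending
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    k = np.nonzero(u * idx > css - tau)[0][-1]    # last index with positive gap
    theta = (css[k] - tau) / (k + 1.0)            # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

For instance, projecting `[3, 1]` onto the ball of radius 2 shrinks it to `[2, 0]`, while points already inside the ball are returned unchanged.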
|
{
"cite_N": [
"@cite_33",
"@cite_32",
"@cite_3",
"@cite_27",
"@cite_49"
],
"mid": [
"1996287810",
"2136660391",
"1973734200",
"2109449402",
"2083042020"
],
"abstract": [
"We propose simple and extremely efficient methods for solving the basis pursuit problem @math which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem @math for given matrix @math and vector @math . We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving @math and @math can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.",
"The basis pursuit technique is used to find a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise fits the least-squares problem only approximately, and a single parameter determines a curve that traces the trade-off between the least-squares fit and the one-norm of the solution. We show that the function that describes this curve is convex and continuously differentiable over all points of interest. The dual solution of a least-squares problem with an explicit one-norm constraint gives function and derivative information needed for a root-finding method. As a result, we can compute arbitrary points on this curve. Numerical experiments demonstrate that our method, which relies on only matrix-vector operations, scales well to large problems.",
"Nonmonotone projected gradient techniques are considered for the minimization of differentiable functions on closed convex sets. The classical projected gradient schemes are extended to include a nonmonotone steplength strategy that is based on the Grippo--Lampariello--Lucidi nonmonotone line search. In particular, the nonmonotone strategy is combined with the spectral gradient choice of steplength to accelerate the convergence process. In addition to the classical projected gradient nonlinear path, the feasible spectral projected gradient is used as a search direction to avoid additional trial projections during the one-dimensional search process. Convergence properties and extensive numerical results are presented.",
"Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ) error term combined with a sparseness-inducing regularization term. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution, and compressed sensing are a few well-known examples of this approach. This paper proposes gradient projection (GP) algorithms for the bound-constrained quadratic programming (BCQP) formulation of these problems. We test variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai-Borwein method. Computational experiments show that these GP approaches perform well in a wide range of applications, often being significantly faster (in terms of computation time) than competing methods. Although the performance of GP methods tends to degrade as the regularization term is de-emphasized, we show how they can be embedded in a continuation scheme to recover their efficient practical performance.",
"The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems."
]
}
|
1401.4220
|
2474669789
|
We present a proximal quasi-Newton method in which the approximation of the Hessian has the special format of “identity minus rank one” (IMRO) in each iteration. The proposed structure enables us to effectively recover the proximal point. The algorithm is applied to @math -regularized least squares problems arising in many applications including sparse recovery in compressive sensing, machine learning, and statistics. Our numerical experiment suggests that the proposed technique competes favorably with other state-of-the-art solvers for this class of problems. We also provide a complexity analysis for variants of IMRO, showing that it matches known best bounds.
|
Many other state-of-the-art algorithms in compressive sensing are inspired by the iterative shrinkage thresholding idea @cite_44 @cite_34 @cite_38 . ISTA (iterative shrinkage thresholding algorithm) is an extension of the steepest descent idea to composite functions using the thresholding operator. Recall that in the steepest descent method, the generated sequence has the general form x^{k+1} = x^k - \alpha \nabla f(x^k), which might be considered as the solution to the following quadratic approximation of @math : x^{k+1} = \arg\min_x \, f(x^k) + \langle x - x^k, \nabla f(x^k) \rangle + \frac{1}{2\alpha} \|x - x^k\|^2 .
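The claim that the gradient step minimizes this quadratic model can be checked numerically. The sketch below uses an illustrative least-squares objective; the matrix, vector, and step size are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad_f = lambda x: A.T @ (A @ x - b)

alpha = 0.1
xk = rng.standard_normal(3)
g = grad_f(xk)

# Quadratic model q(x) = f(xk) + <x - xk, g> + (1/(2*alpha)) ||x - xk||^2
q = lambda x: f(xk) + (x - xk) @ g + (x - xk) @ (x - xk) / (2 * alpha)

x_next = xk - alpha * g                # the gradient step
grad_q = g + (x_next - xk) / alpha     # gradient of q at x_next: g - g = 0
```

`grad_q` vanishes, so the gradient step is the unique minimizer of the strongly convex model q.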
|
{
"cite_N": [
"@cite_44",
"@cite_38",
"@cite_34"
],
"mid": [
"1967020502",
"2006262045",
""
],
"abstract": [
"The notion of soft thresholding plays a central role in problems from various areas of applied mathematics, in which the ideal solution is known to possess a sparse decomposition in some orthonormal basis. Using convex-analytical tools, we extend this notion to that of proximal thresholding and investigate its properties, providing, in particular, several characterizations of such thresholders. We then propose a versatile convex variational formulation for optimization over orthonormal bases that covers a wide range of problems, and we establish the strong convergence of a proximal thresholding algorithm to solve it. Numerical applications to signal recovery are demonstrated.",
"We show that various inverse problems in signal recovery can be formulated as the generic problem of minimizing the sum of two convex functions with certain regularity properties. This formulation makes it possible to derive existence, uniqueness, characterization, and stability results in a unified and standardized fashion for a large class of apparently disparate problems. Recent results on monotone operator splitting methods are applied to establish the convergence of a forward-backward algorithm to solve the generic problem. In turn, we recover, extend, and provide a simplified analysis for a variety of existing iterative methods. Applications to geometry texture image decomposition schemes are also discussed. A novelty of our framework is to use extensively the notion of a proximity operator, which was introduced by Moreau in the 1960s.",
""
]
}
|
1401.4220
|
2474669789
|
We present a proximal quasi-Newton method in which the approximation of the Hessian has the special format of “identity minus rank one” (IMRO) in each iteration. The proposed structure enables us to effectively recover the proximal point. The algorithm is applied to @math -regularized least squares problems arising in many applications including sparse recovery in compressive sensing, machine learning, and statistics. Our numerical experiment suggests that the proposed technique competes favorably with other state-of-the-art solvers for this class of problems. We also provide a complexity analysis for variants of IMRO, showing that it matches known best bounds.
|
Shuffling the linear and quadratic terms and ignoring the constants, the iteration can equivalently be written as x^{k+1} = \arg\min_x \, \frac{1}{2\alpha} \|x - (x^k - \alpha \nabla f(x^k))\|^2 + p(x) . Using the notion of the @math operator, we conclude that x^{k+1} = \mathrm{prox}_{\alpha p}(x^k - \alpha \nabla f(x^k)). This iterative scheme is the "generalized gradient method" or "proximal gradient method". Note that it actually coincides with the steepest descent method in the absence of @math . It is also sometimes called the "forward-backward splitting method" @cite_38 @cite_12 . It gets its name from the two separate stages of each iteration: the first stage takes a forward step @math involving only @math , and the second stage is a backward step @math which involves only @math . Finding the proximal point may not be a trivial task in general, but for solving @math it can be computed efficiently because the @math -norm is separable.
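Since the separable @math -norm prox is componentwise soft thresholding, the full forward-backward iteration for @math -regularized least squares fits in a few lines. The sketch below is generic (step size and iteration count are illustrative; convergence requires the step size to be at most the reciprocal of the Lipschitz constant of the gradient):

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t * ||.||_1 -- separable, hence O(n)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(A, b, lam, alpha, iters=500):
    """Forward-backward splitting for min 0.5||Ax - b||^2 + lam ||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        forward = x - alpha * A.T @ (A @ x - b)    # forward (gradient) step on f
        x = soft_threshold(forward, alpha * lam)   # backward (prox) step on p
    return x
```

As a sanity check, when A is the identity and alpha = 1 the iteration reaches the closed-form solution soft_threshold(b, lam) after one step.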
|
{
"cite_N": [
"@cite_38",
"@cite_12"
],
"mid": [
"2006262045",
"2951870246"
],
"abstract": [
"We show that various inverse problems in signal recovery can be formulated as the generic problem of minimizing the sum of two convex functions with certain regularity properties. This formulation makes it possible to derive existence, uniqueness, characterization, and stability results in a unified and standardized fashion for a large class of apparently disparate problems. Recent results on monotone operator splitting methods are applied to establish the convergence of a forward-backward algorithm to solve the generic problem. In turn, we recover, extend, and provide a simplified analysis for a variety of existing iterative methods. Applications to geometry texture image decomposition schemes are also discussed. A novelty of our framework is to use extensively the notion of a proximity operator, which was introduced by Moreau in the 1960s.",
"The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed."
]
}
|
1401.4220
|
2474669789
|
We present a proximal quasi-Newton method in which the approximation of the Hessian has the special format of “identity minus rank one” (IMRO) in each iteration. The proposed structure enables us to effectively recover the proximal point. The algorithm is applied to @math -regularized least squares problems arising in many applications including sparse recovery in compressive sensing, machine learning, and statistics. Our numerical experiment suggests that the proposed technique competes favorably with other state-of-the-art solvers for this class of problems. We also provide a complexity analysis for variants of IMRO, showing that it matches known best bounds.
|
The alternating direction method (ADM) is also a technique that can be applied to @math ; see @cite_17 @cite_10 and references therein. It is suited to minimizing the sum of (separable) convex functions, say @math , over a linear set of constraints. The augmented Lagrangian technique then solves for @math and @math alternately, fixing one variable while updating the other. The alternating linearization method (ALM) @cite_18 also applies to minimizing composite functions. In the quadratic approximation model, @math is linearized at every iteration; in ALM a similar model based on @math is also minimized at every iteration. Nesterov's accelerated technique has also been adopted, and the resulting algorithm is called FALM, for fast ALM.
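For the @math -regularized least-squares problem with the splitting x = z, the generic alternating-direction iteration consists of a ridge-type least-squares step, a soft-thresholding step, and a dual update. The sketch below follows that generic template rather than any specific cited solver; the penalty parameter and iteration count are illustrative.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Alternating direction sketch for min 0.5||Ax - b||^2 + lam||z||_1  s.t. x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Atb = A.T @ b
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))   # fine for small illustrative n
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))              # x-update: regularized least squares
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # z-update: soft threshold
        u = u + x - z                              # dual (multiplier) update
    return z
```

On a small problem with A equal to the identity, the iterates converge to the closed-form soft-thresholded solution.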
|
{
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_17"
],
"mid": [
"2950283609",
"",
"2163973643"
],
"abstract": [
"We present in this paper first-order alternating linearization algorithms based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most @math iterations to obtain an @math -optimal solution, while our accelerated (i.e., fast) versions of them require at most @math iterations, with little change in the computational effort required at each iteration. For both types of methods, we present one algorithm that requires both functions to be smooth with Lipschitz continuous gradients and one algorithm that needs only one of the functions to be so. Algorithms in this paper are Gauss-Seidel type methods, in contrast to the ones proposed by Goldfarb and Ma in [21] where the algorithms are Jacobi type methods. Numerical results are reported to support our theoretical conclusions and demonstrate the practical potential of our algorithms.",
"",
"Recent compressive sensing results show that it is possible to accurately reconstruct certain compressible signals from relatively few linear measurements via solving nonsmooth convex optimization problems. In this paper, we propose the use of the alternating direction method - a classic approach for optimization problems with separable variables (D. Gabay and B. Mercier, “A dual algorithm for the solution of nonlinear variational problems via finite-element approximations,” Computers and Mathematics with Applications, vol. 2, pp. 17-40, 1976; R. Glowinski and A. Marrocco, “Sur l'approximation par éléments finis d'ordre un, et la résolution par pénalisation-dualité d'une classe de problèmes de Dirichlet non linéaires,” Rev. Française d'Aut. Inf. Rech. Opér., vol. R-2, pp. 41-76, 1975) - for signal reconstruction from partial Fourier (i.e., incomplete frequency) measurements. Signals are reconstructed as minimizers of the sum of three terms corresponding to total variation, ℓ1-norm of a certain transform, and least squares data fitting. Our algorithm, called RecPF and published online, runs very fast (typically in a few seconds on a laptop) because it requires a small number of iterations, each involving simple shrinkages and two fast Fourier transforms (or alternatively discrete cosine transforms when measurements are in the corresponding domain). RecPF was compared with two state-of-the-art algorithms on recovering magnetic resonance images, and the results show that it is highly efficient, stable, and robust."
]
}
|
1401.4220
|
2474669789
|
We present a proximal quasi-Newton method in which the approximation of the Hessian has the special format of “identity minus rank one” (IMRO) in each iteration. The proposed structure enables us to effectively recover the proximal point. The algorithm is applied to @math -regularized least squares problems arising in many applications including sparse recovery in compressive sensing, machine learning, and statistics. Our numerical experiment suggests that the proposed technique competes favorably with other state-of-the-art solvers for this class of problems. We also provide a complexity analysis for variants of IMRO, showing that it matches known best bounds.
|
In order to incorporate more information about the function without trading off the efficiency of the algorithms, Newton/quasi-Newton proximal methods @cite_1 @cite_30 have attracted attention quite recently. Most previous extensions of quasi-Newton methods are suited either to nonsmooth problems @cite_42 @cite_0 , or to constrained problems with simple enough constraints @cite_7 @cite_6 @cite_21 .
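When the Hessian approximation in such a proximal quasi-Newton method is diagonal, the scaled proximal step stays separable. The sketch below illustrates only that simplest case (the diagonal choice is an assumption for illustration; richer curvature models such as the identity-minus-rank-one structure of this paper require a dedicated prox computation).

```python
import numpy as np

def scaled_prox_l1(y, lam, d):
    """prox of lam*||.||_1 in the metric ||v||_H^2 = sum_i d_i v_i^2, H = diag(d) > 0.

    argmin_x lam*||x||_1 + 0.5*(x - y)^T H (x - y) decouples per coordinate,
    giving soft thresholding with coordinate-wise thresholds lam / d_i.
    """
    return np.sign(y) * np.maximum(np.abs(y) - lam / d, 0.0)
```

Each coordinate solves a scalar problem min lam|x| + 0.5*d_i*(x - y_i)^2, whose closed form is the shifted soft threshold used above.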
|
{
"cite_N": [
"@cite_30",
"@cite_7",
"@cite_42",
"@cite_1",
"@cite_21",
"@cite_6",
"@cite_0"
],
"mid": [
"2008634296",
"2000359198",
"2151568819",
"2953167455",
"2028912194",
"2118163005",
"2157711174"
],
"abstract": [
"We generalize Newton-type methods for minimizing smooth functions to handle a sum of two convex functions: a smooth function and a nonsmooth function with a simple proximal mapping. We show that the resulting proximal Newton-type methods inherit the desirable convergence behavior of Newton-type methods for minimizing smooth functions, even when search directions are computed inexactly. Many popular methods tailored to problems arising in bioinformatics, signal processing, and statistical learning are special cases of proximal Newton-type methods, and our analysis yields new convergence results for some of these methods.",
"An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited memory BFGS matrix to approximate the Hessian of the objective function. It is shown how to take advantage of the form of the limited memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.",
"We investigate the behavior of quasi-Newton algorithms applied to minimize a nonsmooth function f, not necessarily convex. We introduce an inexact line search that generates a sequence of nested intervals containing a set of points of nonzero measure that satisfy the Armijo and Wolfe conditions if f is absolutely continuous along the line. Furthermore, the line search is guaranteed to terminate if f is semi-algebraic. It seems quite difficult to establish a convergence theorem for quasi-Newton methods applied to such general classes of functions, so we give a careful analysis of a special but illuminating case, the Euclidean norm, in one variable using the inexact line search and in two variables assuming that the line search is exact. In practice, we find that when f is locally Lipschitz and semi-algebraic with bounded sublevel sets, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) method with the inexact line search almost always generates sequences whose cluster points are Clarke stationary and with function values converging R-linearly to a Clarke stationary value. We give references documenting the successful use of BFGS in a variety of nonsmooth applications, particularly the design of low-order controllers for linear dynamical systems. We conclude with a challenging open question.",
"A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piece-wise linear nature of the dual problem. The second part of the paper applies the previous result to acceleration of convex minimization problems, and leads to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications including signal processing, sparse recovery and machine learning and classification.",
"Numerous scientific applications across a variety of fields depend on box-constrained convex optimization. Box-constrained problems therefore continue to attract research interest. We address box-constrained (strictly convex) problems by deriving two new quasi-Newton algorithms. Our algorithms are positioned between the projected-gradient [J. B. Rosen, J. SIAM, 8 (1960), pp. 181-217] and projected-Newton [D. P. Bertsekas, SIAM J. Control Optim., 20 (1982), pp. 221-246] methods. We also prove their convergence under a simple Armijo step-size rule. We provide experimental results for two particular box-constrained problems: nonnegative least squares (NNLS), and nonnegative Kullback-Leibler (NNKL) minimization. For both NNLS and NNKL our algorithms perform competitively as compared to well-established methods on medium-sized problems; for larger problems our approach frequently outperforms the competition.",
"An optimization algorithm for minimizing a smooth function over a convex set is described. Each iteration of the method computes a descent direction by minimizing, over the original constraints, a diagonal plus low-rank quadratic approximation to the function. The quadratic approximation is constructed using a limited-memory quasi-Newton update. The method is suitable for large-scale problems where evaluation of the function is substantially more expensive than projection onto the constraint set. Numerical experiments on one-norm regularized test problems indicate that the proposed method is competitive with state-of-the-art methods such as bound-constrained L-BFGS and orthant-wise descent. We further show that the method generalizes to a wide class of problems, and substantially improves on state-of-the-art methods for problems such as learning the structure of Gaussian graphical models and Markov random fields.",
"We extend the well-known BFGS quasi-Newton method and its memory-limited variant LBFGS to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We prove that under some technical conditions, the resulting subBFGS algorithm is globally convergent in objective function value. We apply its memory-limited variant (subLBFGS) to L2-regularized risk minimization with the binary hinge loss. To extend our algorithm to the multiclass and multilabel settings, we develop a new, efficient, exact line search algorithm. We prove its worst-case time complexity bounds, and show that our line search can also be used to extend a recently developed bundle method to the multiclass and multilabel settings. We also apply the direction-finding component of our algorithm to L1-regularized risk minimization with logistic loss. In all these contexts our methods perform comparable to or better than specialized state-of-the-art solvers on a number of publicly available data sets. An open source implementation of our algorithms is freely available."
]
}
|
1401.4209
|
2530335614
|
This paper studies the minimal controllability problem (MCP), i.e., the problem of, given a linear time invariant system, finding the sparsest input vector that ensures the system's controllability. We show that the MCP is NP-complete when the eigenvalues of the system dynamics matrix are simple. This is achieved by reducing the MCP to a set covering problem. In addition, the approximated solutions to the set covering problem lead to feasible (but sub-optimal) solutions to the MCP. Further, we analyze the relation of the MCP with its structural counterpart, the minimal structural controllability problem (MSCP), which is known to admit a polynomial complexity solution procedure. In fact, we conclude that the MCP is almost polynomial (P) in the sense that the set of instances for which we cannot show that the MCP is P (but NP-complete) has zero Lebesgue measure. Finally, we provide an illustrative example where the solution to the MCP is found using the main results and reductions developed in this paper and subsequently compared with the solution of the MSCP.
|
Alternatively, in @cite_23 , instead of determining the sparsest input matrix ensuring controllability, the aim is to determine the sparsest input matrix that ensures structural controllability, which we refer to as the minimal structural controllability problem (MSCP) -- see the formal definitions and problem statement. Briefly, the MSCP focuses on the structure of the dynamics, i.e., the location of zeros/nonzeros, and the obtained sparsest input matrix is such that, for almost all matrices satisfying the structure of the dynamics and the input matrix, the system is controllable @cite_13 . Finally, in the present paper we provide an example where the solution to the minimal structural controllability problem is not necessarily a solution to the minimal controllability problem when the dynamic matrix is simple; hence, the two problems are not equivalent.
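On small instances the MCP can be explored by brute force: enumerate candidate supports of an input vector and test controllability with the Kalman rank condition. The sketch below is exponential in the dimension, consistent with the NP-hardness result; fixing the nonzero entries to 1 is a simplifying assumption, since in general the entry values matter as well.

```python
import numpy as np
from itertools import combinations

def controllable(A, B):
    """Kalman rank test: rank [B, AB, ..., A^{n-1}B] == n."""
    n = A.shape[0]
    blocks, M = [B], B
    for _ in range(n - 1):
        M = A @ M
        blocks.append(M)
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

def sparsest_single_input(A):
    """Brute-force search for a sparsest input vector b making (A, b) controllable."""
    n = A.shape[0]
    for k in range(1, n + 1):                 # increasing support size
        for S in combinations(range(n), k):
            b = np.zeros((n, 1))
            b[list(S), 0] = 1.0
            if controllable(A, b):
                return b
    return None
```

For an integrator chain A = [[0, 1], [0, 0]] a single input entering the last state suffices, whereas a diagonal A with distinct eigenvalues needs an input touching every state.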
|
{
"cite_N": [
"@cite_13",
"@cite_23"
],
"mid": [
"1971604476",
"1930354283"
],
"abstract": [
"In this survey paper, we consider linear structured systems in state space form, where a linear system is structured when each entry of its matrices, like A,B,C and D, is either a fixed zero or a free parameter. The location of the fixed zeros in these matrices constitutes the structure of the system. Indeed a lot of man-made physical systems which admit a linear model are structured. A structured system is representative of a class of linear systems in the usual sense. It is of interest to investigate properties of structured systems which are true for almost any value of the free parameters, therefore also called generic properties. Interestingly, a lot of classical properties of linear systems can be studied in terms of genericity. Moreover, these generic properties can, in general, be checked by means of directed graphs that can be associated to a structured system in a natural way. We review here a number of results concerning generic properties of structured systems expressed in graph theoretic terms. By properties we mean here system-specific properties like controllability, the finite and infinite zero structure, and so on, as well as, solvability issues of certain classical control problems like disturbance rejection, input-output decoupling, and so on. In this paper, we do not try to be exhaustive but instead, by a selection of results, we would like to motivate the reader to appreciate what we consider as a wonderful modelling and analysis tool. We emphasize the fact that this modelling technique allows us to get a number of important results based on poor information on the system only. Moreover, the graph theoretic conditions are intuitive and are easy to check by hand for small systems and by means of well-known polynomially bounded combinatorial techniques for larger systems.",
"This paper addresses problems on the structural design of large-scale control systems. An efficient and unified framework is proposed to select the minimum number of manipulated measured variables to achieve structural controllability observability of the system, and to select the minimum number of feedback interconnections between measured and manipulated variables such that the closed-loop system has no structural fixed modes. Global solutions are computed using polynomial complexity algorithms in the number of the state variables of the system. Finally, graph-theoretic characterizations are proposed, which allow a characterization of all possible solutions."
]
}
|
1401.3768
|
2155718519
|
With the many benefits of cloud computing, an entity may want to outsource its data and their related analytics tasks to a cloud. When data are sensitive, it is in the interest of the entity to outsource encrypted data to the cloud; however, this limits the types of operations that can be performed on the cloud side. Especially, evaluating queries over the encrypted data stored on the cloud without the entity performing any computation and without ever decrypting the data become a very challenging problem. In this paper, we propose solutions to conduct range queries over outsourced encrypted data. The existing methods leak valuable information to the cloud which can violate the security guarantee of the underlying encryption schemes. In general, the main security primitive used to evaluate range queries is secure comparison (SC) of encrypted integers. However, we observe that the existing SC protocols are not very efficient. To this end, we first propose a novel SC scheme that takes encrypted integers and outputs encrypted comparison result. We empirically show its practical advantage over the current state-of-the-art. We then utilize the proposed SC scheme to construct two new secure range query protocols. Our protocols protect data confidentiality, privacy of user's query, and also preserve the semantic security of the encrypted data; therefore, they are more secure than the existing protocols. Furthermore, our second protocol is lightweight at the user end, and it can allow an authorized user to use any device with limited storage and computing capability to perform the range queries over outsourced encrypted data.
|
A different but closely related work to querying on encrypted data is keyword search on encrypted data''. The main goal of this problem is to retrieve the set of encrypted files stored on a remote server (such as the cloud) that match the user's input keywords. Along this direction, much work has been published based on searchable encryption schemes (e.g., @cite_53 @cite_14 @cite_25 @cite_1 ). However, these works mostly concentrate on protecting data confidentiality and they do not protect data access patterns. Though some recent works addressed the issue of protecting access patterns while searching for keywords @cite_45 @cite_26 , at this point, it is not clear how their work can be mapped to range queries which is an entirely different and complex problem than simple exact matching.
|
{
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_53",
"@cite_1",
"@cite_45",
"@cite_25"
],
"mid": [
"2114597779",
"2032127253",
"2108590723",
"1713648379",
"",
"2116717571"
],
"abstract": [
"Working in various service models ranging from SaaS, PaaS, to IaaS, cloud computing is viewed by many as anew revolution in IT, and could reshape the paradigm of how IT industry works today. Storage services are a fundamental function offered by cloud computing. A typical use scenario of the cloud storage services is that an enterprise outsources its database to the cloud and authorizes multiple users to access the database. In such cases of database outsourcing, data encryption is a good approach enabling the data owner to retain its control over the outsourced data. Searchable encryption is a cryptographic primitive allowing for private keyword based search over the encrypted database. The above setting of enterprise outsourcing database to the cloud requires multi-user searchable encryption, whereas virtually all existing schemes consider the single-user setting. To bridge this gap, we are motivated to propose a practical multi-user searchable encryption scheme, which has a number of advantages over the known approaches. The associated model and security requirements are also formulated.",
"In recent years, due to the appealing features of cloud computing, large amount of data have been stored in the cloud. Although cloud based services offer many advantages, privacy and security of the sensitive data is a big concern. To mitigate the concerns, it is desirable to outsource sensitive data in encrypted form. Encrypted storage protects the data against illegal access, but it complicates some basic, yet important functionality such as the search on the data. To achieve search over encrypted data without compromising the privacy, considerable amount of searchable encryption schemes have been proposed in the literature. However, almost all of them handle exact query matching but not similarity matching, a crucial requirement for real world applications. Although some sophisticated secure multi-party computation based cryptographic techniques are available for similarity tests, they are computationally intensive and do not scale for large data sources. In this paper, we propose an efficient scheme for similarity search over encrypted data. To do so, we utilize a state-of-the-art algorithm for fast near neighbor search in high dimensional spaces called locality sensitive hashing. To ensure the confidentiality of the sensitive data, we provide a rigorous security definition and prove the security of the proposed scheme under the provided definition. In addition, we provide a real world application of the proposed scheme and verify the theoretical results with empirical observations on a real dataset.",
"As Cloud Computing becomes prevalent, sensitive information are being increasingly centralized into the cloud. For the protection of data privacy, sensitive data has to be encrypted before outsourcing, which makes effective data utilization a very challenging task. Although traditional searchable encryption schemes allow users to securely search over encrypted data through keywords, these techniques support only boolean search, without capturing any relevance of data files. This approach suffers from two main drawbacks when directly applied in the context of Cloud Computing. On the one hand, users, who do not necessarily have pre-knowledge of the encrypted cloud data, have to post process every retrieved file in order to find ones most matching their interest, On the other hand, invariably retrieving all files containing the queried keyword further incurs unnecessary network traffic, which is absolutely undesirable in today's pay-as-you-use cloud paradigm. In this paper, for the first time we define and solve the problem of effective yet secure ranked keyword search over encrypted cloud data. Ranked search greatly enhances system usability by returning the matching files in a ranked order regarding to certain relevance criteria (e.g., keyword frequency), thus making one step closer towards practical deployment of privacy-preserving data hosting services in Cloud Computing. We first give a straightforward yet ideal construction of ranked keyword search under the state-of-the-art searchable symmetric encryption (SSE) security definition, and demonstrate its inefficiency. To achieve more practical performance, we then propose a definition for ranked searchable symmetric encryption, and give an efficient design by properly utilizing the existing cryptographic primitive, order-preserving symmetric encryption (OPSE). Thorough analysis shows that our proposed solution enjoys \"as-strong-as-possible\" security guarantee compared to previous SSE schemes, while correctly realizing the goal of ranked keyword search. Extensive experimental results demonstrate the efficiency of the proposed solution.",
"We present PRISM, a privacy-preserving scheme for word search in cloud computing. In the face of a curious cloud provider, the main challenge is to design a scheme that achieves privacy while preserving the efficiency of cloud computing. Solutions from related research, like encrypted keyword search or Private Information Retrieval (PIR), fall short of meeting real-world cloud requirements and are impractical. PRISM 's idea is to transform the problem of word search into a set of parallel instances of PIR on small datasets. Each PIR instance on a small dataset is efficiently solved by a node in the cloud during the \"Map\" phase of MapReduce. Outcomes of map computations are then aggregated during the \"Reduce\" phase. Due to the linearity of PRISM, the simple aggregation of map results yields the final output of the word search operation. We have implemented PRISM on Hadoop MapReduce and evaluated its efficiency using real-world DNS logs. PRISM's overhead over non-private search is only 11 . Thus, PRISM offers privacy-preserving search that meets cloud computing efficiency requirements. Moreover, PRISM is compatible with standard MapReduce, not requiring any change to the interface or infrastructure.",
"",
"With the advent of cloud computing, data owners are motivated to outsource their complex data management systems from local sites to the commercial public cloud for great flexibility and economic savings. But for protecting data privacy, sensitive data has to be encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext keyword search. Thus, enabling an encrypted cloud data search service is of paramount importance. Considering the large number of data users and documents in the cloud, it is necessary to allow multiple keywords in the search request and return documents in the order of their relevance to these keywords. Related works on searchable encryption focus on single keyword search or Boolean keyword search, and rarely sort the search results. In this paper, for the first time, we define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE).We establish a set of strict privacy requirements for such a secure cloud data utilization system. Among various multi-keyword semantics, we choose the efficient similarity measure of “coordinate matching”, i.e., as many matches as possible, to capture the relevance of data documents to the search query. We further use “inner product similarity” to quantitatively evaluate such similarity measure. We first propose a basic idea for the MRSE based on secure inner product computation, and then give two significantly improved MRSE schemes to achieve various stringent privacy requirements in two different threat models. Thorough analysis investigating privacy and efficiency guarantees of proposed schemes is given. Experiments on the real-world dataset further show proposed schemes indeed introduce low overhead on computation and communication."
]
}
|
1401.3768
|
2155718519
|
With the many benefits of cloud computing, an entity may want to outsource its data and their related analytics tasks to a cloud. When data are sensitive, it is in the interest of the entity to outsource encrypted data to the cloud; however, this limits the types of operations that can be performed on the cloud side. Especially, evaluating queries over the encrypted data stored on the cloud without the entity performing any computation and without ever decrypting the data become a very challenging problem. In this paper, we propose solutions to conduct range queries over outsourced encrypted data. The existing methods leak valuable information to the cloud which can violate the security guarantee of the underlying encryption schemes. In general, the main security primitive used to evaluate range queries is secure comparison (SC) of encrypted integers. However, we observe that the existing SC protocols are not very efficient. To this end, we first propose a novel SC scheme that takes encrypted integers and outputs encrypted comparison result. We empirically show its practical advantage over the current state-of-the-art. We then utilize the proposed SC scheme to construct two new secure range query protocols. Our protocols protect data confidentiality, privacy of user's query, and also preserve the semantic security of the encrypted data; therefore, they are more secure than the existing protocols. Furthermore, our second protocol is lightweight at the user end, and it can allow an authorized user to use any device with limited storage and computing capability to perform the range queries over outsourced encrypted data.
|
As an alternative, in the past few years, researchers have been focusing on searchable public key encryption schemes by leveraging cryptographic techniques. Along this direction, in particular for range queries, some earlier works @cite_33 @cite_15 were partly successful in addressing the PPRQ problem. However, as mentioned in @cite_20 , these methods are susceptible to the value-localization problem; therefore, they are not secure. In addition, they leak data access patterns to the server. Recently, @cite_20 developed a new multi-dimensional PPRQ protocol by securely generating index tags for the data using bucketization techniques. However, their method is susceptible to access pattern attacks (this issue was also acknowledged as a drawback in @cite_20 ) and to false positives in the returned set of records. More specifically, the final set of records has to be weeded out by the client to remove false positives, which incurs computational overhead on the client side. In addition, since the bucket labels are revealed to the server, we believe that their method may lead to unwanted information leakage.
|
{
"cite_N": [
"@cite_15",
"@cite_33",
"@cite_20"
],
"mid": [
"2154448764",
"1589843374",
"2003500686"
],
"abstract": [
"We design an encryption scheme called multi-dimensional range query over encrypted data (MRQED), to address the privacy concerns related to the sharing of network audit logs and various other applications. Our scheme allows a network gateway to encrypt summaries of network flows before submitting them to an untrusted repository. When network intrusions are suspected, an authority can release a key to an auditor, allowing the auditor to decrypt flows whose attributes (e.g., source and destination addresses, port numbers, etc.) fall within specific ranges. However, the privacy of all irrelevant flows are still preserved. We formally define the security for MRQED and prove the security of our construction under the decision bilinear Diffie-Hellman and decision linear assumptions in certain bilinear groups. We study the practical performance of our construction in the context of network audit logs. Apart from network audit logs, our scheme also has interesting applications for financial audit logs, medical privacy, untrusted remote storage, etc. In particular, we show that MRQED implies a solution to its dual problem, which enables investors to trade stocks through a broker in a privacy-preserving manner.",
"We construct public-key systems that support comparison queries (x ≥ a) on encrypted data as well as more general queries such as subset queries (x∈ S). Furthermore, these systems support arbitrary conjunctive queries (P1 ∧ ... ∧ Pl) without leaking information on individual conjuncts. We present a general framework for constructing and analyzing public-key systems supporting queries on encrypted data.",
"In this paper, we study the problem of supporting multidimensional range queries on encrypted data. The problem is motivated by secure data outsourcing applications where a client may store his her data on a remote server in encrypted form and want to execute queries using server's computational capabilities. The solution approach is to compute a secure indexing tag of the data by applying bucketization (a generic form of data partitioning) which prevents the server from learning exact values but still allows it to check if a record satisfies the query predicate. Queries are evaluated in an approximate manner where the returned set of records may contain some false positives. These records then need to be weeded out by the client which comprises the computational overhead of our scheme. We develop a bucketization procedure for answering multidimensional range queries on multidimensional data. For a given bucketization scheme, we derive cost and disclosure-risk metrics that estimate client's computational overhead and disclosure risk respectively. Given a multidimensional dataset, its bucketization is posed as an optimization problem where the goal is to minimize the risk of disclosure while keeping query cost (client's computational overhead) below a certain user-specified threshold value. We provide a tunable data bucketization algorithm that allows the data owner to control the trade-off between disclosure risk and cost. We also study the trade-off characteristics through an extensive set of experiments on real and synthetic data."
]
}
|
1401.3768
|
2155718519
|
With the many benefits of cloud computing, an entity may want to outsource its data and their related analytics tasks to a cloud. When data are sensitive, it is in the interest of the entity to outsource encrypted data to the cloud; however, this limits the types of operations that can be performed on the cloud side. Especially, evaluating queries over the encrypted data stored on the cloud without the entity performing any computation and without ever decrypting the data become a very challenging problem. In this paper, we propose solutions to conduct range queries over outsourced encrypted data. The existing methods leak valuable information to the cloud which can violate the security guarantee of the underlying encryption schemes. In general, the main security primitive used to evaluate range queries is secure comparison (SC) of encrypted integers. However, we observe that the existing SC protocols are not very efficient. To this end, we first propose a novel SC scheme that takes encrypted integers and outputs encrypted comparison result. We empirically show its practical advantage over the current state-of-the-art. We then utilize the proposed SC scheme to construct two new secure range query protocols. Our protocols protect data confidentiality, privacy of user's query, and also preserve the semantic security of the encrypted data; therefore, they are more secure than the existing protocols. Furthermore, our second protocol is lightweight at the user end, and it can allow an authorized user to use any device with limited storage and computing capability to perform the range queries over outsourced encrypted data.
|
@cite_29 proposed a new technique for protecting confidentiality as well as access patterns to the data in outsourced environments. Their technique is based on constructing shuffled index structures using B+-trees. In order to hide the access patterns, their method introduces fake searches in conjunction with the actual index value to be searched. We emphasize that their work solves a different problem, namely, how to securely outsource the index and then obliviously search over this data structure. Their technique has a straightforward application to keyword search over encrypted data since it deals with exact matching. However, at this point, it is not clear how their work can be extended to range queries, which require implicit comparison operations to be performed in a secure manner.
|
{
"cite_N": [
"@cite_29"
],
"mid": [
"2105751742"
],
"abstract": [
"As the use of external storage and data processing services for storing and managing sensitive data becomes more and more common, there is an increasing need for novel techniques that support not only data confidentiality, but also confidentiality of the accesses that users make on such data. In this paper, we propose a technique for guaranteeing content, access, and pattern confidentiality in the data outsourcing scenario. The proposed technique introduces a shuffle index structure, which adapts traditional B+-trees. We show that our solution exhibits a limited performance cost, thus resulting effectively usable in practice."
]
}
|
1401.3768
|
2155718519
|
With the many benefits of cloud computing, an entity may want to outsource its data and their related analytics tasks to a cloud. When data are sensitive, it is in the interest of the entity to outsource encrypted data to the cloud; however, this limits the types of operations that can be performed on the cloud side. Especially, evaluating queries over the encrypted data stored on the cloud without the entity performing any computation and without ever decrypting the data become a very challenging problem. In this paper, we propose solutions to conduct range queries over outsourced encrypted data. The existing methods leak valuable information to the cloud which can violate the security guarantee of the underlying encryption schemes. In general, the main security primitive used to evaluate range queries is secure comparison (SC) of encrypted integers. However, we observe that the existing SC protocols are not very efficient. To this end, we first propose a novel SC scheme that takes encrypted integers and outputs encrypted comparison result. We empirically show its practical advantage over the current state-of-the-art. We then utilize the proposed SC scheme to construct two new secure range query protocols. Our protocols protect data confidentiality, privacy of user's query, and also preserve the semantic security of the encrypted data; therefore, they are more secure than the existing protocols. Furthermore, our second protocol is lightweight at the user end, and it can allow an authorized user to use any device with limited storage and computing capability to perform the range queries over outsourced encrypted data.
|
We may ask whether we can use fully homomorphic cryptosystems (e.g., @cite_30 ), which can perform arbitrary computations over encrypted data without ever decrypting them. However, such techniques are very expensive, and their usage in practical applications has yet to be explored. For example, it was shown in @cite_13 that, even for weak security parameters, one "bootstrapping" operation of the homomorphic scheme would take at least 30 seconds on a high-performance machine.
|
{
"cite_N": [
"@cite_30",
"@cite_13"
],
"mid": [
"2031533839",
"17575016"
],
"abstract": [
"We propose a fully homomorphic encryption scheme -- i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we provide a general result -- that, to construct an encryption scheme that permits evaluation of arbitrary circuits, it suffices to construct an encryption scheme that can evaluate (slightly augmented versions of) its own decryption circuit; we call a scheme that can evaluate its (augmented) decryption circuit bootstrappable. Next, we describe a public key encryption scheme using ideal lattices that is almost bootstrappable. Lattice-based cryptosystems typically have decryption algorithms with low circuit complexity, often dominated by an inner product computation that is in NC1. Also, ideal lattices provide both additive and multiplicative homomorphisms (modulo a public-key ideal in a polynomial ring that is represented as a lattice), as needed to evaluate general circuits. Unfortunately, our initial scheme is not quite bootstrappable -- i.e., the depth that the scheme can correctly evaluate can be logarithmic in the lattice dimension, just like the depth of the decryption circuit, but the latter is greater than the former. In the final step, we show how to modify the scheme to reduce the depth of the decryption circuit, and thereby obtain a bootstrappable encryption scheme, without reducing the depth that the scheme can evaluate. Abstractly, we accomplish this by enabling the encrypter to start the decryption process, leaving less work for the decrypter, much like the server leaves less work for the decrypter in a server-aided cryptosystem.",
"We describe a working implementation of a variant of Gentry's fully homomorphic encryption scheme (STOC 2009), similar to the variant used in an earlier implementation effort by Smart and Vercauteren (PKC 2010). Smart and Vercauteren implemented the underlying \"somewhat homomorphic\" scheme, but were not able to implement the bootstrapping functionality that is needed to get the complete scheme to work. We show a number of optimizations that allow us to implement all aspects of the scheme, including the bootstrapping functionality. Our main optimization is a key-generation method for the underlying somewhat homomorphic encryption, that does not require full polynomial inversion. This reduces the asymptotic complexity from O(n2.5) to O(n1.5) when working with dimension-n lattices (and practically reducing the time from many hours days to a few seconds minutes). Other optimizations include a batching technique for encryption, a careful analysis of the degree of the decryption polynomial, and some space time trade-offs for the fully-homomorphic scheme. We tested our implementation with lattices of several dimensions, corresponding to several security levels. From a \"toy\" setting in dimension 512, to \"small,\" \"medium,\" and \"large\" settings in dimensions 2048, 8192, and 32768, respectively. The public-key size ranges in size from 70 Megabytes for the \"small\" setting to 2.3 Gigabytes for the \"large\" setting. The time to run one bootstrapping operation (on a 1-CPU 64- bit machine with large memory) ranges from 30 seconds for the \"small\" setting to 30 minutes for the \"large\" setting."
]
}
|
1401.3768
|
2155718519
|
With the many benefits of cloud computing, an entity may want to outsource its data and their related analytics tasks to a cloud. When data are sensitive, it is in the interest of the entity to outsource encrypted data to the cloud; however, this limits the types of operations that can be performed on the cloud side. Especially, evaluating queries over the encrypted data stored on the cloud without the entity performing any computation and without ever decrypting the data become a very challenging problem. In this paper, we propose solutions to conduct range queries over outsourced encrypted data. The existing methods leak valuable information to the cloud which can violate the security guarantee of the underlying encryption schemes. In general, the main security primitive used to evaluate range queries is secure comparison (SC) of encrypted integers. However, we observe that the existing SC protocols are not very efficient. To this end, we first propose a novel SC scheme that takes encrypted integers and outputs encrypted comparison result. We empirically show its practical advantage over the current state-of-the-art. We then utilize the proposed SC scheme to construct two new secure range query protocols. Our protocols protect data confidentiality, privacy of user's query, and also preserve the semantic security of the encrypted data; therefore, they are more secure than the existing protocols. Furthermore, our second protocol is lightweight at the user end, and it can allow an authorized user to use any device with limited storage and computing capability to perform the range queries over outsourced encrypted data.
|
As an independent work, @cite_3 developed a new prototype to execute SQL queries by leveraging server-hosted tamper-proof trusted hardware in critical query processing stages. However, their work still reveals data access patterns to the server. Recently, @cite_11 proposed a new PPRQ protocol by utilizing the secure comparison (SC) protocol in @cite_2 as the building block. Theirs is perhaps the most closely related work to the protocols proposed in this paper. However, the SC protocol in @cite_2 operates on encrypted bits rather than on encrypted integers; therefore, the overall throughput of their protocol is lower. In addition, their protocol leaks data access patterns to the cloud service provider.
|
{
"cite_N": [
"@cite_2",
"@cite_3",
"@cite_11"
],
"mid": [
"2104145890",
"2042012257",
"1998853617"
],
"abstract": [
"We consider the problem of securely evaluating the Greater Than (GT) predicate and its extension - transferring one of two secrets, depending on the result of comparison. We generalize our solutions and show how to securely decide membership in the union of a set of intervals. We then consider the related problem of comparing two encrypted numbers. We show how to efficiently apply our solutions to practical settings, such as auctions with the semi-honest auctioneer, proxy selling, etc. All of our protocols are one round. We propose new primitives, Strong Conditional Oblivious Transfer (SCOT) and Conditional En- crypted Mapping (CEM), which capture common security properties of one round protocols in a variety of settings, which may be of independent interest.",
"TrustedDB is an outsourced database prototype that allows clients to execute SQL queries with privacy and under regulatory compliance constraints without having to trust the service provider. TrustedDB achieves this by leveraging server-hosted tamper-proof trusted hardware in critical query processing stages. TrustedDB does not limit the query expressiveness of supported queries. And, despite the cost overhead and performance limitations of trusted hardware, the costs per query are orders of magnitude lower than any (existing or) potential future software-only mechanisms. TrustedDB is built and runs on actual hardware, and its performance and costs are evaluated here.",
"With the growing popularity of data and service outsourcing, where the data resides on remote servers in encrypted form, there remain open questions about what kind of query operations can be performed on the encrypted data. In this paper, we focus on one such important query operation, namely range query. One of the basic security primitive that can be used to evaluate range queries is secure comparison of encrypted integers. However, the existing secure comparison protocols strongly rely on the encrypted bit-wise representations rather than on pure encrypted integers. Therefore, in this paper, we first propose an efficient method for converting an encrypted integer z into encryptions of the individual bits of z. We then utilize the proposed security primitive to construct a new protocol for secure evaluation of range queries in the cloud computing environment. Furthermore, we empirically show the efficiency gains of using our security primitive over existing method under the range query application."
]
}
|
1401.3768
|
2155718519
|
With the many benefits of cloud computing, an entity may want to outsource its data and their related analytics tasks to a cloud. When data are sensitive, it is in the interest of the entity to outsource encrypted data to the cloud; however, this limits the types of operations that can be performed on the cloud side. Especially, evaluating queries over the encrypted data stored on the cloud without the entity performing any computation and without ever decrypting the data become a very challenging problem. In this paper, we propose solutions to conduct range queries over outsourced encrypted data. The existing methods leak valuable information to the cloud which can violate the security guarantee of the underlying encryption schemes. In general, the main security primitive used to evaluate range queries is secure comparison (SC) of encrypted integers. However, we observe that the existing SC protocols are not very efficient. To this end, we first propose a novel SC scheme that takes encrypted integers and outputs encrypted comparison result. We empirically show its practical advantage over the current state-of-the-art. We then utilize the proposed SC scheme to construct two new secure range query protocols. Our protocols protect data confidentiality, privacy of user's query, and also preserve the semantic security of the encrypted data; therefore, they are more secure than the existing protocols. Furthermore, our second protocol is lightweight at the user end, and it can allow an authorized user to use any device with limited storage and computing capability to perform the range queries over outsourced encrypted data.
|
In this paper, we do not consider the existing secure comparison protocols that are secure under the information-theoretic setting. This is because the existing secure comparison protocols under the information-theoretic setting are commonly based on linear secret sharing schemes, such as Shamir's @cite_10 , which require at least three parties. We emphasize that our problem setting is entirely different from those methods, since the data in our case are encrypted and our protocols require only two parties. Our protocols, which are based on additive homomorphic encryption schemes, are orthogonal to the secret sharing based SC schemes. Nevertheless, developing a PPRQ protocol that protects data access patterns using secret sharing based SC methods remains an open problem; therefore, we treat it as interesting future work.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2141420453"
],
"abstract": [
"In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D . This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces."
]
}
|
1401.3768
|
2155718519
|
With the many benefits of cloud computing, an entity may want to outsource its data and their related analytics tasks to a cloud. When data are sensitive, it is in the interest of the entity to outsource encrypted data to the cloud; however, this limits the types of operations that can be performed on the cloud side. Especially, evaluating queries over the encrypted data stored on the cloud without the entity performing any computation and without ever decrypting the data become a very challenging problem. In this paper, we propose solutions to conduct range queries over outsourced encrypted data. The existing methods leak valuable information to the cloud which can violate the security guarantee of the underlying encryption schemes. In general, the main security primitive used to evaluate range queries is secure comparison (SC) of encrypted integers. However, we observe that the existing SC protocols are not very efficient. To this end, we first propose a novel SC scheme that takes encrypted integers and outputs encrypted comparison result. We empirically show its practical advantage over the current state-of-the-art. We then utilize the proposed SC scheme to construct two new secure range query protocols. Our protocols protect data confidentiality, privacy of user's query, and also preserve the semantic security of the encrypted data; therefore, they are more secure than the existing protocols. Furthermore, our second protocol is lightweight at the user end, and it can allow an authorized user to use any device with limited storage and computing capability to perform the range queries over outsourced encrypted data.
|
On the other hand, there exist a large number of custom-designed SC protocols (e.g., @cite_54 @cite_23 @cite_2 ) that directly operate on encrypted inputs. Since the goal of this paper is not to investigate all the existing SC protocols, we simply refer to the most efficient known implementation of SC (here we consider methods based on the Paillier cryptosystem, for a fair comparison with our scheme), proposed in @cite_2 . We emphasize that the SC protocol given in @cite_2 requires the encryptions of the individual bits of @math and @math as input rather than @math . Though their protocol is more efficient than the above garbled-circuit-based SC method (i.e., ) for smaller input domain sizes, we show that our SC scheme outperforms both methods for all practical input domain sizes (see Section for details). Also, it is worth pointing out that the protocol in @cite_2 leaks the comparison result @math to at least one of the involved parties. However, by using the techniques in @cite_23 , we can easily modify (at the expense of extra cost) the protocol of @cite_2 to generate @math as the output without revealing @math to either party.
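The Paillier-based protocols above all rest on additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. A toy sketch with deliberately small, insecure primes (real protocols use moduli of 1024 bits or more; all parameter values here are illustrative):

```python
import math, random

def paillier_keygen(p=61, q=53):
    """Toy Paillier keypair from small, insecure primes (illustration only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    # mu is the modular inverse of L(g^lam mod n^2), where L(x) = (x-1)//n
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def enc(pk, m, rng=random.Random(2)):
    n, g = pk
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def dec(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n
```

Decrypting the product of two ciphertexts modulo n^2 returns the sum of the two plaintexts, which is the primitive the bitwise SC protocols build on.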
|
{
"cite_N": [
"@cite_54",
"@cite_23",
"@cite_2"
],
"mid": [
"1935182841",
"1819119697",
"2104145890"
],
"abstract": [
"Yao's classical millionaires' problem is about securely determining whether x > y, given two input values x, y, which are held as private inputs by two parties, respectively. The output x > y becomes known to both parties. In this paper, we consider a variant of Yao's problem in which the inputs x, y as well as the output bit x > y are encrypted. Referring to the framework of secure n-party computation based on threshold homomorphic cryptosystems as put forth by Cramer, Damgård, and Nielsen at Eurocrypt 2001, we develop solutions for integer comparison, which take as input two lists of encrypted bits representing x and y, respectively, and produce an encrypted bit indicating whether x > y as output. Secure integer comparison is an important building block for applications such as secure auctions. In this paper, our focus is on the two-party case, although most of our results extend to the multi-party case. We propose new logarithmic-round and constant-round protocols for this setting, which achieve simultaneously very low communication and computational complexities. We analyze the protocols in detail and show that our solutions compare favorably to other known solutions.",
"We propose a protocol for secure comparison of integers based on homomorphic encryption. We also propose a homomorphic encryption scheme that can be used in our protocol and makes it more efficient than previous solutions. Our protocol is well-suited for application in on-line auctions, both with respect to functionality and performance. It minimizes the amount of information bidders need to send, and for comparison of 16 bit numbers with security based on 1024 bit RSA (executed by two parties), our implementation takes 0.28 seconds including all computation and communication. Using precomputation, one can save a factor of roughly 10.",
"We consider the problem of securely evaluating the Greater Than (GT) predicate and its extension - transferring one of two secrets, depending on the result of comparison. We generalize our solutions and show how to securely decide membership in the union of a set of intervals. We then consider the related problem of comparing two encrypted numbers. We show how to efficiently apply our solutions to practical settings, such as auctions with the semi-honest auctioneer, proxy selling, etc. All of our protocols are one round. We propose new primitives, Strong Conditional Oblivious Transfer (SCOT) and Conditional En- crypted Mapping (CEM), which capture common security properties of one round protocols in a variety of settings, which may be of independent interest."
]
}
|
1401.3677
|
1994267277
|
The spatial structure of transmitters in wireless networks plays a key role in evaluating the mutual interference and hence the performance. Although the Poisson point process (PPP) has been widely used to model the spatial configuration of wireless networks, it is not suitable for networks with repulsion. The Ginibre point process (GPP) is one of the main examples of determinantal point processes that can be used to model random phenomena where repulsion is observed. Considering the accuracy, tractability and practicability tradeoffs, we introduce and promote the @math -GPP, an intermediate class between the PPP and the GPP, as a model for wireless networks when the nodes exhibit repulsion. To show that the model leads to analytically tractable results in several cases of interest, we derive the mean and variance of the interference using two different approaches: the Palm measure approach and the reduced second moment approach, and then provide approximations of the interference distribution by three known probability density functions. Besides, to show that the model is relevant for cellular systems, we derive the coverage probability of the typical user and also find that the fitted @math -GPP can closely model the deployment of actual base stations in terms of the coverage probability and other statistics.
|
Stochastic geometry models have been successfully applied to model and analyze wireless networks in the last two decades, since they not only capture the topological randomness in the network geometry but also lead to tractable analytical results @cite_15 . The PPP is by far the most popular point process used in the literature because of its tractability. Models based on the PPP have been used for a variety of networks, including cellular networks, mobile ad hoc networks, cognitive radio networks, and wireless sensor networks, and the performance of PPP-based networks is well characterized and well understood (see, e.g., @cite_7 @cite_15 @cite_13 @cite_10 and references therein). Although the PPP model provides many useful theoretical results, the independence of the node locations makes the PPP a dubious model for actual network deployments, where the wireless nodes appear spatially negatively correlated, i.e., where the nodes exhibit repulsion. Hence, point processes that account for spatial correlations, such as the Matérn hard-core process (MHCP) and the Strauss process, have been explored recently since they can better capture the spatial distribution of the network nodes in real deployments @cite_11 @cite_8 @cite_9 . However, their limited tractability impedes further applications in wireless networks and leaves many challenges to be addressed.
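The tractability of the PPP can be made concrete: Campbell's theorem gives the mean interference at the origin in closed form, and a short Monte Carlo simulation reproduces it. A sketch assuming unit transmit power, power-law path loss r^(-alpha), and an annular window [r0, R] to avoid the singularity at the origin (all parameter values are illustrative):

```python
import math, random

def poisson(mu, rng):
    # Knuth's inversion method for a Poisson variate
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def interference(lam, r0, R, alpha, rng):
    # one PPP realization on the annulus [r0, R]; interference at the origin
    n = poisson(lam * math.pi * (R * R - r0 * r0), rng)
    total = 0.0
    for _ in range(n):
        # radial density proportional to r gives uniform points in the annulus
        r = math.sqrt(r0 * r0 + rng.random() * (R * R - r0 * r0))
        total += r ** -alpha
    return total

rng = random.Random(0)
lam, r0, R, alpha = 1.0, 1.0, 10.0, 4.0
empirical = sum(interference(lam, r0, R, alpha, rng) for _ in range(200)) / 200
# Campbell's theorem: E[I] = 2*pi*lam * (r0^(2-alpha) - R^(2-alpha)) / (alpha - 2)
campbell = 2 * math.pi * lam * (r0 ** (2 - alpha) - R ** (2 - alpha)) / (alpha - 2)
```

For repulsive models such as the MHCP or the GPP, no equally simple closed form is available in general, which is the tractability gap the paper addresses.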
|
{
"cite_N": [
"@cite_13",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"2006166592",
"2127337238",
"2167353407",
"2156356530",
"2145873277",
"2039688938",
"631335369"
],
"abstract": [
"This volume bears on wireless network modeling and performance analysis. The aim is to show how stochastic geometry can be used in a more or less systematic way to analyze the phenomena that arise in this context. It first focuses on medium access control mechanisms used in ad hoc networks and in cellular networks. It then discusses the use of stochastic geometry for the quantitative analysis of routing algorithms in mobile ad hoc networks. The appendix also contains a concise summary of wireless communication principles and of the network architectures considered in the two volumes.",
"The performance of wireless networks depends critically on their spatial configuration, because received signal power and interference depend critically on the distances between numerous transmitters and receivers. This is particularly true in emerging network paradigms that may include femtocells, hotspots, relays, white space harvesters, and meshing approaches, which are often overlaid with traditional cellular networks. These heterogeneous approaches to providing high-capacity network access are characterized by randomly located nodes, irregularly deployed infrastructure, and uncertain spatial configurations due to factors like mobility and unplanned user-installed access points. This major shift is just beginning, and it requires new design approaches that are robust to spatial randomness, just as wireless links have long been designed to be robust to fading. The objective of this article is to illustrate the power of spatial models and analytical techniques in the design of wireless networks, and to provide an entry-level tutorial.",
"The spatial structure of base stations (BSs) in cellular networks plays a key role in evaluating the downlink performance. In this paper, different spatial stochastic models (the Poisson point process (PPP), the Poisson hard-core process (PHCP), the Strauss process (SP), and the perturbed triangular lattice) are used to model the structure by fitting them to the locations of BSs in real cellular networks obtained from a public database. We provide two general approaches for fitting. One is fitting by the method of maximum pseudolikelihood. As for the fitted models, it is not sufficient to distinguish them conclusively by some classical statistics. We propose the coverage probability as the criterion for the goodness-of-fit. In terms of coverage, the SP provides a better fit than the PPP and the PHCP. The other approach is fitting by the method of minimum contrast that minimizes the average squared error of the coverage probability. This way, fitted models are obtained whose coverage performance matches that of the given data set very accurately. Furthermore, we introduce a novel metric, the deployment gain, and we demonstrate how it can be used to estimate the coverage performance and average rate achieved by a data set.",
"Matern hard core processes of types I and II are the point processes of choice to model concurrent transmitters in CSMA networks. We determine the mean interference observed at a node of the process and compare it with the mean interference in a Poisson point process of the same density. It turns out that despite the similarity of the two models, they behave rather differently. For type I, the excess interference (relative to the Poisson case) increases exponentially in the hard-core distance, while for type II, the gap never exceeds 1 dB.",
"Wireless networks are fundamentally limited by the intensity of the received signals and by their interference. Since both of these quantities depend on the spatial location of the nodes, mathematical techniques have been developed in the last decade to provide communication-theoretic results accounting for the networks geometrical configuration. Often, the location of the nodes in the network can be modeled as random, following for example a Poisson point process. In this case, different techniques based on stochastic geometry and the theory of random geometric graphs -including point process theory, percolation theory, and probabilistic combinatorics-have led to results on the connectivity, the capacity, the outage probability, and other fundamental limits of wireless networks. This tutorial article surveys some of these techniques, discusses their application to model wireless networks, and presents some of the main results that have appeared in the literature. It also serves as an introduction to the field for the other papers in this special issue.",
"For more than three decades, stochastic geometry has been used to model large-scale ad hoc wireless networks, and it has succeeded to develop tractable models to characterize and better understand the performance of these networks. Recently, stochastic geometry models have been shown to provide tractable yet accurate performance bounds for multi-tier and cognitive cellular wireless networks. Given the need for interference characterization in multi-tier cellular networks, stochastic geometry models provide high potential to simplify their modeling and provide insights into their design. Hence, a new research area dealing with the modeling and analysis of multi-tier and cognitive cellular wireless networks is increasingly attracting the attention of the research community. In this article, we present a comprehensive survey on the literature related to stochastic geometry models for single-tier as well as multi-tier and cognitive cellular wireless networks. A taxonomy based on the target network model, the point process used, and the performance evaluation technique is also presented. To conclude, we discuss the open research challenges and future research directions.",
"Covering point process theory, random geometric graphs and coverage processes, this rigorous introduction to stochastic geometry will enable you to obtain powerful, general estimates and bounds of wireless network performance and make good design choices for future wireless architectures and protocols that efficiently manage interference effects. Practical engineering applications are integrated with mathematical theory, with an understanding of probability the only prerequisite. At the same time, stochastic geometry is connected to percolation theory and the theory of random geometric graphs and accompanied by a brief introduction to the R statistical computing language. Combining theory and hands-on analytical techniques with practical examples and exercises, this is a comprehensive guide to the spatial stochastic models essential for modelling and analysis of wireless network performance."
]
}
|
1401.3615
|
1766386015
|
We examine the Xeon Phi, which is based on Intel's Many Integrated Cores architecture, for its suitability to run the FDK algorithm--the most commonly used algorithm to perform the 3D image reconstruction in cone-beam computed tomography. We study the challenges of efficiently parallelizing the application and means to enable sensible data sharing between threads despite the lack of a shared last level cache. Apart from parallelization, SIMD vectorization is critical for good performance on the Xeon Phi; we perform various micro-benchmarks to investigate the platform's new set of vector instructions and put a special emphasis on the newly introduced vector gather capability. We refine a previous performance model for the application and adapt it for the Xeon Phi to validate the performance of our optimized hand-written assembly implementation, as well as the performance of several different auto-vectorization approaches.
|
Due to its medical relevance, reconstruction in computed tomography is a well-examined problem. As vendors of CT devices are constantly on the lookout for ways to speed up reconstruction time, many computer architectures have been evaluated over the years. Early products in this field used special-purpose hardware based on FPGA and DSP designs @cite_15 . The Cell Broadband Engine, which at the time of its release provided unrivaled memory bandwidth, was also subject to experimentation @cite_9 @cite_5 . It is noteworthy that CT reconstruction was among the first non-graphics applications that were run on graphics processors @cite_6 .
|
{
"cite_N": [
"@cite_9",
"@cite_15",
"@cite_6",
"@cite_5"
],
"mid": [
"2082344075",
"",
"2055844022",
"1972015875"
],
"abstract": [
"Tomographic image reconstruction, such as the reconstruction of computed tomography projection values, of tomosynthesis data, positron emission tomography or SPECT events, and of magnetic resonance imaging data is computationally very demanding. One of the most time-consuming steps is the backprojection. Recently, a novel general purpose architecture optimized for distributed computing became available: the cell broadband engine (CBE). To maximize image reconstruction speed we modified our parallel-beam backprojection algorithm [two dimensional (2D)] and our perspective backprojection algorithm [three dimensional (3D), cone beam for flat-panel detectors] and optimized the code for the CBE. The algorithms are pixel or voxel driven, run with floating point accuracy and use linear (LI) or nearest neighbor (NN) interpolation between detector elements. For the parallel-beam case, 512 projections per half rotation, 1024 detector channels, and an image of size 512^2 was used. The cone-beam backprojection performance was assessed by backprojecting a full circle scan of 512 projections of size 1024^2 into a volume of size 512^3 voxels. The field of view was chosen to completely lie within the field of measurement and the pixel or voxel size was set to correspond to the detector element size projected to the center of rotation divided by 2. Both the PC and the CBE were clocked at 3 GHz. For the parallel backprojection of 512 projections into a 512^2 image, a throughput of 11 fps (LI) and 15 fps (NN) was measured on the PC, whereas the CBE achieved 126 fps (LI) and 165 fps (NN), respectively. The cone-beam backprojection of 512 projections into the 512^3 volume took 3.2 min on the PC and only 13.6 s on the Cell. Thereby, the Cell greatly outperforms today's top-notch backprojections based on graphics processing units. Using both CBEs of our dual Cell-based blade (Mercury Computer Systems), one can 2D-backproject 330 images/s and complete the 3D cone-beam backprojection in 6.8 s.",
"",
"The graphics processing unit (GPU) has emerged as a competitive platform for computing massively parallel problems. Many computing applications in medical physics can be formulated as data-parallel tasks that exploit the capabilities of the GPU for reducing processing times. The authors review the basic principles of GPU computing as well as the main performance optimization techniques, and survey existing applications in three areas of medical physics, namely image reconstruction,dose calculation and treatment plan optimization, and image processing.",
"We present an evaluation of state-of-the-art computer hardware architectures for implementing the FDK method, which solves the 3-D image reconstruction task in cone-beam computed tomography (CT). The computational complexity of the FDK method prohibits its use for many clinical applications unless appropriate hardware acceleration is employed. Today's most powerful hardware architectures for high-performance computing applications are based on standard multi-core processors, off-the-shelf graphics boards, the Cell Broadband Engine Architecture (CBEA), or customized accelerator platforms (e.g., FPGA-based computer components). For each hardware platform under consideration, we describe a thoroughly optimized implementation of the most time-consuming parts of the FDK algorithm; the filtering step as well as the subsequent back-projection step. We further explain the required code transformations to parallelize the algorithm for the respective target architecture. We compare both the implementation complexity and the resulting performance of all architectures under consideration using the same two medical datasets which have been acquired using a standard C-arm device. Our optimized back-projection implementations achieve at least a speedup of 6.5 (CBEA, two processors), 22.0 (GPU, single board), and 35.8 (FPGA, 9 chips) compared to a standard workstation equipped with a quad-core processor."
]
}
|
1401.2818
|
2949812071
|
We present a statistical model for @math D human faces in varying expression, which decomposes the surface of the face using a wavelet transform, and learns many localized, decorrelated multilinear models on the resulting coefficients. Using this model we are able to reconstruct faces from noisy and occluded @math D face scans, and facial motion sequences. Accurate reconstruction of face shape is important for applications such as tele-presence and gaming. The localized and multi-scale nature of our model allows for recovery of fine-scale detail while retaining robustness to severe noise and occlusion, and is computationally efficient and scalable. We validate these properties experimentally on challenging data in the form of static scans and motion sequences. We show that in comparison to a global multilinear model, our model better preserves fine detail and is computationally faster, while in comparison to a localized PCA model, our model better handles variation in expression, is faster, and allows us to fix identity parameters for a given subject.
|
The works most closely related to ours are the part-based multilinear models recently proposed to model 3D human body shapes @cite_30 . To define the part-based model, a segmentation of the training shapes into meaningful parts is required; this is done manually by segmenting the human models into body parts, such as limbs. @cite_10 use a similar statistical model for human spines, which are manually segmented into individual vertebrae. In contrast, our method computes a suitable hierarchical decomposition automatically, thereby eliminating the need to manually generate a meaningful segmentation.
|
{
"cite_N": [
"@cite_30",
"@cite_10"
],
"mid": [
"2026861142",
"2131202735"
],
"abstract": [
"In this paper, we present a novel approach to model 3D human body with variations on both human shape and pose, by exploring a tensor decomposition technique. 3D human body modeling is important for 3D reconstruction and animation of realistic human body, which can be widely used in Tele-presence and video game applications. It is challenging due to a wide range of shape variations over different people and poses. The existing SCAPE model is popular in computer vision for modeling 3D human body. However, it considers shape and pose deformations separately, which is not accurate since pose deformation is person-dependent. Our tensor-based model addresses this issue by jointly modeling shape and pose deformations. Experimental results demonstrate that our tensor-based model outperforms the SCAPE model quite significantly. We also apply our model to capture human body using Microsoft Kinect sensors with excellent results.",
"Severe cases of spinal deformities such as scoliosis are usually treated by a surgery where instrumentation (hooks, screws and rods) is installed to the spine to correct deformities. Even if the purpose is to obtain a normal spine curve, the result is often straighter than normal. In this paper, we propose a fast statistical reconstruction algorithm based on a general model which can deal with such instrumented spines. To this end, we present the concept of multilevel statistical model where the data are decomposed into a within-group and a between-group component. The reconstruction procedure is formulated as a second-order cone program which can be solved very fast (few tenths of a second). Reconstruction errors were evaluated on real patient data and results showed that multilevel modeling allows better 3D reconstruction than classical models."
]
}
|
1401.2818
|
2949812071
|
We present a statistical model for @math D human faces in varying expression, which decomposes the surface of the face using a wavelet transform, and learns many localized, decorrelated multilinear models on the resulting coefficients. Using this model we are able to reconstruct faces from noisy and occluded @math D face scans, and facial motion sequences. Accurate reconstruction of face shape is important for applications such as tele-presence and gaming. The localized and multi-scale nature of our model allows for recovery of fine-scale detail while retaining robustness to severe noise and occlusion, and is computationally efficient and scalable. We validate these properties experimentally on challenging data in the form of static scans and motion sequences. We show that in comparison to a global multilinear model, our model better preserves fine detail and is computationally faster, while in comparison to a localized PCA model, our model better handles variation in expression, is faster, and allows us to fix identity parameters for a given subject.
|
Many statistical models have been used to analyze human faces. The first statistical model for the analysis of @math D faces was proposed by Blanz and Vetter @cite_17 . This model, called the morphable model, uses Principal Component Analysis (PCA) to analyze the shape and texture of registered faces, mainly in neutral expression. It has been applied to reconstruct @math D facial shapes from images @cite_17 and from @math D face scans @cite_29 @cite_9 . @cite_6 extend the morphable model to consider expressions by combining it with a PCA model for expression offsets with respect to the neutral expression geometry. An alternative way to incorporate expression changes is to use a multilinear model, which separates identity and expression variations. This model has been used to modify expressions in videos @cite_21 @cite_23 @cite_1 , and to register and analyze @math D motion sequences @cite_15 . Multilinear models are mathematically equivalent to TensorFaces @cite_24 applied to @math D data rather than images, and provide an effective way to capture both identity and expression variations; thus in Section we compare to a global multilinear model and show that our model better captures local geometric detail.
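The PCA at the core of the morphable model can be illustrated on synthetic data: training shapes are stacked as vectors, a low-rank basis is recovered by SVD of the mean-centered data, and each shape is reconstructed from a few coefficients. A minimal sketch with made-up dimensions (50 shapes of 30 coordinates and a rank-3 latent basis are assumptions for illustration; numpy is assumed available):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "face" data: 50 shapes, each flattened to 30 coordinates,
# drawn from a rank-3 latent basis plus small noise
latents = rng.normal(size=(50, 3))
basis = rng.normal(size=(3, 30))
shapes = latents @ basis + 0.01 * rng.normal(size=(50, 30))

# morphable-model-style PCA: mean shape plus a few principal directions
mean = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
k = 3
coeffs = (shapes - mean) @ Vt[:k].T   # per-shape low-dimensional codes
recon = mean + coeffs @ Vt[:k]        # reconstruction from k coefficients
err = np.linalg.norm(recon - shapes) / np.linalg.norm(shapes)
```

A multilinear model generalizes this by factoring the coefficients into separate identity and expression modes instead of a single PCA coordinate vector.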
|
{
"cite_N": [
"@cite_9",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_24",
"@cite_23",
"@cite_15",
"@cite_17"
],
"mid": [
"",
"2120064431",
"",
"",
"2148457221",
"",
"1995712965",
"2052243599",
"2237250383"
],
"abstract": [
"",
"This paper presents a top-down approach to 3D data analysis by fitting a morphable model to scans of faces. In a unified framework, the algorithm optimizes shape, texture, pose and illumination simultaneously. The algorithm can be used as a core component in face recognition from scans. In an analysis-by-synthesis approach, raw scans are transformed into a PCA-based representation that is robust with respect to changes in pose and illumination. Illumination conditions are estimated in an explicit simulation that involves specular and diffuse components. The algorithm inverts the effect of shading in order to obtain the diffuse reflectance in each point of the facial surface. Our results include illumination correction, surface completion and face recognition on the FRGC database of scans.",
"",
"",
"We describe an expression-invariant method for face recognition by fitting an identity expression separated 3D Morphable Model to shape data. The expression model greatly improves recognition and retrieval rates in the uncooperative setting, while achieving recognition rates on par with the best recognition algorithms in the face recognition great vendor test. The fitting is performed with a robust nonrigid ICP algorithm. It is able to perform face recognition in a fully automated scenario and on noisy data. The system was evaluated on two datasets, one with a high noise level and strong expressions, and the standard UND range scan database, showing that while expression invariance increases recognition and retrieval performance for the expression dataset, it does not decrease performance on the neutral dataset. The high recognition rates are achieved even with a purely shape based method, without taking image data into account.",
"",
"We present a method for replacing facial performances in video. Our approach accounts for differences in identity, visual appearance, speech, and timing between source and target videos. Unlike prior work, it does not require substantial manual operation or complex acquisition hardware, only single-camera video. We use a 3D multilinear model to track the facial performance in both videos. Using the corresponding 3D geometry, we warp the source to the target face and retime the source to match the target performance. We then compute an optimal seam through the video volume that maintains temporal consistency in the final composite. We showcase the use of our method on a variety of examples and present the result of a user study that suggests our results are difficult to distinguish from real video footage.",
"We perform statistical analysis of 3D facial shapes in motion over different subjects and different motion sequences. For this, we represent each motion sequence in a multilinear model space using one vector of coefficients for identity and one high-dimensional curve for the motion. We apply the resulting statistical model to two applications: to synthesize motion sequences, and to perform expression recognition. En route to building the model, we present a fully automatic approach to register 3D facial motion data, Based on a multilinear model, and show that the resulting registrations are of high quality.",
"In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness."
]
}
|
1401.2912
|
2950552949
|
The k-means++ seeding algorithm is one of the most popular algorithms used for finding the initial @math centers when running the k-means heuristic. The algorithm is a simple sampling procedure and can be described as follows: Pick the first center randomly from the given points. For @math , pick a point to be the @math center with probability proportional to the square of the Euclidean distance of this point to the closest of the @math previously chosen centers. The k-means++ seeding algorithm is not only simple and fast but also gives an @math approximation in expectation, as shown by Arthur and Vassilvitskii. There are datasets on which this seeding algorithm gives an approximation factor of @math in expectation. However, it is not clear from these results whether the algorithm achieves a good approximation factor with reasonably high probability (say @math ). Brunsch and Röglin gave a dataset where the k-means++ seeding algorithm achieves an @math approximation ratio with probability that is exponentially small in @math . However, this and all other known lower-bound examples are high-dimensional. So, an open problem was to understand the behavior of the algorithm on low-dimensional datasets. In this work, we give a simple two-dimensional dataset on which the seeding algorithm achieves an @math approximation ratio with probability exponentially small in @math . This solves open problems posed by and by Brunsch and Röglin.
|
Arthur and Vassilvitskii @cite_10 showed that the sampling algorithm gives an approximation guarantee of @math in expectation. They also give an example dataset on which this approximation guarantee is best possible. Ailon et al. @cite_12 and Aggarwal et al. @cite_2 showed that sampling more than @math centers in the manner described above gives a constant pseudo-approximation. Here pseudo-approximation means that the algorithm is allowed to output more than @math centers but the approximation factor is computed by comparing with the optimal solution with @math centers. Ackermann and Blömer @cite_1 showed that the results of Arthur and Vassilvitskii @cite_10 may be extended to a large class of other distance measures. Jaiswal et al. @cite_7 showed that the seeding algorithm may be appropriately modified to give a @math -approximation algorithm for the k-means problem. Jaiswal and Garg @cite_6 and Agarwal et al. @cite_3 showed that if the dataset satisfies certain separation conditions, then the seeding algorithm gives constant approximation with probability @math . Bahmani et al. @cite_5 showed that the seeding algorithm performs well even when fewer than @math sampling iterations are executed, provided that more than one center is chosen in a sampling iteration. We now discuss our main results.
|
{
"cite_N": [
"@cite_7",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_2",
"@cite_5",
"@cite_10",
"@cite_12"
],
"mid": [
"2154200889",
"1561129100",
"130123586",
"2204035083",
"1530581016",
"",
"2073459066",
"2156499390"
],
"abstract": [
"Given a set of points @math , the k-means clustering problem is to find a set of k centers @math , such that the objective function ∑_{x∈P} e(x,C)², where e(x,C) denotes the Euclidean distance between x and the closest center in C, is minimized. This is one of the most prominent objective functions that has been studied with respect to clustering. D²-sampling (Arthur and Vassilvitskii, Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA'07, pp. 1027---1035, SIAM, Philadelphia, 2007) is a simple non-uniform sampling technique for choosing points from a set of points. It works as follows: given a set of points @math , the first point is chosen uniformly at random from P. Subsequently, a point from P is chosen as the next sample with probability proportional to the square of the distance of this point to the nearest previously sampled point. D²-sampling has been shown to have nice properties with respect to the k-means clustering problem. Arthur and Vassilvitskii (Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA'07, pp. 1027---1035, SIAM, Philadelphia, 2007) show that k points chosen as centers from P using D²-sampling give an O(log k) approximation in expectation. (NIPS, pp. 10---18, 2009) and (Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pp. 15---28, Springer, Berlin, 2009) extended results of Arthur and Vassilvitskii (Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA'07, pp. 1027---1035, SIAM, Philadelphia, 2007) to show that O(k) points chosen as centers using D²-sampling give an O(1) approximation to the k-means objective function with high probability. In this paper, we further demonstrate the power of D²-sampling by giving a simple randomized (1+ε)-approximation algorithm that uses D²-sampling at its core.",
"The Bregman k-median problem is defined as follows. Given a Bregman divergence D_φ and a finite set @math of size n, our goal is to find a set C of size k such that the sum of errors cost(P,C) = ∑_{p∈P} min_{c∈C} D_φ(p,c) is minimized. The Bregman k-median problem plays an important role in many applications, e.g., information theory, statistics, text classification, and speech processing. We study a generalization of the k-means++ seeding of Arthur and Vassilvitskii (SODA '07). We prove for an almost arbitrary Bregman divergence that if the input set consists of k well-separated clusters, then with probability @math this seeding step alone finds an @math -approximate solution. Thereby, we generalize an earlier result of (FOCS '06) from the case of the Euclidean k-means problem to the Bregman k-median problem. Additionally, this result leads to a constant factor approximation algorithm for the Bregman k-median problem using at most @math arithmetic operations, including evaluations of the Bregman divergence D_φ.",
"k-means++ [5] seeding procedure is a simple sampling-based algorithm that is used to quickly find k centers which may then be used to start the Lloyd’s method. There has been some progress recently on understanding this sampling algorithm. [10] showed that if the data satisfies the separation condition that Δ_{k-1}(P)/Δ_k(P) ≥ c (Δ_i(P) is the optimal cost w.r.t. i centers, c > 1 is a constant, and P is the point set), then the sampling algorithm gives an O(1)-approximation for the k-means problem with probability that is exponentially small in k. Here, the distance measure is the squared Euclidean distance. Ackermann and Blömer [2] showed the same result when the distance measure is any μ-similar Bregman divergence. Arthur and Vassilvitskii [5] showed that the k-means++ seeding gives an O(log k) approximation in expectation for the k-means problem. They also give an instance where k-means++ seeding gives Ω(log k) approximation in expectation. However, it was unresolved whether the seeding procedure gives an O(1) approximation with probability Ω(1/poly(k)), even when the data satisfies the above-mentioned separation condition. Brunsch and Röglin [8] addressed this question and gave instances on which k-means++ achieves an approximation ratio of (2/3 − ε)·log k only with exponentially small probability. However, the instances that they give satisfy Δ_{k-1}(P)/Δ_k(P) = 1+o(1). In this work, we show that the sampling algorithm gives an O(1) approximation with probability Ω(1/k) for any k-means problem instance where the point set satisfies the separation condition Δ_{k-1}(P)/Δ_k(P) ≥ 1 + γ, for some fixed constant γ. Our results hold for any distance measure that is a metric in an approximate sense. For point sets that do not satisfy the above separation condition, we show O(1) approximation with probability Ω(2^{−2k}).",
"The Lloyd’s algorithm, also known as the k-means algorithm, is one of the most popular algorithms for solving the k-means clustering problem in practice. However, it does not give any performance guarantees. This means that there are datasets on which this algorithm can behave very badly. One reason for poor performance on certain datasets is bad initialization. The following simple sampling-based seeding algorithm tends to fix this problem: pick the first center randomly from among the given points and then for i ≥ 2, pick a point to be the i-th center with probability proportional to the squared distance of this point from the previously chosen centers. This algorithm is more popularly known as the k-means++ seeding algorithm and is known to exhibit some nice properties. These have been studied in a number of previous works [AV07, AJM09, ADK09, BR11]. The algorithm tends to perform well when the optimal clusters are separated in some sense. This is because the algorithm gives preference to further away points when picking centers. [ORSS06] discuss one such separation condition on the data. Jaiswal and Garg [JG12] show that if the dataset satisfies the separation condition of [ORSS06], then the sampling algorithm gives a constant approximation with probability Ω(1/k). Another separation condition that is strictly weaker than [ORSS06] is the approximation stability condition discussed by [BBG09]. In this work, we show that the sampling algorithm gives a constant approximation with probability Ω(1/k) if the dataset satisfies the separation condition of [BBG09] and the optimal clusters are not too small. We give a negative result for datasets that have small optimal clusters.",
"We show that adaptively sampled O (k ) centers give a constant factor bi-criteria approximation for the k -means problem, with a constant probability. Moreover, these O (k ) centers contain a subset of k centers which give a constant factor approximation, and can be found using LP-based techniques of Jain and Vazirani [JV01] and [CGTS02]. Both these algorithms run in effectively O (nkd ) time and extend the O (logk )-approximation achieved by the k -means++ algorithm of Arthur and Vassilvitskii [AV07].",
"",
"The k-means method is a widely used clustering technique that seeks to minimize the average squared distance between points in the same cluster. Although it offers no accuracy guarantees, its simplicity and speed are very appealing in practice. By augmenting k-means with a very simple, randomized seeding technique, we obtain an algorithm that is Θ(logk)-competitive with the optimal clustering. Preliminary experiments show that our augmentation improves both the speed and the accuracy of k-means, often quite dramatically.",
"We provide a clustering algorithm that approximately optimizes the k-means objective, in the one-pass streaming setting. We make no assumptions about the data, and our algorithm is very light-weight in terms of memory, and computation. This setting is applicable to unsupervised learning on massive data sets, or resource-constrained devices. The two main ingredients of our theoretical work are: a derivation of an extremely simple pseudo-approximation batch algorithm for k-means (based on the recent k-means++), in which the algorithm is allowed to output more than k centers, and a streaming clustering algorithm in which batch clustering algorithms are performed on small inputs (fitting in memory) and combined in a hierarchical manner. Empirical evaluations on real and simulated data reveal the practical utility of our method."
]
}
|
1401.2422
|
2951572681
|
In this paper, we study codes with locality that can recover from two erasures via a sequence of two local, parity-check computations. By a local parity-check computation, we mean recovery via a single parity-check equation associated to small Hamming weight. Earlier approaches considered recovery in parallel; the sequential approach allows us to potentially construct codes with improved minimum distance. These codes, which we refer to as locally 2-reconstructible codes, are a natural generalization, along one direction, of codes with all-symbol locality introduced by Gopalan , in which recovery from a single erasure is considered. By studying the Generalized Hamming Weights of the dual code, we derive upper bounds on the minimum distance of locally 2-reconstructible codes and provide constructions for a family of codes based on Turán graphs that are optimal with respect to this bound. The minimum distance bound derived here is universal in the sense that no code which permits all-symbol local recovery from @math erasures can have larger minimum distance regardless of approach adopted. Our approach also leads to a new bound on the minimum distance of codes with all-symbol locality for the single-erasure case.
|
Explicit constructions of optimal codes with all-symbol locality for the single erasure case are provided in @cite_2 , @cite_5 , respectively based on Gabidulin maximum rank-distance and Reed-Solomon codes. Families of codes with all-symbol locality with small alphabet size (low field size) are constructed in @cite_13 . Locality in the context of non-linear codes is considered in @cite_16 . Codes with local regeneration are considered in @cite_10 , @cite_7 , @cite_14 . Studies on the implementation and performance evaluation of codes with locality can be found in @cite_1 @cite_12 .
|
{
"cite_N": [
"@cite_13",
"@cite_14",
"@cite_7",
"@cite_1",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_12"
],
"mid": [
"1997044393",
"2951438316",
"2950599147",
"154253821",
"",
"2950878776",
"",
"2170681444",
""
],
"abstract": [
"A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most r ) other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter r is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over r points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. This property enables high availability of frequently accessed data (“hot data”).",
"Node failures are inevitable in distributed storage systems (DSS). To enable efficient repair when faced with such failures, two main techniques are known: Regenerating codes, i.e., codes that minimize the total repair bandwidth; and codes with locality, which minimize the number of nodes participating in the repair process. This paper focuses on regenerating codes with locality, using pre-coding based on Gabidulin codes, and presents constructions that utilize minimum bandwidth regenerating (MBR) local codes. The constructions achieve maximum resilience (i.e., optimal minimum distance) and have maximum capacity (i.e., maximum rate). Finally, the same pre-coding mechanism can be combined with a subclass of fractional-repetition codes to enable maximum resilience and repair-by-transfer simultaneously.",
"This paper presents a new explicit construction for locally repairable codes (LRCs) for distributed storage systems which possess all-symbols locality and maximal possible minimum distance, or equivalently, can tolerate the maximal number of node failures. This construction, based on maximum rank distance (MRD) Gabidulin codes, provides new optimal vector and scalar LRCs. In addition, the paper also discusses mechanisms by which codes obtained using this construction can be used to construct LRCs with efficient repair of failed nodes by combination of LRC with regenerating codes.",
"Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere, at any time, and only pay for what they use and store. To provide durability for that data and to keep the cost of storage low, WAS uses erasure coding. In this paper we introduce a new set of codes for erasure coding called Local Reconstruction Codes (LRC). LRC reduces the number of erasure coding fragments that need to be read when reconstructing data fragments that are offline, while still keeping the storage overhead low. The important benefits of LRC are that it reduces the bandwidth and I/Os required for repair reads over prior codes, while still allowing a significant reduction in storage overhead. We describe how LRC is used in WAS to provide low overhead durable storage with consistently low read latencies.",
"",
"Petabyte-scale distributed storage systems are currently transitioning to erasure codes to achieve higher storage efficiency. Classical codes like Reed-Solomon are highly sub-optimal for distributed environments due to their high overhead in single-failure events. Locally Repairable Codes (LRCs) form a new family of codes that are repair efficient. In particular, LRCs minimize the number of nodes participating in single node repairs during which they generate small network traffic. Two large-scale distributed storage systems have already implemented different types of LRCs: Windows Azure Storage and the Hadoop Distributed File System RAID used by Facebook. The fundamental bounds for LRCs, namely the best possible distance for a given code locality, were recently discovered, but few explicit constructions exist. In this work, we present explicit and optimal LRCs that are simple to construct. Our construction is based on grouping Reed-Solomon (RS) coded symbols to obtain RS coded symbols over a larger finite field. We then partition these RS symbols in small groups, and re-encode them using a simple local code that offers low repair locality. For the analysis of the optimality of the code, we derive a new result on the matroid represented by the code generator matrix.",
"",
"Regenerating codes and codes with locality are schemes recently proposed for a distributed storage network. While regenerating codes minimize the data downloaded for node repair, codes with locality minimize the number of nodes accessed during repair. In this paper, we provide some constructions of codes with locality, in which the local codes are regenerating codes, thereby combining the advantages of both classes of codes. The proposed constructions achieve an upper bound on minimum distance and are hence optimal. The constructions include both the cases when the local regenerating codes correspond to the MSR point as well as the MBR point on the storage repair-bandwidth tradeoff curve.",
""
]
}
|
1401.2422
|
2951572681
|
In this paper, we study codes with locality that can recover from two erasures via a sequence of two local, parity-check computations. By a local parity-check computation, we mean recovery via a single parity-check equation associated to small Hamming weight. Earlier approaches considered recovery in parallel; the sequential approach allows us to potentially construct codes with improved minimum distance. These codes, which we refer to as locally 2-reconstructible codes, are a natural generalization, along one direction, of codes with all-symbol locality introduced by Gopalan , in which recovery from a single erasure is considered. By studying the Generalized Hamming Weights of the dual code, we derive upper bounds on the minimum distance of locally 2-reconstructible codes and provide constructions for a family of codes based on Turán graphs that are optimal with respect to this bound. The minimum distance bound derived here is universal in the sense that no code which permits all-symbol local recovery from @math erasures can have larger minimum distance regardless of approach adopted. Our approach also leads to a new bound on the minimum distance of codes with all-symbol locality for the single-erasure case.
|
provides background on generalized Hamming weights (GHW). Our formulation and approach to the problem are outlined in . An important connection between the @math -cores of @cite_0 and GHW is made in . The upper bound on @math and optimal code constructions can be found in Sections and respectively. The final section presents the analogous @math bound for the single-erasure case. The proofs of most statements appear in the Appendix.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"1993830711"
],
"abstract": [
"Consider a linear [n,k,d]q code C. We say that the ith coordinate of C has locality r , if the value at this coordinate can be recovered from accessing some other r coordinates of C. Data storage applications require codes with small redundancy, low locality for information coordinates, large distance, and low locality for parity coordinates. In this paper, we carry out an in-depth study of the relations between these parameters. We establish a tight bound for the redundancy n-k in terms of the message length, the distance, and the locality of information coordinates. We refer to codes attaining the bound as optimal. We prove some structure theorems about optimal codes, which are particularly strong for small distances. This gives a fairly complete picture of the tradeoffs between codewords length, worst case distance, and locality of information symbols. We then consider the locality of parity check symbols and erasure correction beyond worst case distance for optimal codes. Using our structure theorem, we obtain a tight bound for the locality of parity symbols possible in such codes for a broad class of parameter settings. We prove that there is a tradeoff between having good locality and the ability to correct erasures beyond the minimum distance."
]
}
|
1401.2514
|
2084387309
|
We are given a set of sensors at given locations, a set of potential locations for placing base stations (BSs, or sinks), and another set of potential locations for placing wireless relay nodes. There is a cost for placing a BS and a cost for placing a relay. The problem we consider is to select a set of BS locations, a set of relay locations, and an association of sensor nodes with the selected BS locations, so that the number of hops in the path from each sensor to its BS is bounded by h_max, and among all such feasible networks, the cost of the selected network is the minimum. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-Hard, and is hard to even approximate within a constant factor. For this problem, we propose a polynomial time approximation algorithm (SmartSelect) based on a relay placement algorithm proposed in our earlier work, along with a modification of the greedy algorithm for weighted set cover. We have analyzed the worst case approximation guarantee for this algorithm. We have also proposed a polynomial time heuristic to improve upon the solution provided by SmartSelect. Our numerical results demonstrate that the algorithms provide good quality solutions using very little computation time in various randomly generated network scenarios.
|
Several variations of the optimal node placement problem have been studied in the literature. @cite_8 @cite_9 @cite_19 studied variations of the problem where a minimum number of relays have to be placed in a two-dimensional region (they can be placed anywhere; no locations are given) to obtain a tree spanning a given set of vertices (sources and BS). No QoS constraint was imposed in their formulations. They showed the problems to be NP-Hard, and proposed approximation algorithms. @cite_7 studied the problem of optimal relay placement for @math -connectivity. They proposed an @math -approximation algorithm for the problem with any @math . However, they also did not impose any QoS constraint.
|
{
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_7",
"@cite_8"
],
"mid": [
"2164743106",
"2109785846",
"2115907477",
"2069002708"
],
"abstract": [
"A wireless sensor network consists of many low-cost, low-power sensor nodes, which can perform sensing, simple computation, and transmission of sensed information. Long distance transmission by sensor nodes is not energy efficient since energy consumption is a superlinear function of the transmission distance. One approach to prolonging network lifetime while preserving network connectivity is to deploy a small number of costly, but more powerful, relay nodes whose main task is communication with other sensor or relay nodes. In this paper, we assume that sensor nodes have communication range r>0, while relay nodes have communication range R ≥ r, and we study two versions of relay node placement problems. In the first version, we want to deploy the minimum number of relay nodes so that, between each pair of sensor nodes, there is a connecting path consisting of relay and/or sensor nodes. In the second version, we want to deploy the minimum number of relay nodes so that, between each pair of sensor nodes, there is a connecting path consisting solely of relay nodes. We present a polynomial time 7-approximation algorithm for the first problem and a polynomial time (5+ε)-approximation algorithm for the second problem, where ε>0 can be any given constant",
"This paper addresses the following relay sensor placement problem: given the set of duty sensors in the plane and the upper bound of the transmission range, compute the minimum number of relay sensors such that the induced topology by all sensors is globally connected. This problem is motivated by practically considering the tradeoff among performance, lifetime, and cost when designing sensor networks. In our study, this problem is modelled by a NP-hard network optimization problem named Steiner Minimum Tree with Minimum number of Steiner Points and bounded edge length (SMT-MSP). In this paper, we propose two approximate algorithms, and conduct detailed performance analysis. The first algorithm has a performance ratio of 3 and the second has a performance ratio of 2.5.",
"We consider the problem of deploying or repairing a sensor network to guarantee a specified level of multi-path connectivity (k-connectivity) between all nodes. Such a guarantee simultaneously provides fault tolerance against node failures and high capacity through multi-path routing. We design and analyze the first algorithms that place an almost-minimum number of additional sensors to augment an existing network into a k-connected network, for any desired parameter k. Our algorithms have provable guarantees on the quality of the solution. Specifically, we prove that the number of additional sensors is within a constant factor of the absolute minimum, for any fixed k. We have implemented greedy and distributed versions of this algorithm, and demonstrate in simulation that they produce high-quality placements for the additional sensors. We are also in the process of using our algorithms to deploy nodes in a physical sensor network using a mobile robot.",
"Abstract In this paper, we study the Steiner tree problem with minimum number of Steiner points and bounded edge-length (STPMSPBEL), which asks for a tree interconnecting a given set of n terminal points and a minimum number of Steiner points such that the Euclidean length of each edge is no more than a given positive constant. This problem has applications in VLSI design, WDM optimal networks and wireless communications. We prove that this problem is NP-complete and present a polynomial time approximation algorithm whose worst-case performance ratio is 5."
]
}
|
1401.2086
|
2401710793
|
The field of stochastic games has been actively pursued over the last seven decades because of several of its important applications in oligopolistic economics. In the past, zero-sum stochastic games have been modelled and solved for Nash equilibria using the standard techniques of Markov decision processes. General-sum stochastic games, on the contrary, have posed difficulty as they cannot be reduced to Markov decision processes. Over the past few decades the quest for algorithms to compute Nash equilibria in general-sum stochastic games has intensified and several important algorithms such as stochastic tracing procedure [Herings and Peeters, 2004], NashQ [Hu and Wellman, 2003], FFQ [Littman, 2001], etc., and their generalised representations such as the optimization problem formulations for various reward structures [Filar and Vrieze, 2004] have been proposed. However, they either lack generality or are intractable for even medium-sized problems, or both. In this paper, we propose three algorithms, OFF-SGSP, ON-SGSP and DON-SGSP, respectively, which we show provide Nash equilibrium strategies for general-sum discounted stochastic games. Here OFF-SGSP is an off-line algorithm while ON-SGSP and DON-SGSP are online algorithms. In particular, we believe that DON-SGSP is the first decentralized on-line algorithm. We show that both our on-line algorithms are computationally efficient.
|
@cite_13 solve stochastic games by formulating intermediate optimization problems, called Multi-Objective Linear Programs (MOLPs). However, the solution concept there is that of correlated equilibria, and Nash points are a strict subset of this class (and hence are harder to find). Also, the complexity of their algorithm scales exponentially with the problem size.
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"2135017242"
],
"abstract": [
"Solving multi-agent reinforcement learning problems has proven difficult because of the lack of tractable algorithms. We provide the first approximation algorithm which solves stochastic games with cheap-talk to within ∊ absolute error of the optimal game-theoretic solution, in time polynomial in 1/∊. Our algorithm extends Murray's and Gordon's (2007) modified Bellman equation which determines the set of all possible achievable utilities; this provides us a truly general framework for multi-agent learning. Further, we empirically validate our algorithm and find the computational cost to be orders of magnitude less than what the theory predicts."
]
}
|
1401.2086
|
2401710793
|
The field of stochastic games has been actively pursued over the last seven decades because of several of its important applications in oligopolistic economics. In the past, zero-sum stochastic games have been modelled and solved for Nash equilibria using the standard techniques of Markov decision processes. General-sum stochastic games, on the contrary, have posed difficulty as they cannot be reduced to Markov decision processes. Over the past few decades the quest for algorithms to compute Nash equilibria in general-sum stochastic games has intensified and several important algorithms such as stochastic tracing procedure [Herings and Peeters, 2004], NashQ [Hu and Wellman, 2003], FFQ [Littman, 2001], etc., and their generalised representations such as the optimization problem formulations for various reward structures [Filar and Vrieze, 2004] have been proposed. However, they either lack generality or are intractable for even medium-sized problems, or both. In this paper, we propose three algorithms, OFF-SGSP, ON-SGSP and DON-SGSP, respectively, which we show provide Nash equilibrium strategies for general-sum discounted stochastic games. Here OFF-SGSP is an off-line algorithm while ON-SGSP and DON-SGSP are online algorithms. In particular, we believe that DON-SGSP is the first decentralized on-line algorithm. We show that both our on-line algorithms are computationally efficient.
|
Both the homotopy and linear programming methods proposed by @cite_13 and @cite_6 are tractable only for small-sized problems. The computational complexity of these algorithms may render them infeasible on games with large state spaces. In contrast, ON-SGSP is a model-free algorithm with a per-iteration complexity that is linear in @math , allowing for practical implementations in large-state game settings (see Section for one such example with a state space cardinality of @math ). We however mention that per-iteration complexity alone is not sufficient to quantify the performance of an algorithm - see Remark .
|
{
"cite_N": [
"@cite_13",
"@cite_6"
],
"mid": [
"2135017242",
"1608527539"
],
"abstract": [
"Solving multi-agent reinforcement learning problems has proven difficult because of the lack of tractable algorithms. We provide the first approximation algorithm which solves stochastic games with cheap-talk to within ∊ absolute error of the optimal game-theoretic solution, in time polynomial in 1/∊. Our algorithm extends Murray's and Gordon's (2007) modified Bellman equation which determines the set of all possible achievable utilities; this provides us a truly general framework for multi-agent learning. Further, we empirically validate our algorithm and find the computational cost to be orders of magnitude less than what the theory predicts.",
"This paper is the first to introduce an algorithm to compute stationary equilibria in stochastic games, and shows convergence of the algorithm for almost all such games. Moreover, since in general the number of stationary equilibria is overwhelming, we pay attention to the issue of equilibrium selection. We do this by extending the linear tracing procedure to the class of stochastic games, called the stochastic tracing procedure. From a computational point of view, the class of stochastic games possesses substantial difficulties compared to normal form games. Apart from technical difficulties, there are also conceptual difficulties, for instance the question how to extend the linear tracing procedure to the environment of stochastic games. We prove that there is a generic subclass of the class of stochastic games for which the stochastic tracing procedure is a compact one-dimensional piecewise differentiable manifold with boundary. Furthermore, we prove that the stochastic tracing procedure generates a unique path leading from any exogenously specified prior belief, to a stationary equilibrium. A well-chosen transformation of variables is used to formulate an everywhere differentiable homotopy function, whose zeros describe the (unique) path generated by the stochastic tracing procedure. Because of differentiability we are able to follow this path using standard path-following techniques. This yields a globally convergent algorithm that is easily and robustly implemented on a computer using existing software routines. As a by-product of our results, we extend a recent result on the generic finiteness of stationary equilibria in stochastic games to oddness of equilibria."
]
}
|
1401.2086
|
2401710793
|
The field of stochastic games has been actively pursued over the last seven decades because of several of its important applications in oligopolistic economics. In the past, zero-sum stochastic games have been modelled and solved for Nash equilibria using the standard techniques of Markov decision processes. General-sum stochastic games on the contrary have posed difficulty as they cannot be reduced to Markov decision processes. Over the past few decades the quest for algorithms to compute Nash equilibria in general-sum stochastic games has intensified and several important algorithms such as stochastic tracing procedure [Herings and Peeters, 2004], NashQ [Hu and Wellman, 2003], FFQ [Littman, 2001], etc., and their generalised representations such as the optimization problem formulations for various reward structures [Filar and Vrieze, 2004] have been proposed. However, they suffer from either lack of generality or are intractable for even medium sized problems or both. In this paper, we propose three algorithms, OFF-SGSP, ON-SGSP and DON-SGSP, respectively, which we show provide Nash equilibrium strategies for general-sum discounted stochastic games. Here OFF-SGSP is an off-line algorithm while ON-SGSP and DON-SGSP are online algorithms. In particular, we believe that DON-SGSP is the first decentralized on-line algorithm. We show that both our on-line algorithms are computationally efficient.
|
A popular algorithm with guaranteed convergence to Nash equilibria in general-sum stochastic games is rational learning, proposed by @cite_20 . In their algorithm, each agent @math maintains a prior on what he believes to be the other agents' strategies and updates it in a Bayesian manner. Combining this with certain assumptions of absolute continuity and grain of truth, the algorithm there is shown to converge to NE. ON-SGSP operates in a similar setting to that of @cite_20 , except that we do not assume knowledge of the reward functions. ON-SGSP is a model-free online algorithm and, unlike @cite_20 , any agent's strategy in ON-SGSP does not depend upon Bayesian estimates of other agents' strategies; hence, their absolute continuity and grain-of-truth assumptions do not apply.
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"2097540415"
],
"abstract": [
"Two players are about to play a discounted infinitely repeated bimatrix game. Each player knows his own payoff matrix and chooses a strategy which is a best response to some private beliefs over strategies chosen by his opponent. If both players' beliefs contain a grain of truth (each assigns some positive probability to the strategy chosen by the opponent), then they will eventually (a) accurately predict the future play of the game and (b) play a Nash equilibrium of the repeated game. An immediate corollary is that in playing a Harsanyi-Nash equilibrium of a discounted repeated game of incomplete information about opponents' payoffs, the players will eventually play an equilibrium of the real game as if they had complete information."
]
}
|
1401.1752
|
2131777524
|
Many computer programs have graphical user interfaces (GUIs), which need good layout to make efficient use of the available screen real estate. Most GUIs do not have a fixed layout, but are resizable and able to adapt themselves. Constraints are a powerful tool for specifying adaptable GUI layouts: they are used to specify a layout in a general form, and a constraint solver is used to find a satisfying concrete layout, e.g. for a specific GUI size. The constraint solver has to calculate a new layout every time a GUI is resized or changed, so it needs to be efficient to ensure a good user experience. One approach for constraint solvers is based on the Gauss-Seidel algorithm and successive over-relaxation (SOR). Our observation is that a solution after resizing or changing is similar in structure to a previous solution. Thus, our hypothesis is that we can increase the computational performance of an SOR-based constraint solver if we reuse the solution of a previous layout to warm-start the solving of a new layout. In this paper we report on experiments to test this hypothesis experimentally for three common use cases: big-step resizing, small-step resizing and constraint change. In our experiments, we measured the solving time for randomly generated GUI layout specifications of various sizes. For all three cases we found that the performance is improved if an existing solution is used as a starting solution for a new layout.
|
The overall problem, solving linear systems for constraint-based GUIs, is related to solution procedures for over-determined linear systems in general and constraint-based GUIs in particular. Several direct and iterative methods exist, which can solve over-determined systems in a least-squares sense @cite_21 . Examples are QR-factorization @cite_21 , the simplex algorithm @cite_20 , the conjugate gradient method @cite_12 and the GMRES-method @cite_12 . They are the basis for solvers specifically designed to solve problems of constraint-based GUIs. Some are based on direct methods, for example HiRise and HiRise2 @cite_14 , but the vast majority of existing solvers is based on convex optimization approaches and uses slack variables and an objective function @cite_15 @cite_9 @cite_3 . These methods can handle simultaneous constraints, i.e. constraints that depend on each other. In that respect they are superior to local propagation algorithms, such as DeltaBlue @cite_7 and SkyBlue @cite_18 , which cannot do so.
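The warm-starting idea the paper tests can be sketched in a few lines. The following is a minimal illustration on a generic diagonally dominant linear system; the matrix, problem size, relaxation parameter, and stopping rule are arbitrary choices for demonstration, not the paper's benchmark setup:

```python
import numpy as np

def sor_solve(A, b, x0, omega=1.0, tol=1e-8, max_iter=10_000):
    """Successive over-relaxation starting from an initial guess x0.
    With omega = 1 this reduces to Gauss-Seidel.
    Returns the solution and the number of sweeps performed."""
    x = x0.astype(float).copy()
    n = len(b)
    for sweep in range(1, max_iter + 1):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x, sweep

# A small diagonally dominant system standing in for a layout's constraints.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)  # dominance ensures convergence
b_old = rng.standard_normal(n)                   # "previous" layout
b_new = b_old + 0.01 * rng.standard_normal(n)    # slightly resized layout

x_old, _ = sor_solve(A, b_old, np.zeros(n))
x_cold, cold_sweeps = sor_solve(A, b_new, np.zeros(n))  # cold start from zero
x_warm, warm_sweeps = sor_solve(A, b_new, x_old)        # warm start from old solution
print(cold_sweeps, warm_sweeps)
```

Because a small resize leaves the old solution close to the new one, the warm start reaches the tolerance in no more sweeps than the cold start, which is precisely the hypothesis the paper evaluates experimentally.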
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_15",
"@cite_20",
"@cite_12"
],
"mid": [
"1992337747",
"1993924106",
"2053637323",
"2016321606",
"",
"",
"2073536284",
"2127470768",
""
],
"abstract": [
"Many user interface toolkits use constraint solvers to maintain geometric relationships between graphic objects, or to connect the graphics to the application data structures. One efficient and flexible technique for maintaining constraints is multi-way local propagation, where constraints are represented by sets of method procedures. To satisfy a set of constraints, a local propagation solver executes one method from each constraint. SkyBlue is an incremental constraint solver that uses local propagation to maintain a set of constraints as individual constraints are added and removed. If all of the constraints cannot be satisfied, SkyBlue leaves weaker constraints unsatisfied in order to satisfy stronger constraints (maintaining a constraint hierarchy). SkyBlue is a more general successor to the DeltaBlue algorithm that satisfies cycles of methods by calling external cycle solvers and supports multi-output methods. These features make SkyBlue more useful for constructing user interfaces, since cycles of constraints can occur frequently in user interface applications and multi-output methods are necessary to represent some useful constraints. This paper discusses some of the applications that use SkyBlue, presents times for some user interface benchmarks and describes the SkyBlue algorithm in detail.",
"We propose a scalable algorithm called HiRise2 for incrementally solving soft linear constraints over real domains. It is based on a framework for soft constraints, known as constraint hierarchies, to allow effective modeling of user interface applications by using hierarchical preferences for constraints. HiRise2 introduces LU decompositions to improve the scalability of an incremental simplex method. Using this algorithm, we implemented a constraint solver. We also show the results of experiments on the performance of the solver.",
"An incremental constraint solver, the DeltaBlue algorithm maintains an evolving solution to the constraint hierarchy as constraints are added and removed. DeltaBlue minimizes the cost of finding a new solution after each change by exploiting its knowledge of the last solution.",
"Linear equality and inequality constraints arise naturally in specifying many aspects of user interfaces, such as requiring that one window be to the left of another, requiring that a pane occupy the leftmost 1/3 of a window, or preferring that an object be contained within a rectangle if possible. Current constraint solvers designed for UI applications cannot efficiently handle simultaneous linear equations and inequalities. This is a major limitation. We describe incremental algorithms based on the dual simplex and active set methods that can solve such systems of constraints efficiently.",
"",
"",
"Linear equality and inequality constraints arise naturally in specifying many aspects of user interfaces, such as requiring that one window be to the left of another, requiring that a pane occupy the leftmost third of a window, or preferring that an object be contained within a rectangle if possible. Previous constraint solvers designed for user interface applications cannot handle simultaneous linear equations and inequalities efficiently. This is a major limitation, as such systems of constraints arise often in natural declarative specifications. We describe Cassowary---an incremental algorithm based on the dual simplex method, which can solve such systems of constraints efficiently. We have implemented the algorithm as part of a constraint-solving toolkit. We discuss the implementation of the toolkit, its application programming interface, and its performance.",
"In real-world problems related to finance, business, and management, mathematicians and economists frequently encounter optimization problems. In this classic book, George Dantzig looks at a wealth of examples and develops linear programming methods for their solutions. He begins by introducing the basic theory of linear inequalities and describes the powerful simplex method used to solve them. Treatments of the price concept, the transportation problem, and matrix methods are also given, and key mathematical concepts such as the properties of convex sets and linear vector spaces are covered.\"The author of this book was the main force in establishing a new mathematical discipline, and he has contributed to its further development at every stage and from every angle. This volume ... is a treasure trove for those who work in this field--teachers, students, and users alike. Its encyclopaedic coverage, due in part to collaboration with other experts, makes it an absolute must.\"--S. Vajda, Zentralblatt für Mathematik und ihre Grenzgebiete",
""
]
}
|
1401.1880
|
2952152174
|
In recent years, there has been growing focus on the study of automated recommender systems. Music recommendation systems serve as a prominent domain for such works, both from an academic and a commercial perspective. A fundamental aspect of music perception is that music is experienced in temporal context and in sequence. In this work we present DJ-MC, a novel reinforcement-learning framework for music recommendation that does not recommend songs individually but rather song sequences, or playlists, based on a model of preferences for both songs and song transitions. The model is learned online and is uniquely adapted for each listener. To reduce exploration time, DJ-MC exploits user feedback to initialize a model, which it subsequently updates by reinforcement. We evaluate our framework with human participants using both real song and playlist data. Our results indicate that DJ-MC's ability to recommend sequences of songs provides a significant improvement over more straightforward approaches, which do not take transitions into account.
|
Though not much work has attempted to model playlists directly, there has been substantial research on modeling similarity between artists and between songs. @cite_24 use semantic tags to learn a Gaussian process kernel function between pairs of songs. @cite_17 learn an embedding in a shared space of social tags, acoustic features and artist entities by optimizing an evaluation metric for various music retrieval tasks. @cite_3 model radio stations as probability distributions of items to be played, embedded in an inner-product space, using real playlist histories for training.
|
{
"cite_N": [
"@cite_24",
"@cite_3",
"@cite_17"
],
"mid": [
"2145713404",
"2027913476",
"1989445502"
],
"abstract": [
"This paper applies fast sparse multidimensional scaling (MDS) to a large graph of music similarity, with 267K vertices that represent artists, albums, and tracks; and 3.22M edges that represent similarity between those entities. Once vertices are assigned locations in a Euclidean space, the locations can be used to browse music and to generate playlists. MDS on very large sparse graphs can be effectively performed by a family of algorithms called Rectangular Dijkstra (RD) MDS algorithms. These RD algorithms operate on a dense rectangular slice of the distance matrix, created by calling Dijkstra a constant number of times. Two RD algorithms are compared: Landmark MDS, which uses the Nystrom approximation to perform MDS; and a new algorithm called Fast Sparse Embedding, which uses FastMap. These algorithms compare favorably to Laplacian Eigenmaps, both in terms of speed and embedding quality.",
"In the Internet music scene, where recommendation technology is key for navigating huge collections, large market players enjoy a considerable advantage. Accessing a wider pool of user feedback leads to an increasingly more accurate analysis of user tastes, effectively creating a \"rich get richer\" effect. This work aims at significantly lowering the entry barrier for creating music recommenders, through a paradigm coupling a public data source and a new collaborative filtering (CF) model. We claim that Internet radio stations form a readily available resource of abundant fresh human signals on music through their playlists, which are essentially cohesive sets of related tracks. In a way, our models rely on the knowledge of a diverse group of experts in lieu of the commonly used wisdom of crowds. Over several weeks, we aggregated publicly available playlists of thousands of Internet radio stations, resulting in a dataset encompassing millions of plays, and hundreds of thousands of tracks and artists. This provides the large scale ground data necessary to mitigate the cold start problem of new items at both mature and emerging services. Furthermore, we developed a new probabilistic CF model, tailored to the Internet radio resource. The success of the model was empirically validated on the collected dataset. Moreover, we tested the model at a cross-source transfer learning manner -- the same model trained on the Internet radio data was used to predict behavior of Yahoo! Music users. This demonstrates the ability to tap the Internet radio signals in other music recommendation setups. Based on encouraging empirical results, our hope is that the proposed paradigm will make quality music recommendation accessible to all interested parties in the community.",
"Abstract Music prediction tasks range from predicting tags given a song or clip of audio, predicting the name of the artist, or predicting related songs given a song, clip, artist name or tag. That is, we are interested in every semantic relationship between the different musical concepts in our database. In realistically sized databases, the number of songs is measured in the hundreds of thousands or more, and the number of artists in the tens of thousands or more, providing a considerable challenge to standard machine learning techniques. In this work, we propose a method that scales to such datasets which attempts to capture the semantic similarities between the database items by modelling audio, artist names, and tags in a single low-dimensional semantic embedding space. This choice of space is learnt by optimizing the set of prediction tasks of interest jointly using multi-task learning. Our single model learnt by training on the joint objective function is shown experimentally to have improved accur..."
]
}
|
1401.1763
|
2126223012
|
In this paper we consider the problem of approximating frequency moments in the streaming model. Given a stream @math of numbers from @math , the frequency of @math is defined as @math . The @math -th frequency moment of @math is defined as @math . In this paper we give an upper bound on the space required to find a @math -th frequency moment of @math bits that matches, up to a constant factor, the lower bound of Woodruff and Zhang (STOC 12) for constant @math and constant @math . Our algorithm makes a single pass over the stream and works for any constant @math .
|
In @cite_14 , the authors observed that it is possible to approximate @math in optimal polylogarithmic space. Kane, Nelson, and Woodruff @cite_49 gave a space-optimal solution for @math . Kane, Nelson, and Woodruff @cite_52 gave optimal-space results for @math . In addition to the original model of @cite_14 , a variety of different models of streams have been introduced. These models include the turnstile model (that allows insertion and deletion) @cite_16 , the sliding window model @cite_13 , and the distributed model @cite_48 @cite_19 @cite_17 . In the turnstile model, where the updates can be integers in the range @math , the latest bound by Ganguly @cite_6 is @math where @math . This bound is roughly @math for constant @math . Recently, Li and Woodruff provided a matching lower bound for @math @cite_23 . Thus, for the turnstile model, the problem has been solved optimally for @math @cite_6 @cite_23 . These results combined with our result demonstrate that the turnstile model is fundamentally different from the model of Alon, Matias, and Szegedy.
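For flavor, the single-pass sketching style this line of work builds on can be illustrated with the classic AMS estimator for the second frequency moment F_2. This is a textbook sketch, not the algorithm of this paper, and the random sign functions are stored explicitly here where a real implementation would use a small hash family:

```python
import random
from collections import Counter

def ams_f2_estimate(stream, num_estimators=400, seed=0):
    """Single-pass AMS sketch for F_2 = sum_i f_i^2.
    Each estimator keeps Z = sum_i s(i) * f_i with random signs
    s(i) in {-1, +1}; E[Z^2] = F_2, and averaging many independent
    estimators reduces the variance."""
    rng = random.Random(seed)
    signs = [{} for _ in range(num_estimators)]
    z = [0] * num_estimators
    for item in stream:
        for j in range(num_estimators):
            if item not in signs[j]:
                signs[j][item] = rng.choice((-1, 1))
            z[j] += signs[j][item]
    return sum(v * v for v in z) / num_estimators

r = random.Random(1)
stream = [r.randrange(20) for _ in range(5000)]
true_f2 = sum(c * c for c in Counter(stream).values())
est = ams_f2_estimate(stream)
print(true_f2, round(est))
```

With 400 estimators the relative error concentrates around a few percent, showing how a small sketch approximates a moment of the full frequency vector in one pass.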
|
{
"cite_N": [
"@cite_14",
"@cite_48",
"@cite_52",
"@cite_6",
"@cite_19",
"@cite_49",
"@cite_23",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"1990465412",
"1998272044",
"1522285917",
"2950318219",
"2103126020",
"8957553",
"2045533739",
"2128846062",
"2017757263"
],
"abstract": [
"",
"This paper presents algorithms for estimating aggregate functions over a \"sliding window\" of the N most recent data items in one or more streams. Our results include: For a single stream, we present the first ε-approximation scheme for the number of 1's in a sliding window that is optimal in both worst case time and space. We also present the first ε-approximation scheme for the sum of integers in [0..R] in a sliding window that is optimal in both worst case time and space (assuming R is at most polynomial in N). Both algorithms are deterministic and use only logarithmic memory words. In contrast, we show that a deterministic algorithm that estimates, to within a small constant relative error, the number of 1's (or the sum of integers) in a sliding window over the union of distributed streams requires Ω(N) space. We present the first randomized (ε,δ)-approximation scheme for the number of 1's in a sliding window over the union of distributed streams that uses only logarithmic memory words. We also present the first (ε,δ)-approximation scheme for the number of distinct values in a sliding window over distributed streams that uses only logarithmic memory words. Our results are obtained using a novel family of synopsis data structures.",
"We settle the 1-pass space complexity of (1 ± ε)-approximating the L_p norm, for real p with 1 ≤ p ≤ 2, of a length-n vector updated in a length-m stream with updates to its coordinates. We assume the updates are integers in the range [−M, M]. In particular, we show the space required is Θ(ε^(-2) log(mM) + log log(ns)) bits. Our result also holds for 0",
"We present an algorithm for computing @math , the @math th moment of an @math -dimensional frequency vector of a data stream, for @math , to within @math factors, @math with high constant probability. Let @math be the number of stream records and @math be the largest magnitude of a stream update. The algorithm uses space in bits @math where @math . Here @math is @math for @math and @math for @math . This improves upon the space required by current algorithms [iw:stoc05, bgks:soda06, ako:arxiv10, bo:arxiv10] by a factor of at least @math . The update time is @math . We use a new technique for designing estimators for functions of the form @math , where @math is a random variable and @math is a smooth function, based on a low-degree Taylor polynomial expansion of @math around an estimate of @math .",
"We resolve several fundamental questions in the area of distributed functional monitoring, initiated by Cormode, Muthukrishnan, and Yi (SODA, 2008). In this model there are @math sites each tracking their input and communicating with a central coordinator that continuously maintains an approximate output to a function @math computed over the union of the inputs. The goal is to minimize the communication. We show the randomized communication complexity of estimating the number of distinct elements up to a @math factor is @math , improving the previous @math bound and matching known upper bounds up to a logarithmic factor. For the @math -th frequency moment @math , @math , we improve the previous @math communication bound to @math . We obtain similar improvements for heavy hitters, empirical entropy, and other problems. We also show that we can estimate @math , for any @math , using @math communication. This greatly improves upon the previous @math bound of Cormode, Muthukrishnan, and Yi for general @math , and their @math bound for @math . For @math , our bound resolves their main open question. Our lower bounds are based on new direct sum theorems for approximate majority, and yield significant improvements to problems in the data stream model, improving the bound for estimating @math in @math passes from @math to @math , giving the first bound for estimating @math in @math passes of @math bits of space that does not use the gap-hamming problem.",
"We give the first optimal algorithm for estimating the number of distinct elements in a data stream, closing a long line of theoretical research on this problem begun by Flajolet and Martin in their seminal paper in FOCS 1983. This problem has applications to query optimization, Internet routing, network topology, and data mining. For a stream of indices in {1,...,n}, our algorithm computes a (1 ± ε)-approximation using an optimal O(1/ε^2 + log(n)) bits of space with 2/3 success probability. We also give an algorithm to estimate the Hamming norm of a stream, a generalization of the number of distinct elements, which is useful in data cleaning, packet tracing, and database auditing. Our algorithm uses nearly optimal space, and has optimal O(1) update and reporting times.",
"We show an Ω((n^(1−2/p) log M)/ε^2) bits of space lower bound for (1 + ε)-approximating the p-th frequency moment F_p = ‖x‖_p^p = Σ_(i=1)^n |x_i|^p of a vector x ∈ {−M, −M+1, …, M}^n with constant probability in the turnstile model for data streams, for any p > 2 and ε ≥ 1/n^(1/p) (we require ε ≥ 1/n^(1/p) since there is a trivial O(n log M) upper bound). This lower bound matches the space complexity of an upper bound of Ganguly for any p > 2 and ε ≥ 1/n^(1/p). This is again optimal for ε < 1/log^(O(1)) n.",
"In this article, we show several results obtained by combining the use of stable distributions with pseudorandom generators for bounded space. In particular:---We show that, for any p ∈ (0, 2], one can maintain (using only O(log n / ε^2) words of storage) a sketch C(q) of a point q ∈ l_p^n under dynamic updates of its coordinates. The sketch has the property that, given C(q) and C(s), one can estimate ‖q − s‖_p up to a factor of (1 + ε) with large probability. This solves the main open problem of [1999].---We show that the aforementioned sketching approach directly translates into an approximate algorithm that, for a fixed linear mapping A, and given x ∈ ℜ^n and y ∈ ℜ^m, estimates ‖Ax − y‖_p in O(n + m) time, for any p ∈ (0, 2]. This generalizes an earlier algorithm of Wasserman and Blum [1997] which worked for the case p = 2.---We obtain another sketch function C′ which probabilistically embeds l_1^n into a normed space l_1^m. The embedding guarantees that, if we set m = log(1/δ)^(O(1/ε)), then for any pair of points q, s ∈ l_1^n, the distance between q and s does not increase by more than (1 + ε) with constant probability, and it does not decrease by more than (1 − ε) with probability 1 − δ. This is the only known dimensionality reduction theorem for the l_1 norm. In fact, stronger theorems of this type (i.e., that guarantee very low probability of expansion as well as of contraction) cannot exist [Brinkman and Charikar 2003].---We give an explicit embedding of l_2^n into l_1^(n^O(log n)) with distortion (1 + 1/n^(Θ(1))).",
"In the streaming model elements arrive sequentially and can be observed only once. Maintaining statistics and aggregates is an important and non-trivial task in the model. This becomes even more challenging in the sliding windows model, where statistics must be maintained only over the most recent n elements. In their pioneering paper, Datar, Gionis, Indyk and Motwani [15] presented exponential histograms, an effective method for estimating statistics on sliding windows. In this paper we present a new smooth histograms method that improves the approximation error rate obtained via exponential histograms. Furthermore, our smooth histograms method not only captures and improves multiple previous results on sliding windows but also extends the class of functions that can be approximated on sliding windows. In particular, we provide the first approximation algorithms for the following functions: L_p norms for p ∉ [1,2], frequency moments, length of increasing subsequence and geometric mean.",
"In the model of continuous distributed monitoring, a number of observers each see a stream of observations. Their goal is to work together to compute a function of the union of their observations. This can be as simple as counting the total number of observations, or more complex non-linear functions such as tracking the entropy of the induced distribution. Assuming that it is too costly to simply centralize all the observations, it becomes quite challenging to design solutions which provide a good approximation to the current answer, while bounding the communication cost of the observers, and their other resources such as their space usage. This survey introduces this model, and describe a selection results in this setting, from the simple counting problem to a variety of other functions that have been studied."
]
}
|
1401.1711
|
2566513994
|
Communication networks are often designed and analyzed assuming tight synchronization among nodes. However, in applications that require communication in the energy-efficient regime of low signal-to-noise ratios, establishing tight synchronization among nodes in the network can result in a significant energy overhead. Motivated by a recent result showing that near-optimal energy efficiency can be achieved over the AWGN channel without requiring tight synchronization, we consider the question of whether the potential gains of cooperative communication can be achieved in the absence of synchronization. We focus on the symmetric Gaussian diamond network and establish that cooperative-communication gains are indeed feasible even with unsynchronized nodes. More precisely, we show that the capacity per unit energy of the unsynchronized symmetric Gaussian diamond network is within a constant factor of the capacity per unit energy of the corresponding synchronized network. To this end, we propose a distributed relaying scheme that does not require tight synchronization but nevertheless achieves most of the energy gains of coherent combining.
|
The insertion deletion channel as a model for communication with synchronization errors was introduced by Dobrushin in @cite_22 . Finding the capacity of this channel is still an open problem, the main difficulty being the channel memory introduced by the insertions and deletions. As a result of the difficulty in analyzing the general insertion deletion channel, most of the literature focuses on a simpler special case, the binary deletion channel, for which good approximations have been developed. However, the exact capacity for this special case too remains elusive. We refer the reader to @cite_17 and references therein for a survey on this topic up to 2009. Some of the work that has appeared since 2009 is @cite_9 , @cite_6 , @cite_4 , @cite_18 , @cite_20 .
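The binary deletion channel itself is simple to simulate, which makes the open status of its capacity all the more striking: each input bit is independently dropped with probability d, and the receiver sees only the surviving subsequence with no markers for the deleted positions. A minimal sketch (the deletion probability and block length are illustrative):

```python
import random

def deletion_channel(bits, d, rng):
    """Binary deletion channel: each bit is independently deleted
    with probability d; the output keeps the surviving bits in order."""
    return [b for b in bits if rng.random() >= d]

rng = random.Random(42)
x = [rng.randrange(2) for _ in range(100_000)]
y = deletion_channel(x, d=0.1, rng=rng)
print(len(y) / len(x))  # close to 1 - d = 0.9
```

The decoder's difficulty is visible here: unlike an erasure channel, the output carries no positional information, so the channel effectively has memory from the receiver's point of view.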
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_9",
"@cite_6",
"@cite_20",
"@cite_17"
],
"mid": [
"1997164983",
"2763035067",
"",
"2950191682",
"2121189333",
"2131586609",
"2125870772"
],
"abstract": [
"We study the binary deletion channel where each input bit is independently deleted according to a fixed probability. We relate the conditional probability distribution of the output of the deletion channel given the input to the hidden pattern matching problem. This yields a new characterization of the mutual information between the input and output of the deletion channel. Through this characterization we are able to comment on the deletion channel capacity, in particular for deletion probabilities approaching 0 and 1.",
"This paper considers a binary channel with deletions and insertions, where each input bit is transformed in one of the following ways: it is deleted with probability d, or an extra bit is added after it with probability i, or it is transmitted unmodified with probability 1-d-i. A computable lower bound on the capacity of this channel is derived. The transformation of the input sequence by the channel may be viewed in terms of runs as follows: some runs of the input sequence get shorter or longer, some runs get deleted, and some new runs are added. It is difficult for the decoder to synchronize the channel output sequence to the transmitted codeword mainly due to deleted runs and newly inserted runs. The main idea is a mutual information decomposition in terms of the rate achieved by a suboptimal decoder that determines the positions of the deleted and inserted runs in addition to decoding the transmitted codeword. The mutual information between the channel input and output sequences is expressed as the sum of the rate achieved by this decoder and the rate loss due to its suboptimality. Obtaining computable lower bounds on each of these quantities yields a lower bound on the capacity. The bounds proposed in this paper provide the first characterization of achievable rates for channels with general insertions, and for channels with both deletions and insertions. For the special case of the deletion channel, the proposed bound improves on the previous best lower bound for deletion probabilities up to 0.3.",
"",
"The deletion channel is the simplest point-to-point communication channel that models lack of synchronization. Despite significant effort, little is known about its capacity, and even less about optimal coding schemes. In this paper we initiate a new systematic approach to this problem, by demonstrating that capacity can be computed in a series expansion for small deletion probability. We compute two leading terms of this expansion, and show that capacity is achieved, up to this order, by i.i.d. uniform random distribution of the input. We think that this strategy can be useful in a number of capacity calculations.",
"In this paper, we consider the capacity C of the binary deletion channel for the limiting case where the deletion probability p goes to 0. It is known that for any p < 1/2, the capacity satisfies C ≥ 1−H(p), where H is the standard binary entropy. We show that this lower bound is essentially tight in the limit, by providing an upper bound C ≤ 1−(1−o(1))H(p), where the o(1) term is understood to be vanishing as p goes to 0. Our proof utilizes a natural counting argument that should prove helpful in analyzing related channels.",
"We propose a new channel model for channels with synchronization errors. Using this model, we give simple, non-trivial and, in some cases, tight lower bounds on the capacity for certain synchronization error channels.",
""
]
}
|
1401.1711
|
2566513994
|
Communication networks are often designed and analyzed assuming tight synchronization among nodes. However, in applications that require communication in the energy-efficient regime of low signal-to-noise ratios, establishing tight synchronization among nodes in the network can result in a significant energy overhead. Motivated by a recent result showing that near-optimal energy efficiency can be achieved over the AWGN channel without requiring tight synchronization, we consider the question of whether the potential gains of cooperative communication can be achieved in the absence of synchronization. We focus on the symmetric Gaussian diamond network and establish that cooperative-communication gains are indeed feasible even with unsynchronized nodes. More precisely, we show that the capacity per unit energy of the unsynchronized symmetric Gaussian diamond network is within a constant factor of the capacity per unit energy of the corresponding synchronized network. To this end, we propose a distributed relaying scheme that does not require tight synchronization but nevertheless achieves most of the energy gains of coherent combining.
|
The notion of capacity per unit energy was analyzed for the synchronized AWGN channel in @cite_3 . It is shown there that pulse-position modulation achieves the capacity per unit energy of this channel. Single-letter characterizations for the capacity per unit cost for general cost functions and general synchronized discrete memoryless channels were found in @cite_10 . However, these results depend on the channel being memoryless, whereas the channel considered here has memory due to the presence of insertions and deletions. Hence, these results cannot be directly applied here. The pulse-position modulation scheme was generalized for unsynchronized channels in @cite_5 , where it was shown to achieve the capacity per unit energy of the unsynchronized AWGN channel. We point out that pulse-position modulation itself cannot be extended to the unsynchronized diamond network considered in this paper. Indeed, unless the relays decode the message sent by the transmitter, they end up spending a significant amount of energy forwarding noise. However, requiring the relays to decode the message can also be suboptimal---for instance, when the source-relay channel is weak.
|
{
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_3"
],
"mid": [
"1906375359",
"2130128605",
""
],
"abstract": [
"Communication systems are traditionally designed to have tight transmitter-receiver synchronization. This requirement has negligible overhead in the high-SNR regime. However, in many applications, such as wireless sensor networks, communication needs to happen primarily in the energy-efficient regime of low SNR, where requiring tight synchronization can be highly suboptimal. In this paper, we model the noisy channel with synchronization errors as an insertion/deletion/substitution channel. For this channel, we propose a new communication scheme that requires only loose transmitter-receiver synchronization. We show that the proposed scheme is asymptotically optimal for the Gaussian channel with synchronization errors in terms of energy efficiency as measured by the rate per unit energy. In the process, we also establish that the lack of synchronization causes negligible loss in energy efficiency. We further show that, for a general discrete memoryless channel with synchronization errors and a general cost function (with a zero-cost symbol) on the input, the rate per unit cost achieved by the proposed scheme is within a factor two of the information-theoretic optimum.",
"Memoryless communication channels with arbitrary alphabets where each input symbol is assigned a cost are considered. The maximum number of bits that can be transmitted reliably through the channel per unit cost is studied. It is shown that, if the input alphabet contains a zero-cost symbol, then the capacity per unit cost admits a simple expression as the maximum normalized divergence between two conditional output distributions. The direct part of this coding theorem admits a constructive proof via Stein's lemma on the asymptotic error probability of binary hypothesis tests. Single-user, multiple-access, and interference channels are studied.",
""
]
}
|
1401.1711
|
2566513994
|
Communication networks are often designed and analyzed assuming tight synchronization among nodes. However, in applications that require communication in the energy-efficient regime of low signal-to-noise ratios, establishing tight synchronization among nodes in the network can result in a significant energy overhead. Motivated by a recent result showing that near-optimal energy efficiency can be achieved over the AWGN channel without requiring tight synchronization, we consider the question of whether the potential gains of cooperative communication can be achieved in the absence of synchronization. We focus on the symmetric Gaussian diamond network and establish that cooperative-communication gains are indeed feasible even with unsynchronized nodes. More precisely, we show that the capacity per unit energy of the unsynchronized symmetric Gaussian diamond network is within a constant factor of the capacity per unit energy of the corresponding synchronized network. To this end, we propose a distributed relaying scheme that does not require tight synchronization but nevertheless achieves most of the energy gains of coherent combining.
|
Since its introduction in @cite_14 , there have been numerous works analyzing the capacity of the synchronized Gaussian diamond network. The achievable rates of two well-known relaying schemes, decode-forward and amplify-forward, were analyzed for the two-relay diamond network in @cite_14 . To counter the poor performance of amplify-forward at low SNR, the bursty amplify-forward scheme was proposed in @cite_1 . This scheme was shown to be approximately optimal for the symmetric Gaussian diamond network with an arbitrary number @math of relays in @cite_15 , both in the sense of a uniform additive gap of @math bits and a uniform multiplicative gap of a factor @math .
|
{
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_1"
],
"mid": [
"2548811864",
"2154164662",
"1528321597"
],
"abstract": [
"We consider the Gaussian “diamond” or parallel relay network, in which a source node transmits a message to a destination node with the help of N relays. Even for the symmetric setting, in which the channel gains to the relays are identical and the channel gains from the relays are identical, the capacity of this channel is unknown in general. The best known capacity approximation is up to an additive gap of order N bits and up to a multiplicative gap of order N2, with both gaps independent of the channel gains. In this paper, we approximate the capacity of the symmetric Gaussian N-relay diamond network up to an additive gap of 1.8 bits and up to a multiplicative gap of a factor 14. Both gaps are independent of the channel gains and, unlike the best previously known result, are also independent of the number of relays N in the network. Achievability is based on bursty amplify-and-forward, showing that this simple scheme is uniformly approximately optimal, both in the low-rate as well as in the high-rate regimes. The upper bound on capacity is based on a careful evaluation of the cut-set bound. We also present approximation results for the asymmetric Gaussian N-relay diamond network. In particular, we show that bursty amplify-and-forward combined with optimal relay selection achieves a rate within a factor O(log4(N)) of capacity with preconstant in the order notation independent of the channel gains.",
"We introduce the real, discrete-time Gaussian parallel relay network. This simple network is theoretically important in the context of network information theory. We present upper and lower bounds to capacity and explain where they coincide.",
""
]
}
|
1401.1711
|
2566513994
|
Communication networks are often designed and analyzed assuming tight synchronization among nodes. However, in applications that require communication in the energy-efficient regime of low signal-to-noise ratios, establishing tight synchronization among nodes in the network can result in a significant energy overhead. Motivated by a recent result showing that near-optimal energy efficiency can be achieved over the AWGN channel without requiring tight synchronization, we consider the question of whether the potential gains of cooperative communication can be achieved in the absence of synchronization. We focus on the symmetric Gaussian diamond network and establish that cooperative-communication gains are indeed feasible even with unsynchronized nodes. More precisely, we show that the capacity per unit energy of the unsynchronized symmetric Gaussian diamond network is within a constant factor of the capacity per unit energy of the corresponding synchronized network. To this end, we propose a distributed relaying scheme that does not require tight synchronization but nevertheless achieves most of the energy gains of coherent combining.
|
The constant multiplicative approximation guarantee provided in @cite_15 implies that bursty amplify-and-forward also achieves the capacity of the synchronized symmetric Gaussian diamond per unit energy up to the same multiplicative gap. For the symmetric diamond networks with only two relays, it was shown in @cite_7 that bursty amplify-forward and superposition-partial-decode-forward achieve the capacity per unit energy up to a factor of at most @math and at most @math , respectively. The capacity per unit energy of the canonical synchronized single-relay channel was approximated to within a factor @math in @cite_0 . A bursty amplify-and-forward scheme was also shown to achieve the optimal outage capacity per unit energy for the frequency-division relay channel at low SNR and at low outage probability @cite_16 . It is worth pointing out that the bursty constructions mentioned here do not generalize to the unsynchronized diamond network considered in this paper, since, due to the synchronization errors, the bursty codewords cannot be made to combine constructively at the destination.
|
{
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_16",
"@cite_7"
],
"mid": [
"2143157029",
"2548811864",
"2154056004",
"1986946279"
],
"abstract": [
"Upper and lower bounds on the capacity and minimum energy-per-bit for general additive white Gaussian noise (AWGN) and frequency-division AWGN (FD-AWGN) relay channel models are established. First, the max-flow min-cut bound and the generalized block-Markov coding scheme are used to derive upper and lower bounds on capacity. These bounds are never tight for the general AWGN model and are tight only under certain conditions for the FD-AWGN model. Two coding schemes that do not require the relay to decode any part of the message are then investigated. First, it is shown that the \"side-information coding scheme\" can outperform the block-Markov coding scheme. It is also shown that the achievable rate of the side-information coding scheme can be improved via time sharing. In the second scheme, the relaying functions are restricted to be linear. The problem is reduced to a \"single-letter\" nonconvex optimization problem for the FD-AWGN model. The paper also establishes a relationship between the minimum energy-per-bit and capacity of the AWGN relay channel. This relationship together with the lower and upper bounds on capacity are used to establish corresponding lower and upper bounds on the minimum energy-per-bit that do not differ by more than a factor of 1.45 for the FD-AWGN relay channel model and 1.7 for the general AWGN model.",
"We consider the Gaussian “diamond” or parallel relay network, in which a source node transmits a message to a destination node with the help of N relays. Even for the symmetric setting, in which the channel gains to the relays are identical and the channel gains from the relays are identical, the capacity of this channel is unknown in general. The best known capacity approximation is up to an additive gap of order N bits and up to a multiplicative gap of order N2, with both gaps independent of the channel gains. In this paper, we approximate the capacity of the symmetric Gaussian N-relay diamond network up to an additive gap of 1.8 bits and up to a multiplicative gap of a factor 14. Both gaps are independent of the channel gains and, unlike the best previously known result, are also independent of the number of relays N in the network. Achievability is based on bursty amplify-and-forward, showing that this simple scheme is uniformly approximately optimal, both in the low-rate as well as in the high-rate regimes. The upper bound on capacity is based on a careful evaluation of the cut-set bound. We also present approximation results for the asymmetric Gaussian N-relay diamond network. In particular, we show that bursty amplify-and-forward combined with optimal relay selection achieves a rate within a factor O(log4(N)) of capacity with preconstant in the order notation independent of the channel gains.",
"In slow-fading scenarios, cooperation between nodes can increase the amount of diversity for communication. We study the performance limit in such scenarios by analyzing the outage capacity of slow fading relay channels. Our focus is on the low signal-to-noise ratio (SNR) and low outage probability regime, where the adverse impact of fading is greatest but so are the potential gains from cooperation. We showed that while the standard Amplify-Forward protocol performs very poorly in this regime, a modified version we called the Bursty Amplify-Forward protocol is optimal and achieves the outage capacity of the network. Moreover, this performance can be achieved without a priori channel knowledge at the receivers. In contrast, the Decode-Forward protocol is strictly suboptimal in this regime. Our results directly yield the outage capacity per unit energy of fading relay channels",
"A new communication scheme for Gaussian parallel relay networks based on superposition coding and partial decoding at the relays is presented. Some specific examples are proposed in which two codebook layers are superimposed. The first-level codebook is constructed with symbols from a binary or ternary alphabet, while the second-level codebook is composed of codewords chosen with Gaussian symbols. The new communication scheme is a generalization of decode-and-forward, amplify-and-forward, and bursty-amplify-and-forward. The asymptotic low-signal-to-noise-ratio regime is studied using achievable rates and minimum energy-per-bit as performance metrics. It is shown that the new scheme outperforms all previously known schemes for some channels and parameter ranges."
]
}
|
1401.1711
|
2566513994
|
Communication networks are often designed and analyzed assuming tight synchronization among nodes. However, in applications that require communication in the energy-efficient regime of low signal-to-noise ratios, establishing tight synchronization among nodes in the network can result in a significant energy overhead. Motivated by a recent result showing that near-optimal energy efficiency can be achieved over the AWGN channel without requiring tight synchronization, we consider the question of whether the potential gains of cooperative communication can be achieved in the absence of synchronization. We focus on the symmetric Gaussian diamond network and establish that cooperative-communication gains are indeed feasible even with unsynchronized nodes. More precisely, we show that the capacity per unit energy of the unsynchronized symmetric Gaussian diamond network is within a constant factor of the capacity per unit energy of the corresponding synchronized network. To this end, we propose a distributed relaying scheme that does not require tight synchronization but nevertheless achieves most of the energy gains of coherent combining.
|
Different types of synchronization errors have been considered in the literature. Insertion/deletion channels, as used in this paper, model clock drift and jitter at the symbol level. A different approach is to assume that the clocks at different nodes only differ by a constant offset with respect to each other. For such errors, the offset is typically either assumed to be equal to a multiple of the symbol interval (frame asynchronism @cite_2 ) or equal to a length less than one symbol interval (symbol asynchronism). The multiple-access channel capacity region under symbol asynchronism and frame asynchronism has been analyzed in @cite_19 and @cite_11 @cite_8 , respectively. Several recent works, @cite_13 and @cite_21 , address the energy efficiency of bursty data communication. Here, asynchronism is not in the sense of symbol or frame asynchronism as described above, but in the sense that data arrives at the source sporadically at some random time unknown beforehand.
|
{
"cite_N": [
"@cite_8",
"@cite_21",
"@cite_19",
"@cite_2",
"@cite_13",
"@cite_11"
],
"mid": [
"2032654992",
"1970875440",
"2108913609",
"1965374299",
"2144128638",
"2113831696"
],
"abstract": [
"It is shown that the capacity region of the asynchronous multiple-access channel differs from that of the synchronous channel only by the lack of a convex hull operation.",
"When data traffic in a wireless network is bursty, small amounts of data sporadically become available for transmission, and the energy cost associated with synchronizing the network nodes prior to each communication block is not negligible. Therefore, designing energy-efficient communication schemes for such asynchronous scenarios is of particular importance. In this paper, we show that, for symmetric diamond networks, by performing the tasks of synchronization and communication separately, it is possible to achieve the minimum energy-per-bit to within a factor that ranges from 2 in the synchronous case to 1 in the highly asynchronous regime.",
"An equivalent discrete-time Gaussian channel parametrized by the signal cross-correlations is derived to obtain an equivalent channel model with discrete-time outputs. The main feature introduced by the lack of symbol synchronism is that the channel has memory. This is due to the overlap of each symbol transmitted by a user with two consecutive symbols transmitted by the other user. It is shown that if the transmitters are assigned the same waveform, symbol asynchronism has no effect on the two-user capacity region of the white Gaussian channel which is equal to the Cover-Wyner pentagon, whereas if the assigned waveforms are different (e.g., code division multiple access), the symbol-asynchronous capacity region is no longer a pentagon. An alternative representation of the capacity region which results in a particularly compact characterization of the fundamental limits of the multiple-access channel in the region of signal-to-noise ratios is also considered.",
"This paper considers the optimum method for locating a sync word periodically imbedded in binary data and received over the additive white Gaussian noise channel. It is shown that the optimum rule is to select the location that maximizes the sum of the correlation and a correction term. Simulations are reported that show approximately a 3-dB improvement at interesting signal-to-noise ratios compared to a pure correlation rule. Extensions are given to the \"phase-shift keyed (PSK) sync\" case where the detector output has a binary ambiguity and to the case of Gaussian data.",
"The capacity per unit cost, or, equivalently, the minimum cost to transmit one bit, is a well-studied quantity under the assumption of full synchrony between the transmitter and the receiver. In many applications, such as sensor networks, transmissions are very bursty, with amounts of bits arriving infrequently at random times. In such scenarios, the cost of acquiring synchronization is significant and one is interested in the fundamental limits on communication without assuming a priori synchronization. In this paper, the minimum cost to transmit B bits of information asynchronously is shown to be equal to (B + H) k_sync, where k_sync is the synchronous minimum cost per bit and H is a measure of timing uncertainty equal to the entropy for most reasonable arrival time distributions. This result holds when the transmitter can stay idle at no cost and is a particular case of a general result which holds for arbitrary cost functions.",
"The capacity region for the discrete memoryless multiple-access channel without time synchronization at the transmitters and receivers is shown to be the same as the known capacity region for the ordinary multiple-access channel. The proof utilizes time sharing of two optimal codes for the ordinary multiple-access channel and uses maximum likelihood decoding over shifts of the hypothesized transmitter words."
]
}
|
1401.1711
|
2566513994
|
Communication networks are often designed and analyzed assuming tight synchronization among nodes. However, in applications that require communication in the energy-efficient regime of low signal-to-noise ratios, establishing tight synchronization among nodes in the network can result in a significant energy overhead. Motivated by a recent result showing that near-optimal energy efficiency can be achieved over the AWGN channel without requiring tight synchronization, we consider the question of whether the potential gains of cooperative communication can be achieved in the absence of synchronization. We focus on the symmetric Gaussian diamond network and establish that cooperative-communication gains are indeed feasible even with unsynchronized nodes. More precisely, we show that the capacity per unit energy of the unsynchronized symmetric Gaussian diamond network is within a constant factor of the capacity per unit energy of the corresponding synchronized network. To this end, we propose a distributed relaying scheme that does not require tight synchronization but nevertheless achieves most of the energy gains of coherent combining.
|
Another model for asynchronism has been proposed and analyzed in @cite_12 . Here, the asynchronism is modeled by stretching or shrinking the continuous-time transmitted signal by a time-dependent factor. This model can be thought of as the continuous-time analog of the asynchronism model that we use; however, a number of additional assumptions are required to make the continuous-time problem mathematically tractable: the compression/stretch factor is assumed to lie between two positive numbers (i.e. bounded), the communication system is assumed to be noiseless and have infinite bandwidth, the allowed run-lengths lie in a closed positive interval, and the input alphabet is finite.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2115885621"
],
"abstract": [
"We introduce the continuous time asynchronous channel as a model for time jitter in a communication system with no common clock between the transmitter and the receiver. We have obtained a simple characterization for an optimal zero-error self-synchronizable code for the asynchronous channel. The capacity of this channel is determined by both a combinatorial approach and a probabilistic approach. Our results unveil the somewhat surprising fact that it is not necessary for the receiver clock to resynchronize with the transmitter clock within a fixed maximum time in order to achieve reliable communication. This means that no upper limit should be imposed on the run lengths of the self-synchronization code as in the case of run-length limited (RLL) codes which are commonly used in magnetic recording."
]
}
|
1401.1803
|
1782560969
|
Recent work on learning multilingual word representations usually relies on the use of word-level alignments (e.g. inferred with the help of GIZA++) between translated sentences, in order to align the word embeddings in different languages. In this workshop paper, we investigate an autoencoder model for learning multilingual word representations that does without such word-level alignments. The autoencoder is trained to reconstruct the bag-of-word representation of a given sentence from an encoded representation extracted from its translation. We evaluate our approach on a multilingual document classification task, where labeled data is available only for one language (e.g. English) while classification must be performed in a different language (e.g. French). In our experiments, we observe that our method compares favorably with a previously proposed method that exploits word-level alignments to learn word representations.
|
We mentioned that recent work has considered the problem of learning multilingual representations of words and usually relies on word-level alignments. propose to train simultaneously two neural network language models, along with a regularization term that encourages pairs of frequently aligned words to have similar word embeddings. use a similar approach, with a different form for the regularizer and neural network language models as in @cite_7 . In our work, we specifically investigate whether a method that does not rely on word-level alignments can learn comparably useful multilingual embeddings in the context of document classification.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2158899491"
],
"abstract": [
"We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements."
]
}
|
1401.1526
|
1596450573
|
In the generalized Russian cards problem, we have a card deck @math of @math cards and three participants, Alice, Bob, and Cathy, dealt @math , @math , and @math cards, respectively. Once the cards are dealt, Alice and Bob wish to privately communicate their hands to each other via public announcements, without the advantage of a shared secret or public key infrastructure. Cathy should remain ignorant of all but her own cards after Alice and Bob have made their announcements. Notions for Cathy's ignorance in the literature range from Cathy not learning the fate of any individual card with certainty (weak @math -security) to not gaining any probabilistic advantage in guessing the fate of some set of @math cards (perfect @math -security). As we demonstrate, the generalized Russian cards problem has close ties to the field of combinatorial designs, on which we rely heavily, particularly for perfect security notions. Our main result establishes an equivalence between perfectly @math -secure strategies and @math -designs on @math points with block size @math , when announcements are chosen uniformly at random from the set of possible announcements. We also provide construction methods and example solutions, including a construction that yields perfect @math -security against Cathy when @math . We leverage a known combinatorial design to construct a strategy with @math , @math , and @math that is perfectly @math -secure. Finally, we consider a variant of the problem that yields solutions that are easy to construct and optimal with respect to both the number of announcements and level of security achieved. Moreover, this is the first method obtaining weak @math -security that allows Alice to hold an arbitrary number of cards and Cathy to hold a set of @math cards. Alternatively, the construction yields solutions for arbitrary @math , @math and any @math .
|
The Russian cards problem and variants of it have received a fair amount of attention in the literature, with focus ranging from possible applications to key generation @cite_19 @cite_1 @cite_7 @cite_10 @cite_0 @cite_8 @cite_2 @cite_9 @cite_15 , to analyses based on epistemic logic @cite_14 @cite_18 @cite_20 @cite_4 , to card deals with more than three players @cite_13 @cite_17 . Of more relevance to our work is the recent research that takes a combinatorial approach @cite_24 @cite_16 @cite_15 @cite_9 @cite_5 @cite_25 , on which we now focus.
|
{
"cite_N": [
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_20",
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_17",
"@cite_7",
"@cite_19",
"@cite_16",
"@cite_25",
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_0",
"@cite_24",
"@cite_13"
],
"mid": [
"",
"1608583290",
"",
"2061333098",
"2153157844",
"1594719402",
"",
"50630933",
"1694772419",
"1569180537",
"285496042",
"1643493712",
"2141368734",
"",
"230082676",
"1516788100",
"2041435496",
"165070137",
"2027785398"
],
"abstract": [
"",
"Consider three players Alice, Bob and Cath who hold a, b and c cards, respectively, from a deck of d = a+b+c cards. The cards are all different and players only know their own cards. Suppose Alice and Bob wish to communicate their cards to each other without Cath learning whether Alice or Bob holds a specific card. Considering the cards as consecutive natural numbers 0, 1, . . . , we investigate general conditions for when Alice or Bob can safely announce the sum of the cards they hold modulo an appropriately chosen integer. We demonstrate that this holds whenever a, b > 2 and c = 1. Because Cath holds a single card, this also implies that Alice and Bob will learn the card deal from the other player’s announcement.",
"",
"",
"We implement a specific protocol for bit exchange among card-playing agents in three different state-of-the-art epistemic model checkers and compare the results.",
"In this paper it is shown that no public announcement scheme that can be modeled in Dynamic Epistemic Logic (DEL) can solve the Russian Cards Problem (RCP) in one announcement. Since DEL is a general model for any public announcement scheme (11), (3), (6), (21), (12) we conclude that there exists no single-announcement solution to the RCP. The proof demonstrates the utility of DEL in proving lower bounds for communication protocols. It is also shown that a general version of RCP has no two-announcement solution when the adversary has a sufficiently large number of cards.",
"",
"Using a random deal of cards to players and a computationally unlimited eavesdropper, all players wish to share a one-bit secret key which is information-theoretically secure from the eavesdropper. This can be done by a protocol to make several pairs of players share one-bit secret keys so that all these pairs form a tree over players. In this paper we obtain a necessary and sufficient condition on the number of cards for the existence of such a protocol.",
"This paper is concerned with public communication with the Russian Cards protocol. First, a couple of small flaws in [10] are corrected. Then an improved Russian Cards protocol is presented. As a case study, R(6, 31)(6 players and 31 cards) protocol is used to generate a common password for 5 parties who wish to access a shared file over the Internet.",
"",
"Protocols are developed and analyzed for transmitting a secret bit between a sender and a receiver process using only the information contained in a random deal of hands of specified sizes from a deck of n distinct cards. The sender's and receiver's algorithms are known in advance, and all conversation between sender and receiver is public and is heard by all. A correct protocol always succeeds in transmitting the secret bit, and the other player(s), who receive the remaining cards and are assumed to have unlimited computing power, gain no information whatsoever about the value of the secret bit. In other words, their probability of correctly guessing the secret bit is exactly the same after listening to a run of the protocol as it was before. Both randomized and deterministic protocols are considered. A randomized protocol is described which works whenever the sender's and receiver's hands comprise a constant fraction of the deck, for all sufficiently large decks. A deterministic protocol is also described, but it requires the sender and receiver to each have approximately 44% of the cards. A general condition is presented that provides a lower bound on sizes of the sender's and receiver's hands in order for a protocol to exist. There is still a considerable gap between the upper and lower bounds, and improving the bounds remains an open problem.",
"Public key cryptography bases its security on mathematical problems that are computationally hard to solve. There are also cryptographic protocols wherein discovering the secret is not too complex given the current state of technology, but logically impossible. One approach involves the use of a random deck of cards for such protocols. The messages in the protocol consist of announcements made by the two players attempting to exchange a secret. In that approach one may require that these two players can communicate their holding of cards to each other without the remaining players (eavesdroppers) learning a single card in these holdings. In past work we described combinatorial axioms to achieve that. The eavesdroppers may still be able to make an educated guess about individual card occurrence (card ownership). In this work we focus on overcoming such bias. From the perspective of cryptography there are two ways to do that: either use protocols that produce announcements that are unbiased for card occurrence, or use protocols that ensure that there is no relation between patterns in the announcement, such as card occurrence, and the actual holding. We focus on the first. We devise an additional requirement for the announcement in order to eliminate the possibility of making educated guesses. To that effect we propose an additional combinatorial axiom CA4, and we give a method to design announcements that meet this requirement. Additionally, we present unbiased protocols for incidental cases.",
"We present the first formal mathematical presentation of the generalized Russian cards problem, and provide rigorous security definitions that capture both basic and extended versions of weak and perfect security notions. In the generalized Russian cards problem, three players, Alice, Bob, and Cathy, are dealt a deck of @math cards, each given @math , @math , and @math cards, respectively. The goal is for Alice and Bob to learn each other's hands via public communication, without Cathy learning the fate of any particular card. The basic idea is that Alice announces a set of possible hands she might hold, and Bob, using knowledge of his own hand, should be able to learn Alice's cards from this announcement, but Cathy should not. Using a combinatorial approach, we are able to give a nice characterization of informative strategies (i.e., strategies allowing Bob to learn Alice's hand), having optimal communication complexity, namely the set of possible hands Alice announces must be equivalent to a large set of @math -designs, where @math . We also provide some interesting necessary conditions for certain types of deals to be simultaneously informative and secure. That is, for deals satisfying @math for some @math , where @math and the strategy is assumed to satisfy a strong version of security (namely perfect @math -security), we show that @math and hence @math . We also give a precise characterization of informative and perfectly @math -secure deals of the form @math satisfying @math involving @math -designs.",
"",
"Given an interpreted system, we investigate ways for two agents to communicate secrets by public announcements. For card deals, the problem to keep all of your cards a secret (i) can be distinguished from the problem to keep some of your cards a secret (ii). For (i): we characterize a novel class of protocols consisting of two announcements, for the case where two agents both hold n cards and the third agent a single card; the communicating agents announce the sum of their cards modulo 2n + 1. For (ii): we show that the problem to keep at least one of your cards a secret is equivalent to the problem to keep your local state (hand of cards) a secret; we provide a large class of card deals for which exchange of secrets is possible; and we give an example for which there is no protocol of less than three announcements.",
"We consider the problem of multiparty secret key exchange. A \"team\" of players P1 through Pk wishes to determine an n-bit secret key in the presence of a computationally unlimited eavesdropper, Eve. The team players are dealt hands of cards of prespecified sizes from a deck of d distinct cards; any remaining cards are dealt to Eve. We explore how the team can use the information contained in their hands of cards to determine an n-bit key that is secret from Eve, that is, an n bit string which each team player knows exactly but for which Eve's probability of guessing the key correctly is 1 2n both before and after she hears the communication between the team players. We describe randomized protocols for secret key exchange that work for certain classes of deals, and we present some conditions on the deal for such a protocol to exist.",
"We present a general model for communication among a \"team\" of players overheard by a passive eavesdropper, Eve, in which all players including Eve are given private inputs that may be correlated. We define and explore secret key exchange in this model. Our secrecy requirements are information-theoretic and hold even if Eve is computationally unlimited. In particular, we consider the situation in which the team players are dealt hands of cards of prespecified sizes from a known deck of distinct cards. We explore when the team players can use the information contained in their hands to determine a value that each team player knows exactly but Eve cannot guess.",
"Two parties A and B select a cards and b cards from a known deck and a third party C receives the remaining c cards. We consider methods whereby A can, in a single message, publicly inform B of her hand without C learning any card held by A or by B. Conditions on a, b, c are given for the existence of an appropriate message.",
"This paper investigates Russian Cards problem for the purpose of unconditional secure communication. First, a picking rule and deleting rule as well as safe communication condition are given to deal with the problem with 3 players and 7 cards. Further, the problem is generalized to tackle n players and n(n−1)+1 cards. A new picking rule for constructing the announcement is presented, and a new deleting rule for players to determine each other’s cards is formalized. Moreover, the safe communication condition is proved. In addition, to illustrate the approach, an example for 5 players and 21 cards is presented in detail."
]
}
|
1401.1526
|
1596450573
|
In the generalized Russian cards problem, we have a card deck @math of @math cards and three participants, Alice, Bob, and Cathy, dealt @math , @math , and @math cards, respectively. Once the cards are dealt, Alice and Bob wish to privately communicate their hands to each other via public announcements, without the advantage of a shared secret or public key infrastructure. Cathy should remain ignorant of all but her own cards after Alice and Bob have made their announcements. Notions for Cathy's ignorance in the literature range from Cathy not learning the fate of any individual card with certainty (weak @math -security) to not gaining any probabilistic advantage in guessing the fate of some set of @math cards (perfect @math -security). As we demonstrate, the generalized Russian cards problem has close ties to the field of combinatorial designs, on which we rely heavily, particularly for perfect security notions. Our main result establishes an equivalence between perfectly @math -secure strategies and @math -designs on @math points with block size @math , when announcements are chosen uniformly at random from the set of possible announcements. We also provide construction methods and example solutions, including a construction that yields perfect @math -security against Cathy when @math . We leverage a known combinatorial design to construct a strategy with @math , @math , and @math that is perfectly @math -secure. Finally, we consider a variant of the problem that yields solutions that are easy to construct and optimal with respect to both the number of announcements and level of security achieved. Moreover, this is the first method obtaining weak @math -security that allows Alice to hold an arbitrary number of cards and Cathy to hold a set of @math cards. Alternatively, the construction yields solutions for arbitrary @math , @math and any @math .
|
Many useful results concerning parameter bounds and announcement sizes for weak 1-security, some of which we use in this paper, are given by @cite_24 . @cite_9 @cite_15 and Cordón-Franco et al. @cite_5 discuss protocols for card deals of a particular form that achieve weak 1-security, using card sums modulo an appropriate parameter for announcements. @cite_16 is the only work of which we are aware that treats security notions stronger than weak 1-security, other than work by Swanson and Stinson @cite_25 and subsequent work by Cordón-Franco et al. @cite_3 .
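The card-sum announcements mentioned above can be made concrete with the classic seven-card deal (Alice and Bob hold three cards each, Cathy one): Alice announces the sum of her hand modulo 7. The sketch below is illustrative only; the particular deal is an assumption, and the assertions check informativeness and weak 1-security for that single deal rather than for all deals.

```python
from itertools import combinations

DECK = set(range(7))  # classic deal: Alice 3 cards, Bob 3, Cathy 1

def hands_with_sum(s, exclude):
    """All 3-card hands avoiding `exclude` whose card sum is s mod 7."""
    rest = sorted(DECK - set(exclude))
    return [set(h) for h in combinations(rest, 3) if sum(h) % 7 == s]

# One possible deal.
alice, bob, cathy = {0, 1, 2}, {3, 4, 5}, {6}

announcement = sum(alice) % 7  # Alice's single public message

# Informative: Bob, knowing his own hand, finds exactly one candidate.
bob_view = hands_with_sum(announcement, bob)
assert bob_view == [alice]

# Weakly 1-secure: for every card Cathy does not hold, some candidate
# hand contains it and some candidate hand omits it, so Cathy cannot
# determine any card's owner from the announcement.
cathy_view = hands_with_sum(announcement, cathy)
for card in DECK - cathy:
    assert any(card in h for h in cathy_view)
    assert any(card not in h for h in cathy_view)
```

Here Bob narrows the announcement down to Alice's actual hand, while Cathy is left with several consistent hands that disagree on every card outside her own.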
|
{
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_24",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_25"
],
"mid": [
"230082676",
"2010404638",
"165070137",
"1608583290",
"",
"1643493712",
"2141368734"
],
"abstract": [
"Given an interpreted system, we investigate ways for two agents to communicate secrets by public announcements. For card deals, the problem to keep all of your cards a secret (i) can be distinguished from the problem to keep some of your cards a secret (ii). For (i): we characterize a novel class of protocols consisting of two announcements, for the case where two agents both hold n cards and the third agent a single card; the communicating agents announce the sum of their cards modulo 2n + 1. For (ii): we show that the problem to keep at least one of your cards a secret is equivalent to the problem to keep your local state (hand of cards) a secret; we provide a large class of card deals for which exchange of secrets is possible; and we give an example for which there is no protocol of less than three announcements.",
"In the generalized Russian cards problem, the three players Alice, Bob and Cath draw @math a , b and @math c cards, respectively, from a deck of @math a + b + c cards. Players only know their own cards and what the deck of cards is. Alice and Bob are then required to communicate their hand of cards to each other by way of public messages. For a natural number @math k , the communication is said to be @math k -safe if Cath does not learn whether or not Alice holds any given set of at most @math k cards that are not Cath's, a notion originally introduced as weak @math k -security by Swanson and Stinson. An elegant solution by Atkinson views the cards as points in a finite projective plane. We propose a general solution in the spirit of Atkinson's, although based on finite vector spaces rather than projective planes, and call it the 'geometric protocol'. Given arbitrary @math c , k > 0 , this protocol gives an informative and @math k -safe solution to the generalized Russian cards problem for infinitely many values of @math ( a , b , c ) with @math b = O ( a c ) . This improves on the collection of parameters for which solutions are known. In particular, it is the first solution which guarantees @math k -safety when Cath has more than one card.",
"Two parties A and B select a cards and b cards from a known deck and a third party C receives the remaining c cards. We consider methods whereby A can, in a single message, publicly inform B of her hand without C learning any card held by A or by B. Conditions on a, b, c are given for the existence of an appropriate message.",
"Consider three players Alice, Bob and Cath who hold a, b and c cards, respectively, from a deck of d = a+b+c cards. The cards are all different and players only know their own cards. Suppose Alice and Bob wish to communicate their cards to each other without Cath learning whether Alice or Bob holds a specific card. Considering the cards as consecutive natural numbers 0, 1, . . . , we investigate general conditions for when Alice or Bob can safely announce the sum of the cards they hold modulo an appropriately chosen integer. We demonstrate that this holds whenever a, b > 2 and c = 1. Because Cath holds a single card, this also implies that Alice and Bob will learn the card deal from the other player’s announcement.",
"",
"Public key cryptography bases its security on mathematical problems that are computationally hard to solve. There are also cryptographic protocols wherein discovering the secret is not too complex given the current state of technology, but logically impossible. One approach involves the use of a random deck of cards for such protocols. The messages in the protocol consist of announcements made by the two players attempting to exchange a secret. In that approach one may require that these two players can communicate their holding of cards to each other without the remaining players (eavesdroppers) learning a single card in these holdings. In past work we described combinatorial axioms to achieve that. The eavesdroppers may still be able to make an educated guess about individual card occurrence (card ownership). In this work we focus on overcoming such bias. From the perspective of cryptography there are two ways to do that: either use protocols that produce announcements that are unbiased for card occurrence, or use protocols that ensure that there is no relation between patterns in the announcement, such as card occurrence, and the actual holding. We focus on the first. We devise an additional requirement for the announcement in order to eliminate the possibility of making educated guesses. To that effect we propose an additional combinatorial axiom CA4, and we give a method to design announcements that meet this requirement. Additionally, we present unbiased protocols for incidental cases.",
"We present the first formal mathematical presentation of the generalized Russian cards problem, and provide rigorous security definitions that capture both basic and extended versions of weak and perfect security notions. In the generalized Russian cards problem, three players, Alice, Bob, and Cathy, are dealt a deck of @math cards, each given @math , @math , and @math cards, respectively. The goal is for Alice and Bob to learn each other's hands via public communication, without Cathy learning the fate of any particular card. The basic idea is that Alice announces a set of possible hands she might hold, and Bob, using knowledge of his own hand, should be able to learn Alice's cards from this announcement, but Cathy should not. Using a combinatorial approach, we are able to give a nice characterization of informative strategies (i.e., strategies allowing Bob to learn Alice's hand), having optimal communication complexity, namely the set of possible hands Alice announces must be equivalent to a large set of @math -designs, where @math . We also provide some interesting necessary conditions for certain types of deals to be simultaneously informative and secure. That is, for deals satisfying @math for some @math , where @math and the strategy is assumed to satisfy a strong version of security (namely perfect @math -security), we show that @math and hence @math . We also give a precise characterization of informative and perfectly @math -secure deals of the form @math satisfying @math involving @math -designs."
]
}
|
1401.1526
|
1596450573
|
In the generalized Russian cards problem, we have a card deck @math of @math cards and three participants, Alice, Bob, and Cathy, dealt @math , @math , and @math cards, respectively. Once the cards are dealt, Alice and Bob wish to privately communicate their hands to each other via public announcements, without the advantage of a shared secret or public key infrastructure. Cathy should remain ignorant of all but her own cards after Alice and Bob have made their announcements. Notions for Cathy's ignorance in the literature range from Cathy not learning the fate of any individual card with certainty (weak @math -security) to not gaining any probabilistic advantage in guessing the fate of some set of @math cards (perfect @math -security). As we demonstrate, the generalized Russian cards problem has close ties to the field of combinatorial designs, on which we rely heavily, particularly for perfect security notions. Our main result establishes an equivalence between perfectly @math -secure strategies and @math -designs on @math points with block size @math , when announcements are chosen uniformly at random from the set of possible announcements. We also provide construction methods and example solutions, including a construction that yields perfect @math -security against Cathy when @math . We leverage a known combinatorial design to construct a strategy with @math , @math , and @math that is perfectly @math -secure. Finally, we consider a variant of the problem that yields solutions that are easy to construct and optimal with respect to both the number of announcements and level of security achieved. Moreover, this is the first method obtaining weak @math -security that allows Alice to hold an arbitrary number of cards and Cathy to hold a set of @math cards. Alternatively, the construction yields solutions for arbitrary @math , @math and any @math .
|
In addition, there has been recent work @cite_12 @cite_6 in which protocols consisting of more than one announcement by Alice and Bob are considered, which is a generalization of the problem which we consider here. Van Ditmarsch and Soler-Toscano @cite_12 show that no good announcement exists for card deals of the form @math using bounds from @cite_24 . The authors instead give an interactive protocol that requires at least three rounds of communication in order for Alice and Bob to learn each other's hands; their protocol uses combinatorial designs to determine the initial announcement by Alice and the protocol analysis is done using epistemic logic.
|
{
"cite_N": [
"@cite_24",
"@cite_6",
"@cite_12"
],
"mid": [
"165070137",
"2073060670",
""
],
"abstract": [
"Two parties A and B select a cards and b cards from a known deck and a third party C receives the remaining c cards. We consider methods whereby A can, in a single message, publicly inform B of her hand without C learning any card held by A or by B. Conditions on a, b, c are given for the existence of an appropriate message.",
"In the generalized Russian cards problem, Alice, Bob and Cath draw a, b and c cards, respectively, from a deck of size a+b+c. Alice and Bob must then communicate their entire hand to each other, without Cath learning the owner of a single card she does not hold. Unlike many traditional problems in cryptography, however, they are not allowed to encode or hide the messages they exchange from Cath. The problem is then to find methods through which they can achieve this. We propose a general four-step solution based on finite vector spaces, and call it the 'colouring protocol', as it involves colourings of lines. Our main results show that the colouring protocol may be used to solve the generalized Russian cards problem in cases where a is a power of a prime, c=O(a^2) and b=O(c^2). This improves substantially on the set of parameters for which solutions are known to exist; in particular, it had not been shown previously that the problem could be solved in cases where the eavesdropper has more cards than one of the communicating players.",
""
]
}
|
1401.1526
|
1596450573
|
In the generalized Russian cards problem, we have a card deck @math of @math cards and three participants, Alice, Bob, and Cathy, dealt @math , @math , and @math cards, respectively. Once the cards are dealt, Alice and Bob wish to privately communicate their hands to each other via public announcements, without the advantage of a shared secret or public key infrastructure. Cathy should remain ignorant of all but her own cards after Alice and Bob have made their announcements. Notions for Cathy's ignorance in the literature range from Cathy not learning the fate of any individual card with certainty (weak @math -security) to not gaining any probabilistic advantage in guessing the fate of some set of @math cards (perfect @math -security). As we demonstrate, the generalized Russian cards problem has close ties to the field of combinatorial designs, on which we rely heavily, particularly for perfect security notions. Our main result establishes an equivalence between perfectly @math -secure strategies and @math -designs on @math points with block size @math , when announcements are chosen uniformly at random from the set of possible announcements. We also provide construction methods and example solutions, including a construction that yields perfect @math -security against Cathy when @math . We leverage a known combinatorial design to construct a strategy with @math , @math , and @math that is perfectly @math -secure. Finally, we consider a variant of the problem that yields solutions that are easy to construct and optimal with respect to both the number of announcements and level of security achieved. Moreover, this is the first method obtaining weak @math -security that allows Alice to hold an arbitrary number of cards and Cathy to hold a set of @math cards. Alternatively, the construction yields solutions for arbitrary @math , @math and any @math .
|
Cordón-Franco et al. @cite_6 consider four-step solutions that achieve weak @math -security for the generalized Russian cards problem with parameters @math such that @math ; this is the first work that shows it is possible to achieve weak @math -security in cases where Cathy holds more cards than one of the other players. The authors demonstrate the existence of a necessary construction for Bob's announcement when the card deal parameters satisfy specific conditions and briefly address the feasibility of finding such constructions in practice. In particular, the authors leave the design of efficient algorithms for producing Bob's announcement as an interesting open problem.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2073060670"
],
"abstract": [
"In the generalized Russian cards problem, Alice, Bob and Cath draw a, b and c cards, respectively, from a deck of size a+b+c. Alice and Bob must then communicate their entire hand to each other, without Cath learning the owner of a single card she does not hold. Unlike many traditional problems in cryptography, however, they are not allowed to encode or hide the messages they exchange from Cath. The problem is then to find methods through which they can achieve this. We propose a general four-step solution based on finite vector spaces, and call it the 'colouring protocol', as it involves colourings of lines. Our main results show that the colouring protocol may be used to solve the generalized Russian cards problem in cases where a is a power of a prime, c=O(a^2) and b=O(c^2). This improves substantially on the set of parameters for which solutions are known to exist; in particular, it had not been shown previously that the problem could be solved in cases where the eavesdropper has more cards than one of the communicating players."
]
}
|
1401.1526
|
1596450573
|
In the generalized Russian cards problem, we have a card deck @math of @math cards and three participants, Alice, Bob, and Cathy, dealt @math , @math , and @math cards, respectively. Once the cards are dealt, Alice and Bob wish to privately communicate their hands to each other via public announcements, without the advantage of a shared secret or public key infrastructure. Cathy should remain ignorant of all but her own cards after Alice and Bob have made their announcements. Notions for Cathy's ignorance in the literature range from Cathy not learning the fate of any individual card with certainty (weak @math -security) to not gaining any probabilistic advantage in guessing the fate of some set of @math cards (perfect @math -security). As we demonstrate, the generalized Russian cards problem has close ties to the field of combinatorial designs, on which we rely heavily, particularly for perfect security notions. Our main result establishes an equivalence between perfectly @math -secure strategies and @math -designs on @math points with block size @math , when announcements are chosen uniformly at random from the set of possible announcements. We also provide construction methods and example solutions, including a construction that yields perfect @math -security against Cathy when @math . We leverage a known combinatorial design to construct a strategy with @math , @math , and @math that is perfectly @math -secure. Finally, we consider a variant of the problem that yields solutions that are easy to construct and optimal with respect to both the number of announcements and level of security achieved. Moreover, this is the first method obtaining weak @math -security that allows Alice to hold an arbitrary number of cards and Cathy to hold a set of @math cards. Alternatively, the construction yields solutions for arbitrary @math , @math and any @math .
|
In this paper, we build extensively on results by Swanson and Stinson @cite_25 . In particular, we greatly simplify the proofs for results connecting certain types of perfectly @math -secure deals and Steiner systems, originally shown in Swanson and Stinson @cite_25 . The construction technique using a "starting design", given in Theorem , is a generalization of the technique given by Swanson and Stinson @cite_25 . This generalized construction technique allows us to answer in the affirmative the question on the existence of perfectly secure and informative strategies for deals in which Cathy holds more than one card.
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2141368734"
],
"abstract": [
"We present the first formal mathematical presentation of the generalized Russian cards problem, and provide rigorous security definitions that capture both basic and extended versions of weak and perfect security notions. In the generalized Russian cards problem, three players, Alice, Bob, and Cathy, are dealt a deck of @math cards, each given @math , @math , and @math cards, respectively. The goal is for Alice and Bob to learn each other's hands via public communication, without Cathy learning the fate of any particular card. The basic idea is that Alice announces a set of possible hands she might hold, and Bob, using knowledge of his own hand, should be able to learn Alice's cards from this announcement, but Cathy should not. Using a combinatorial approach, we are able to give a nice characterization of informative strategies (i.e., strategies allowing Bob to learn Alice's hand), having optimal communication complexity, namely the set of possible hands Alice announces must be equivalent to a large set of @math -designs, where @math . We also provide some interesting necessary conditions for certain types of deals to be simultaneously informative and secure. That is, for deals satisfying @math for some @math , where @math and the strategy is assumed to satisfy a strong version of security (namely perfect @math -security), we show that @math and hence @math . We also give a precise characterization of informative and perfectly @math -secure deals of the form @math satisfying @math involving @math -designs."
]
}
|
1401.1526
|
1596450573
|
In the generalized Russian cards problem, we have a card deck @math of @math cards and three participants, Alice, Bob, and Cathy, dealt @math , @math , and @math cards, respectively. Once the cards are dealt, Alice and Bob wish to privately communicate their hands to each other via public announcements, without the advantage of a shared secret or public key infrastructure. Cathy should remain ignorant of all but her own cards after Alice and Bob have made their announcements. Notions for Cathy's ignorance in the literature range from Cathy not learning the fate of any individual card with certainty (weak @math -security) to not gaining any probabilistic advantage in guessing the fate of some set of @math cards (perfect @math -security). As we demonstrate, the generalized Russian cards problem has close ties to the field of combinatorial designs, on which we rely heavily, particularly for perfect security notions. Our main result establishes an equivalence between perfectly @math -secure strategies and @math -designs on @math points with block size @math , when announcements are chosen uniformly at random from the set of possible announcements. We also provide construction methods and example solutions, including a construction that yields perfect @math -security against Cathy when @math . We leverage a known combinatorial design to construct a strategy with @math , @math , and @math that is perfectly @math -secure. Finally, we consider a variant of the problem that yields solutions that are easy to construct and optimal with respect to both the number of announcements and level of security achieved. Moreover, this is the first method obtaining weak @math -security that allows Alice to hold an arbitrary number of cards and Cathy to hold a set of @math cards. Alternatively, the construction yields solutions for arbitrary @math , @math and any @math .
|
Cordón-Franco et al. @cite_3 further elaborate on protocols of length two and the notion of weak @math -security. The authors present a geometric protocol, discussed in , based on hyperplanes, that yields informative and weakly @math -secure equitable @math -strategies for appropriate parameters. In particular, this protocol allows Cathy to hold more than one card. In certain card deals, this protocol achieves perfect @math -security for @math equal to one or two. We remark that with the exception of , our results were completed independently of Cordón-Franco et al. @cite_3 .
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2010404638"
],
"abstract": [
"In the generalized Russian cards problem, the three players Alice, Bob and Cath draw @math a , b and @math c cards, respectively, from a deck of @math a + b + c cards. Players only know their own cards and what the deck of cards is. Alice and Bob are then required to communicate their hand of cards to each other by way of public messages. For a natural number @math k , the communication is said to be @math k -safe if Cath does not learn whether or not Alice holds any given set of at most @math k cards that are not Cath's, a notion originally introduced as weak @math k -security by Swanson and Stinson. An elegant solution by Atkinson views the cards as points in a finite projective plane. We propose a general solution in the spirit of Atkinson's, although based on finite vector spaces rather than projective planes, and call it the 'geometric protocol'. Given arbitrary @math c , k > 0 , this protocol gives an informative and @math k -safe solution to the generalized Russian cards problem for infinitely many values of @math ( a , b , c ) with @math b = O ( a c ) . This improves on the collection of parameters for which solutions are known. In particular, it is the first solution which guarantees @math k -safety when Cath has more than one card."
]
}
|
1401.0997
|
2950728798
|
RPL, the routing protocol proposed by IETF for IPv6 6LoWPAN Low Power and Lossy Networks has significant complexity. Another protocol called LOADng, a lightweight variant of AODV, emerges as an alternative solution. In this paper, we compare the performance of the two protocols in a Home Automation scenario with heterogenous traffic patterns including a mix of multipoint-to-point and point-to-multipoint routes in realistic dense non-uniform network topologies. We use Contiki OS and Cooja simulator to evaluate the behavior of the ContikiRPL implementation and a basic non-optimized implementation of LOADng. Unlike previous studies, our results show that RPL provides shorter delays, less control overhead, and requires less memory than LOADng. Nevertheless, enhancing LOADng with more efficient flooding and a better route storage algorithm may improve its performance.
|
presented a comparative performance study of RPL and LOADng in the case of bidirectional traffic using simulations in NS2 @cite_5 . The paper shows a significantly larger control overhead of RPL caused by the maintenance of downward routes. It also compares the two protocols with an ideal routing protocol to show that RPL provides near-optimal routes while LOADng results in a certain gap. As the RPL RFC does not specify the period or the mechanism to use for maintaining downward routes, the study assumed an interval of 15 seconds. This choice is questionable because it is the main cause of the high control overhead of RPL. Furthermore, the application-layer scenario used for the comparison is not the same for the two protocols. Thus, the question remains how LOADng and RPL perform under the same scenario.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2020854838"
],
"abstract": [
"Routing protocols for sensor networks are often designed with explicit assumptions, serving to simplify design and reduce the necessary energy, processing and communications requirements. Different protocols make different assumptions - and this paper considers those made by the designers of RPL - an IPv6 routing protocol for such networks, developed within the IETF. Specific attention is given to the predominance of bi-directional traffic flows in a large class of sensor networks, and this paper therefore studies the performance of RPL for such flows. As a point of comparison, a different protocol, called LOAD, is also studied. LOAD is derived from AODV and supports more general kinds of traffic flows. The results of this investigation reveal that for scenarios where bi-directional traffic flows are predominant, LOAD provides similar data delivery ratios as RPL, while incurring less overhead and being simultaneously less constrained in the types of topologies supported."
]
}
|
1401.1183
|
2078337493
|
Several tensor eigenpair definitions have been put forth in the past decade, but these can all be unified under the generalized tensor eigenpair framework introduced by Chang, Pearson, and Zhang [J. Math. Anal. Appl., 350 (2009), pp. 416--422]. Given mth-order, n-dimensional real-valued symmetric tensors @math and @math , the goal is to find @math and @math such that @math . Different choices for @math yield different versions of the tensor eigenvalue problem. We present our generalized eigenproblem adaptive power (GEAP) method for solving the problem, which is an extension of the shifted symmetric higher-order power method (SS-HOPM) for finding Z-eigenpairs. A major drawback of SS-HOPM is that its performance depends on choosing an appropriate shift; our GEAP method includes an adaptive method for choosing the shift automatically.
|
Like its predecessor SS-HOPM @cite_11 , the GEAP method has the desirable qualities of guaranteed convergence and simple implementation. Additionally, the adaptive choice of @math in GEAP (as opposed to SS-HOPM) means that there are no parameters for the user to specify.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2070028074"
],
"abstract": [
"Recent work on eigenvalues and eigenvectors for tensors of order @math has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form @math subject to @math , which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs."
]
}
|
1401.1183
|
2078337493
|
Several tensor eigenpair definitions have been put forth in the past decade, but these can all be unified under the generalized tensor eigenpair framework introduced by Chang, Pearson, and Zhang [J. Math. Anal. Appl., 350 (2009), pp. 416--422]. Given mth-order, n-dimensional real-valued symmetric tensors @math and @math , the goal is to find @math and @math such that @math . Different choices for @math yield different versions of the tensor eigenvalue problem. We present our generalized eigenproblem adaptive power (GEAP) method for solving the problem, which is an extension of the shifted symmetric higher-order power method (SS-HOPM) for finding Z-eigenpairs. A major drawback of SS-HOPM is that its performance depends on choosing an appropriate shift; our GEAP method includes an adaptive method for choosing the shift automatically.
|
Han @cite_9 proposed an unconstrained variational principle for finding generalized eigenpairs. In the general case, the function to be optimized is The @math has the same meaning as for GEAP: choosing @math finds local maxima and @math finds local minima. For comparison, the final solution is rescaled as @math , and then we calculate @math (since @math ).
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2007036852"
],
"abstract": [
"Let @math be a positive integer and @math be a positive even integer. Let @math be an @math order @math -dimensional real weakly symmetric tensor and @math be a real weakly symmetric positive definite tensor of the same size. @math is called a @math -eigenvalue of @math if @math for some @math . In this paper, we introduce two unconstrained optimization problems and obtain some variational characterizations for the minimum and maximum @math --eigenvalues of @math . Our results extend Auchmuty's unconstrained variational principles for eigenvalues of real symmetric matrices. This unconstrained optimization approach can be used to find a Z-, H-, or D-eigenvalue of an even order weakly symmetric tensor. We provide some numerical results to illustrate the effectiveness of this approach for finding a Z-eigenvalue and for determining the positive semidefiniteness of an even order symmetric tensor."
]
}
|
1401.1183
|
2078337493
|
Several tensor eigenpair definitions have been put forth in the past decade, but these can all be unified under the generalized tensor eigenpair framework introduced by Chang, Pearson, and Zhang [J. Math. Anal. Appl., 350 (2009), pp. 416--422]. Given mth-order, n-dimensional real-valued symmetric tensors @math and @math , the goal is to find @math and @math such that @math . Different choices for @math yield different versions of the tensor eigenvalue problem. We present our generalized eigenproblem adaptive power (GEAP) method for solving the problem, which is an extension of the shifted symmetric higher-order power method (SS-HOPM) for finding Z-eigenpairs. A major drawback of SS-HOPM is that its performance depends on choosing an appropriate shift; our GEAP method includes an adaptive method for choosing the shift automatically.
|
In general, Han's method represents an alternative approach to solving the generalized tensor eigenpair problem. In @cite_9 , Han's method was compared to SS-HOPM with a fixed shift (for Z-eigenpairs only) and was found to be superior. However, GEAP is usually as fast as Han's method and perhaps a little more robust in its convergence behavior; the comparable speed is due to the adaptive shift in GEAP. It may be that Han's method could avoid converging to incorrect solutions with tighter tolerances, but then it would be slower.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2007036852"
],
"abstract": [
"Let @math be a positive integer and @math be a positive even integer. Let @math be an @math order @math -dimensional real weakly symmetric tensor and @math be a real weakly symmetric positive definite tensor of the same size. @math is called a @math -eigenvalue of @math if @math for some @math . In this paper, we introduce two unconstrained optimization problems and obtain some variational characterizations for the minimum and maximum @math --eigenvalues of @math . Our results extend Auchmuty's unconstrained variational principles for eigenvalues of real symmetric matrices. This unconstrained optimization approach can be used to find a Z-, H-, or D-eigenvalue of an even order weakly symmetric tensor. We provide some numerical results to illustrate the effectiveness of this approach for finding a Z-eigenvalue and for determining the positive semidefiniteness of an even order symmetric tensor."
]
}
|
1401.1183
|
2078337493
|
Several tensor eigenpair definitions have been put forth in the past decade, but these can all be unified under the generalized tensor eigenpair framework introduced by Chang, Pearson, and Zhang [J. Math. Anal. Appl., 350 (2009), pp. 416--422]. Given mth-order, n-dimensional real-valued symmetric tensors @math and @math , the goal is to find @math and @math such that @math . Different choices for @math yield different versions of the tensor eigenvalue problem. We present our generalized eigenproblem adaptive power (GEAP) method for solving the problem, which is an extension of the shifted symmetric higher-order power method (SS-HOPM) for finding Z-eigenpairs. A major drawback of SS-HOPM is that its performance depends on choosing an appropriate shift; our GEAP method includes an adaptive method for choosing the shift automatically.
|
In terms of methods specifically geared to tensor eigenvalues, most work has focused on computing the largest H-eigenvalue for a tensor: @cite_8 @cite_4 . The method of Liu, Zhou, and Ibrahim @cite_4 is guaranteed to always find the largest eigenvalue and also uses a "shift" approach.
|
{
"cite_N": [
"@cite_4",
"@cite_8"
],
"mid": [
"2076404366",
"2024266080"
],
"abstract": [
"In this paper we propose an iterative method to calculate the largest eigenvalue of a nonnegative tensor. We prove this method converges for any irreducible nonnegative tensor. We also apply this method to study the positive definiteness of a multivariate form.",
"In this paper we propose an iterative method for calculating the largest eigenvalue of an irreducible nonnegative tensor. This method is an extension of a method of Collatz (1942) for calculating the spectral radius of an irreducible nonnegative matrix. Numerical results show that our proposed method is promising. We also apply the method to studying higher-order Markov chains."
]
}
|
1401.0818
|
2952913170
|
In this study, we consider the selective combining in hybrid cooperative networks (SCHCNs scheme) with one source node, one destination node and @math relay nodes. In the SCHCN scheme, each relay first adaptively chooses between amplify-and-forward protocol and decode-and-forward protocol on a per frame basis by examining the error-detecting code result, and @math ( @math ) relays will be selected to forward their received signals to the destination. We first develop a signal-to-noise ratio (SNR) threshold-based frame error rate (FER) approximation model. Then, the theoretical FER expressions for the SCHCN scheme are derived by utilizing the proposed SNR threshold-based FER approximation model. The analytical FER expressions are validated through simulation results.
|
In @cite_23 , the SNR threshold-based FER approximation model is applied to iteratively decoded systems with turbo codes. It is shown that the optimal SNR threshold coincides with the convergence threshold of the iterative turbo decoder. Furthermore, the SNR threshold-based FER model is extended to non-iterative coded and uncoded systems in @cite_18 @cite_33 . The is adopted in @cite_18 @cite_33 to minimize the sum of absolute error, where @math denotes the average SNR, and the SNR threshold is given as Since the does not consider the fact that the FER decreases more quickly in the high-SNR region in high diversity order systems, it needs to be improved for general diversity order systems.
|
{
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_23"
],
"mid": [
"2950594532",
"2077189847",
"2103374230"
],
"abstract": [
"It is known that the frame error rate of turbo codes on quasi-static fading channels can be accurately approximated using the convergence threshold of the corresponding iterative decoder. This paper considers quasi-static fading channels and demonstrates that non-iterative schemes can also be characterized by a similar threshold based on which their frame error rate can be readily estimated. In particular, we show that this threshold is a function of the probability of successful frame detection in additive white Gaussian noise, normalized by the squared instantaneous signal-to-noise ratio. We apply our approach to uncoded binary phase shift keying, convolutional coding and turbo coding and demonstrate that the approximated frame error rate is within 0.4 dB of the simulation results. Finally, we introduce performance evaluation plots to explore the impact of the frame size on the performance of the schemes under investigation.",
"Proper selection of a signal-to-noise ratio threshold largely determines the tightness of an approximation to the frame error rate of a system over a quasistatic fading channel. It is demonstrated that the expression for the optimal threshold value, which has been established for single-input single-output (SISO) channels, remains unchanged for the general case of multiple-input multiple-output (MIMO) channels.",
"We introduce a simple technique for analyzing the iterative decoder that is broadly applicable to different classes of codes defined over graphs in certain fading as well as additive white Gaussian noise (AWGN) channels. The technique is based on the observation that the extrinsic information from constituent maximum a posteriori (MAP) decoders is well approximated by Gaussian random variables when the inputs to the decoders are Gaussian. The independent Gaussian model implies the existence of an iterative decoder threshold that statistically characterizes the convergence of the iterative decoder. Specifically, the iterative decoder converges to zero probability of error as the number of iterations increases if and only if the channel E sub b N sub 0 exceeds the threshold. Despite the idealization of the model and the simplicity of the analysis technique, the predicted threshold values are in excellent agreement with the waterfall regions observed experimentally in the literature when the codeword lengths are large. Examples are given for parallel concatenated convolutional codes, serially concatenated convolutional codes, and the generalized low-density parity-check (LDPC) codes of Gallager and Cheng-McEliece (1996). Convergence-based design of asymmetric parallel concatenated convolutional codes (PCCC) is also discussed."
]
}
|
1401.1031
|
1954382668
|
Constraints have played an important role in the construction of GUIs, where they are mainly used to define the layout of the widgets. Resizing behavior is very important in GUIs because areas have domain-specific parameters, such as for the resizing of windows. If a linear objective function is used and the window is resized, the error is not distributed equally. To distribute the error equally, a quadratic objective function is introduced. Different algorithms are widely used for solving linear constraints and quadratic problems in a variety of different scientific areas. The linear relaxation, Kaczmarz, direct, and linear programming methods are common methods for solving linear constraints for GUI layout. The interior point and active set methods are the most commonly used techniques for solving quadratic programming problems. Current constraint solvers designed for GUI layout do not use interior point methods for solving a quadratic objective function subject to linear equality and inequality constraints. In this paper, the performance and convergence speed of interior point and active set methods are compared, along with one of the most commonly used linear programming methods, when implemented for graphical user interface layout. The performance and convergence of the proposed algorithms are evaluated empirically using randomly generated UI layout specifications of various sizes. The results show that the interior point algorithms perform significantly better than the Simplex method and the QOCA solver, which uses an active set method implementation for solving quadratic optimization.
|
Most of the research related to GUI layout deals with various algorithms for solving constraint hierarchies. Research related to constraint-based UI layout has produced results in the form of tools @cite_7 @cite_19 and algorithms @cite_10 @cite_5 for specific tasks. The latest work @cite_0 on constraint-based GUIs uses a quadratic solving strategy, which the authors find better than linear solving strategies. They @cite_0 implemented the active set method for solving a quadratic objective function subject to linear constraints. Baraff @cite_14 presents a quadratic optimization algorithm for solving linear constraints in modelling physical systems. QOCA @cite_5 uses the active set algorithm for solving the quadratic programming problem for graphical user interface layout.
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_10"
],
"mid": [
"2234650683",
"1993924106",
"",
"",
"2016321606",
"2073536284"
],
"abstract": [
"A new algorithm for computing contact forces between solid objects with friction is presented. The algorithm allows a mix of contact points with static and dynamic friction. In contrast to previous approaches, the problem of computing contact forces is not transformed into an optimization problem. Because of this, the need for sophisticated optimization software packages is eliminated. For both systems with and without friction, the algorithm has proven to be considerably faster, simple, and more reliable than previous approaches to the problem. In particular, implementation of the algorithm by nonspecialists in numerical programming is quite feasible.",
"We propose a scalable algorithm called HiRise2 for incrementally solving soft linear constraints over real domains. It is based on a framework for soft constraints, known as constraint hierarchies, to allow effective modeling of user interface applications by using hierarchical preferences for constraints. HiRise2 introduces LU decompositions to improve the scalability of an incremental simplex method. Using this algorithm, we implemented a constraint solver. We also show the results of experiments on the performance of the solver.",
"",
"",
"Linear equality and inequality constraints arise naturally in specifying many aspects of user interfaces, such as requiring that one window be to the left of another, requiring that a pane occupy the leftmost 1/3 of a window, or preferring that an object be contained within a rectangle if possible. Current constraint solvers designed for UI applications cannot efficiently handle simultaneous linear equations and inequalities. This is a major limitation. We describe incremental algorithms based on the dual simplex and active set methods that can solve such systems of constraints efficiently.",
"Linear equality and inequality constraints arise naturally in specifying many aspects of user interfaces, such as requiring that one window be to the left of another, requiring that a pane occupy the leftmost third of a window, or preferring that an object be contained within a rectangle if possible. Previous constraint solvers designed for user interface applications cannot handle simultaneous linear equations and inequalities efficiently. This is a major limitation, as such systems of constraints arise often in natural declarative specifications. We describe Cassowary---an incremental algorithm based on the dual simplex method, which can solve such systems of constraints efficiently. We have implemented the algorithm as part of a constraint-solving toolkit. We discuss the implementation of the toolkit, its application programming interface, and its performance."
]
}
|
1401.1031
|
1954382668
|
Constraints have played an important role in the construction of GUIs, where they are mainly used to define the layout of the widgets. Resizing behavior is very important in GUIs because areas have domain-specific parameters, such as for the resizing of windows. If a linear objective function is used and the window is resized, the error is not distributed equally. To distribute the error equally, a quadratic objective function is introduced. Different algorithms are widely used for solving linear constraints and quadratic problems in a variety of different scientific areas. The linear relaxation, Kaczmarz, direct, and linear programming methods are common methods for solving linear constraints for GUI layout. The interior point and active set methods are the most commonly used techniques for solving quadratic programming problems. Current constraint solvers designed for GUI layout do not use interior point methods for solving a quadratic objective function subject to linear equality and inequality constraints. In this paper, the performance and convergence speed of interior point and active set methods are compared, along with one of the most commonly used linear programming methods, when implemented for graphical user interface layout. The performance and convergence of the proposed algorithms are evaluated empirically using randomly generated UI layout specifications of various sizes. The results show that the interior point algorithms perform significantly better than the Simplex method and the QOCA solver, which uses an active set method implementation for solving quadratic optimization.
|
All constraint solvers for UI layout have to support over-constrained systems. There are two approaches: weighted constraints and constraint hierarchies. Weighted constraints are typically used with direct methods, while constraint hierarchies are used with linear programming. Examples of direct methods for soft constraints are HiRise and HiRise2 @cite_7 . Many UI layout solvers are based on linear programming and support soft constraints using slack variables in the objective function @cite_10 @cite_5 @cite_2 .
|
{
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_7",
"@cite_2"
],
"mid": [
"2016321606",
"2073536284",
"1993924106",
""
],
"abstract": [
"Linear equality and inequality constraints arise naturally in specifying many aspects of user interfaces, such as requiring that one window be to the left of another, requiring that a pane occupy the leftmost 1/3 of a window, or preferring that an object be contained within a rectangle if possible. Current constraint solvers designed for UI applications cannot efficiently handle simultaneous linear equations and inequalities. This is a major limitation. We describe incremental algorithms based on the dual simplex and active set methods that can solve such systems of constraints efficiently.",
"Linear equality and inequality constraints arise naturally in specifying many aspects of user interfaces, such as requiring that one window be to the left of another, requiring that a pane occupy the leftmost third of a window, or preferring that an object be contained within a rectangle if possible. Previous constraint solvers designed for user interface applications cannot handle simultaneous linear equations and inequalities efficiently. This is a major limitation, as such systems of constraints arise often in natural declarative specifications. We describe Cassowary---an incremental algorithm based on the dual simplex method, which can solve such systems of constraints efficiently. We have implemented the algorithm as part of a constraint-solving toolkit. We discuss the implementation of the toolkit, its application programming interface, and its performance.",
"We propose a scalable algorithm called HiRise2 for incrementally solving soft linear constraints over real domains. It is based on a framework for soft constraints, known as constraint hierarchies, to allow effective modeling of user interface applications by using hierarchical preferences for constraints. HiRise2 introduces LU decompositions to improve the scalability of an incremental simplex method. Using this algorithm, we implemented a constraint solver. We also show the results of experiments on the performance of the solver.",
""
]
}
|
1401.1031
|
1954382668
|
Constraints have played an important role in the construction of GUIs, where they are mainly used to define the layout of the widgets. Resizing behavior is very important in GUIs because areas have domain-specific parameters, such as for the resizing of windows. If a linear objective function is used and the window is resized, the error is not distributed equally. To distribute the error equally, a quadratic objective function is introduced. Different algorithms are widely used for solving linear constraints and quadratic problems in a variety of different scientific areas. The linear relaxation, Kaczmarz, direct, and linear programming methods are common methods for solving linear constraints for GUI layout. The interior point and active set methods are the most commonly used techniques for solving quadratic programming problems. Current constraint solvers designed for GUI layout do not use interior point methods for solving a quadratic objective function subject to linear equality and inequality constraints. In this paper, the performance and convergence speed of interior point and active set methods are compared, along with one of the most commonly used linear programming methods, when implemented for graphical user interface layout. The performance and convergence of the proposed algorithms are evaluated empirically using randomly generated UI layout specifications of various sizes. The results show that the interior point algorithms perform significantly better than the Simplex method and the QOCA solver, which uses an active set method implementation for solving quadratic optimization.
|
Many different local propagation algorithms have been proposed for solving constraint hierarchies in UI layout. The DeltaBlue @cite_3 , SkyBlue @cite_12 and Detail @cite_16 algorithms are examples in this category. The DeltaBlue and SkyBlue algorithms cannot handle simultaneous constraints that depend on each other. However, the Detail algorithm can solve constraints simultaneously based on local propagation. All of the methods to handle soft constraints utilized in these solvers are designed to work with direct methods, so they inherit the problems direct methods usually have with sparse matrices.
|
{
"cite_N": [
"@cite_16",
"@cite_12",
"@cite_3"
],
"mid": [
"",
"1992337747",
"2053637323"
],
"abstract": [
"",
"Many user interface toolkits use constraint solvers to maintain geometric relationships between graphic objects, or to connect the graphics to the application data structures. One efficient and flexible technique for maintaining constraints is multi-way local propagation, where constraints are represented by sets of method procedures. To satisfy a set of constraints, a local propagation solver executes one method from each constraint. SkyBlue is an incremental constraint solver that uses local propagation to maintain a set of constraints as individual constraints are added and removed. If all of the constraints cannot be satisfied, SkyBlue leaves weaker constraints unsatisfied in order to satisfy stronger constraints (maintaining a constraint hierarchy). SkyBlue is a more general successor to the DeltaBlue algorithm that satisfies cycles of methods by calling external cycle solvers and supports multi-output methods. These features make SkyBlue more useful for constructing user interfaces, since cycles of constraints can occur frequently in user interface applications and multi-output methods are necessary to represent some useful constraints. This paper discusses some of applications that use SkyBlue, presents times for some user interface benchmarks and describes the SkyBlue algorithm in detail.",
"An incremental constraint solver, the DeltaBlue algorithm maintains an evolving solution to the constraint hierarchy as constraints are added and removed. DeltaBlue minimizes the cost of finding a new solution after each change by exploiting its knowledge of the last solution."
]
}
|
1401.0767
|
1542161432
|
Ensemble methods such as boosting combine multiple learners to obtain better prediction than could be obtained from any individual learner. Here we propose a principled framework for directly constructing ensemble learning methods from kernel methods. Unlike previous studies showing the equivalence between boosting and support vector machines (SVMs), which need a translation procedure, we show that it is possible to design a boosting-like procedure to solve the SVM optimization problems. In other words, it is possible to design ensemble methods directly from SVM without any intermediate procedure. This finding not only enables us to design new ensemble learning methods directly from kernel methods, but also makes it possible to take advantage of highly optimized fast linear SVM solvers for ensemble learning. We exemplify this framework by designing a binary ensemble learning method as well as a new multi-class ensemble learning method. Experimental results demonstrate the flexibility and usefulness of the proposed framework.
|
The general connection between SVM and boosting has been discussed by a few researchers @cite_1 @cite_2 at a high level. To our knowledge, this work is the first that attempts to build ensemble models by solving SVM's optimization problem. We review the most closely related work next. Boosting has been extensively studied in the past decade @cite_21 @cite_1 @cite_6 @cite_24 @cite_17 . Our methods are close to @cite_24 @cite_17 in that we also use column generation (CG) to select weak learners and fully-correctively update the weak learners' coefficients. Because we are solving the SVM problem, instead of the @math regularized boosting problem, conventional CG cannot be directly applied. We use CG in a novel way: instead of looking at dual constraints, we rely on the KKT conditions.
|
{
"cite_N": [
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_24",
"@cite_2",
"@cite_17"
],
"mid": [
"1988790447",
"1975846642",
"2103000819",
"1578080815",
"2143386126",
"2125607229"
],
"abstract": [
"In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in R^n. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.",
"One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.",
"Boosting combines a set of moderately accurate weak classifiers to form a highly accurate predictor. Compared with binary boosting classification, multi-class boosting received less attention. We propose a novel multi-class boosting formulation here. Unlike most previous multi-class boosting algorithms which decompose a multi-boost problem into multiple independent binary boosting problems, we formulate a direct optimization method for training multi-class boosting. Moreover, by explicitly deriving the Lagrange dual of the formulated primal optimization problem, we design totally-corrective boosting using the column generation technique in convex optimization. At each iteration, all weak classifiers' weights are updated. Our experiments on various data sets demonstrate that our direct multi-class boosting achieves competitive test accuracy compared with state-of-the-art multi-class boosting in the literature.",
"We examine linear program (LP) approaches to boosting and demonstrate their efficient solution using LPBoost, a column generation based simplex method. We formulate the problem as if all possible weak hypotheses had already been generated. The labels produced by the weak hypotheses become the new feature space of the problem. The boosting task becomes to construct a learning function in the label space that minimizes misclassification error and maximizes the soft margin. We prove that for classification, minimizing the 1-norm soft margin error function directly optimizes a generalization error bound. The equivalent linear program can be efficiently solved using column generation techniques developed for large-scale optimization problems. The resulting LPBoost algorithm can be used to solve any LP boosting formulation by iteratively optimizing the dual misclassification costs in a restricted LP and dynamically generating weak hypotheses to make new LP columns. We provide algorithms for soft margin classification, confidence-rated, and regression boosting problems. Unlike gradient boosting algorithms, which may converge in the limit only, LPBoost converges in a finite number of iterations to a global solution satisfying mathematically well-defined optimality conditions. The optimal solutions of LPBoost are very sparse in contrast with gradient based methods. Computationally, LPBoost is competitive in quality and computational cost to AdaBoost.",
"We show via an equivalence of mathematical programs that a support vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm: one-class leveraging, starting from the one-class support vector machine (1-SVM). This is a first step toward unsupervised learning in a boosting framework. Building on so-called barrier methods known from the theory of constrained optimization, it returns a function, written as a convex combination of base hypotheses, that characterizes whether a given test point is likely to have been generated from the distribution underlying the training data. Simulations on one-class classification problems demonstrate the usefulness of our approach.",
"We study boosting algorithms from a new perspective. We show that the Lagrange dual problems of l1-norm-regularized AdaBoost, LogitBoost, and soft-margin LPBoost with generalized hinge loss are all entropy maximization problems. By looking at the dual problems of these boosting algorithms, we show that the success of boosting algorithms can be understood in terms of maintaining a better margin distribution by maximizing margins and at the same time controlling the margin variance. We also theoretically prove that approximately, l1-norm-regularized AdaBoost maximizes the average margin, instead of the minimum margin. The duality formulation also enables us to develop column-generation-based optimization algorithms, which are totally corrective. We show that they exhibit almost identical classification results to that of standard stagewise additive boosting algorithms but with much faster convergence rates. Therefore, fewer weak classifiers are needed to build the ensemble using our proposed optimization technique."
]
}
|
1401.0767
|
1542161432
|
Ensemble methods such as boosting combine multiple learners to obtain better prediction than could be obtained from any individual learner. Here we propose a principled framework for directly constructing ensemble learning methods from kernel methods. Unlike previous studies showing the equivalence between boosting and support vector machines (SVMs), which need a translation procedure, we show that it is possible to design a boosting-like procedure to solve the SVM optimization problems. In other words, it is possible to design ensemble methods directly from SVM without any intermediate procedure. This finding not only enables us to design new ensemble learning methods directly from kernel methods, but also makes it possible to take advantage of those highly-optimized fast linear SVM solvers for ensemble learning. We exemplify this framework for designing binary ensemble learning as well as new multi-class ensemble learning methods. Experimental results demonstrate the flexibility and usefulness of the proposed framework.
|
If one uses infinitely many weak learners in boosting @cite_15 (or hidden units in neural networks @cite_16 ), the model is equivalent to SVM with a certain kernel. In particular, when the feature mapping function @math contains infinitely many randomly distributed decision stumps, the kernel function @math is the stump kernel of the form @math . Here @math is a constant, which has no impact on the SVM training. Moreover, when @math is instead a perceptron, the corresponding kernel is called the perceptron kernel @math .
|
{
"cite_N": [
"@cite_15",
"@cite_16"
],
"mid": [
"2110612061",
"1493832124"
],
"abstract": [
"Ensemble learning algorithms such as boosting can achieve better performance by averaging over the predictions of some base hypotheses. Nevertheless, most existing algorithms are limited to combining only a finite number of hypotheses, and the generated ensemble is usually sparse. Thus, it is not clear whether we should construct an ensemble classifier with a larger or even an infinite number of hypotheses. In addition, constructing an infinite ensemble itself is a challenging task. In this paper, we formulate an infinite ensemble learning framework based on the support vector machine (SVM). The framework can output an infinite and nonsparse ensemble through embedding infinitely many hypotheses into an SVM kernel. We use the framework to derive two novel kernels, the stump kernel and the perceptron kernel. The stump kernel embodies infinitely many decision stumps, and the perceptron kernel embodies infinitely many perceptrons. We also show that the Laplacian radial basis function kernel embodies infinitely many decision trees, and can thus be explained through infinite ensemble learning. Experimental results show that SVM with these kernels is superior to boosting with the same base hypothesis set. In addition, SVM with the stump kernel or the perceptron kernel performs similarly to SVM with the Gaussian radial basis function kernel, but enjoys the benefit of faster parameter selection. These properties make the novel kernels favorable choices in practice.",
"This article extends neural networks to the case of an uncountable number of hidden units, in several ways. In the first approach proposed, a finite parametrization is possible, allowing gradient-based learning. While having the same number of parameters as an ordinary neural network, its internal structure suggests that it can represent some smooth functions much more compactly. Under mild assumptions, we also find better error bounds than with ordinary neural networks. Furthermore, this parametrization may help reducing the problem of saturation of the neurons. In a second approach, the input-to-hidden weights are fully nonparametric, yielding a kernel machine for which we demonstrate a simple kernel formula. Interestingly, the resulting kernel machine can be made hyperparameter-free and still generalizes in spite of an absence of explicit regularization."
]
}
|
1401.0767
|
1542161432
|
Ensemble methods such as boosting combine multiple learners to obtain better prediction than could be obtained from any individual learner. Here we propose a principled framework for directly constructing ensemble learning methods from kernel methods. Unlike previous studies showing the equivalence between boosting and support vector machines (SVMs), which need a translation procedure, we show that it is possible to design a boosting-like procedure to solve the SVM optimization problems. In other words, it is possible to design ensemble methods directly from SVM without any intermediate procedure. This finding not only enables us to design new ensemble learning methods directly from kernel methods, but also makes it possible to take advantage of those highly-optimized fast linear SVM solvers for ensemble learning. We exemplify this framework for designing binary ensemble learning as well as new multi-class ensemble learning methods. Experimental results demonstrate the flexibility and usefulness of the proposed framework.
|
Loosely speaking, boosting can be seen as explicitly computing the kernel mapping functions because, as pointed out in @cite_2 , a kernel constructed from the inner product of weak learners' outputs satisfies Mercer's condition. Random Fourier features (RFF) @cite_9 have been applied to large-scale kernel methods. RFF exploits the fact that a shift-invariant kernel is the Fourier transform of a non-negative measure. It has been shown that RFF does not perform well when there is a large gap in the eigen-spectrum of the kernel matrix, due to its data-independent sampling strategy @cite_0 . In @cite_14 @cite_11 , it is shown that for homogeneous additive kernels, the kernel mapping function can be computed exactly. When RFF is used as weak learners in the proposed framework here, the greedy CG-based RFF selection can be viewed as data-dependent feature selection. Indeed, our experiments demonstrate that our method performs much better than random sampling.
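The RFF construction discussed above can be sketched in a few lines. This is a minimal, illustrative implementation for the Gaussian (RBF) kernel only; the function name `rff_features` and the parameters `D` (number of features) and `sigma` (bandwidth) are chosen here for exposition and are not from the cited work. It makes the data-independent sampling explicit: frequencies are drawn from the kernel's Fourier transform without looking at the training data.

```python
import numpy as np

def rff_features(X, D=500, sigma=1.0, seed=0):
    """Random Fourier features z(x) so that z(x)^T z(y) ~ exp(-||x-y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies sampled from the (Gaussian) Fourier transform of the RBF kernel,
    # independently of the data -- the data-independent strategy noted in the text.
    W = rng.normal(scale=1.0 / sigma, size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Check the approximation against the exact RBF kernel on a few points.
X = np.random.default_rng(1).normal(size=(5, 3))
Z = rff_features(X, D=5000)
K_approx = Z @ Z.T
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
err = np.abs(K_approx - K_exact).max()
```

A greedy, data-dependent selection (as in the CG-based scheme above) would instead score candidate columns of `Z` against the current residual and keep only the best ones.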
|
{
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_0",
"@cite_2",
"@cite_11"
],
"mid": [
"2134778014",
"2144902422",
"2107791152",
"2143386126",
"2109235804"
],
"abstract": [
"We show that a class of nonlinear kernel SVMs admits approximate classifiers with runtime and memory complexity that is independent of the number of support vectors. This class of kernels, which we refer to as additive kernels, includes widely used kernels for histogram-based image comparison like intersection and chi-squared kernels. Additive kernel SVMs can offer significant improvements in accuracy over linear SVMs on a wide variety of tasks while having the same runtime, making them practical for large-scale recognition or real-time detection tasks. We present experiments on a variety of datasets, including the INRIA person, Daimler-Chrysler pedestrians, UIUC Cars, Caltech-101, MNIST, and USPS digits, to demonstrate the effectiveness of our method for efficient evaluation of SVMs with additive kernels. Since its introduction, our method has become integral to various state-of-the-art systems for PASCAL VOC object detection image classification, ImageNet Challenge, TRECVID, etc. The techniques we propose can also be applied to settings where evaluation of weighted additive kernels is required, which include kernelized versions of PCA, LDA, regression, k-means, as well as speeding up the inner loop of SVM classifier training algorithms.",
"To accelerate the training of kernel machines, we propose to map the input data to a randomized low-dimensional feature space and then apply existing fast linear methods. The features are designed so that the inner products of the transformed data are approximately equal to those in the feature space of a user specified shift-invariant kernel. We explore two sets of random features, provide convergence bounds on their ability to approximate various radial basis kernels, and show that in large-scale classification and regression tasks linear machine learning algorithms applied to these features outperform state-of-the-art large-scale kernel machines.",
"Both random Fourier features and the Nystrom method have been successfully applied to efficient kernel learning. In this work, we investigate the fundamental difference between these two approaches, and how the difference could affect their generalization performances. Unlike approaches based on random Fourier features where the basis functions (i.e., cosine and sine functions) are sampled from a distribution independent from the training data, basis functions used by the Nystrom method are randomly sampled from the training examples and are therefore data dependent. By exploring this difference, we show that when there is a large gap in the eigen-spectrum of the kernel matrix, approaches based on the Nystrom method can yield impressively better generalization error bound than random Fourier features based approach. We empirically verify our theoretical findings on a wide range of large data sets.",
"We show via an equivalence of mathematical programs that a support vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm: one-class leveraging, starting from the one-class support vector machine (1-SVM). This is a first step toward unsupervised learning in a boosting framework. Building on so-called barrier methods known from the theory of constrained optimization, it returns a function, written as a convex combination of base hypotheses, that characterizes whether a given test point is likely to have been generated from the distribution underlying the training data. Simulations on one-class classification problems demonstrate the usefulness of our approach.",
"Large scale nonlinear support vector machines (SVMs) can be approximated by linear ones using a suitable feature map. The linear SVMs are in general much faster to learn and evaluate (test) than the original nonlinear SVMs. This work introduces explicit feature maps for the additive class of kernels, such as the intersection, Hellinger's, and χ2 kernels, commonly used in computer vision, and enables their use in large scale problems. In particular, we: 1) provide explicit feature maps for all additive homogeneous kernels along with closed form expression for all common kernels; 2) derive corresponding approximate finite-dimensional feature maps based on a spectral analysis; and 3) quantify the error of the approximation, showing that the error is independent of the data dimension and decays exponentially fast with the approximation order for selected kernels such as χ2. We demonstrate that the approximations have indistinguishable performance from the full kernels yet greatly reduce the train test times of SVMs. We also compare with two other approximation methods: Nystrom's approximation of [1], which is data dependent, and the explicit map of Maji and Berg [2] for the intersection kernel, which, as in the case of our approximations, is data independent. The approximations are evaluated on a number of standard data sets, including Caltech-101 [3], Daimler-Chrysler pedestrians [4], and INRIA pedestrians [5]."
]
}
|
1401.0887
|
2160660350
|
In sparse signal representation, the choice of a dictionary often involves a tradeoff between two desirable properties – the ability to adapt to specific signal data and a fast implementation of the dictionary. To sparsely represent signals residing on weighted graphs, an additional design challenge is to incorporate the intrinsic geometric structure of the irregular data domain into the atoms of the dictionary. In this work, we propose a parametric dictionary learning algorithm to design data-adapted, structured dictionaries that sparsely represent graph signals. In particular, we model graph signals as combinations of overlapping local patterns. We impose the constraint that each dictionary is a concatenation of subdictionaries, with each subdictionary being a polynomial of the graph Laplacian matrix, representing a single pattern translated to different areas of the graph. The learning algorithm adapts the patterns to a training set of graph signals. Experimental results on both synthetic and real datasets demonstrate that the dictionaries learned by the proposed algorithm are competitive with and often better than unstructured dictionaries learned by state-of-the-art numerical learning algorithms in terms of sparse approximation of graph signals. In contrast to the unstructured dictionaries, however, the dictionaries learned by the proposed algorithm feature localized atoms and can be implemented in a computationally efficient manner in signal processing tasks such as compression, denoising, and classification.
|
The design of overcomplete dictionaries to sparsely represent signals has been extensively investigated in the past few years. We restrict our focus here to the literature related to the problem of designing dictionaries for graph signals. Generic numerical approaches such as K-SVD @cite_3 and MOD @cite_32 can certainly be applied to graph signals, with signals viewed as vectors in @math . However, the learned dictionaries will neither feature a fast implementation, nor explicitly incorporate the underlying graph structure.
|
{
"cite_N": [
"@cite_32",
"@cite_3"
],
"mid": [
"2115429828",
"2160547390"
],
"abstract": [
"A frame design technique for use with vector selection algorithms, for example matching pursuits (MP), is presented. The design algorithm is iterative and requires a training set of signal vectors. The algorithm, called method of optimal directions (MOD), is an improvement of the algorithm presented by Engan, Aase and Husoy (see Proc. ICASSP '98, Seattle, USA, p. 1817-20, 1998). The MOD is applied to speech and electrocardiogram (ECG) signals, and the designed frames are tested on signals outside the training sets. Experiments demonstrate that the approximation capabilities, in terms of mean squared error (MSE), of the optimized frames are significantly better than those obtained using frames designed by the algorithm of Engan et al. Experiments show typical reduction in MSE by 20-50%.",
"In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method-the K-SVD algorithm-generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data"
]
}
|
1401.0887
|
2160660350
|
In sparse signal representation, the choice of a dictionary often involves a tradeoff between two desirable properties – the ability to adapt to specific signal data and a fast implementation of the dictionary. To sparsely represent signals residing on weighted graphs, an additional design challenge is to incorporate the intrinsic geometric structure of the irregular data domain into the atoms of the dictionary. In this work, we propose a parametric dictionary learning algorithm to design data-adapted, structured dictionaries that sparsely represent graph signals. In particular, we model graph signals as combinations of overlapping local patterns. We impose the constraint that each dictionary is a concatenation of subdictionaries, with each subdictionary being a polynomial of the graph Laplacian matrix, representing a single pattern translated to different areas of the graph. The learning algorithm adapts the patterns to a training set of graph signals. Experimental results on both synthetic and real datasets demonstrate that the dictionaries learned by the proposed algorithm are competitive with and often better than unstructured dictionaries learned by state-of-the-art numerical learning algorithms in terms of sparse approximation of graph signals. In contrast to the unstructured dictionaries, however, the dictionaries learned by the proposed algorithm feature localized atoms and can be implemented in a computationally efficient manner in signal processing tasks such as compression, denoising, and classification.
|
Meanwhile, several transform-based dictionaries for graph signals have recently been proposed (see @cite_18 for an overview and complete list of references). For example, the graph Fourier transform has been shown to sparsely represent smooth graph signals @cite_7 ; wavelet transforms such as diffusion wavelets @cite_11 , spectral graph wavelets @cite_9 , and critically sampled two-channel wavelet filter banks @cite_4 target piecewise-smooth graph signals; and vertex-frequency frames @cite_5 @cite_33 @cite_19 can be used to analyze signal content at specific vertex and frequency locations. These dictionaries feature pre-defined structures derived from the graph and some of them can be efficiently implemented; however, they generally are not adapted to the signals at hand. Two exceptions are the diffusion wavelet packets of @cite_25 and the wavelets on graphs via deep learning @cite_8 , which feature extra adaptivity.
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_19",
"@cite_5",
"@cite_25",
"@cite_11"
],
"mid": [
"2101491865",
"2004559848",
"2016423476",
"2103351706",
"2121244641",
"2158787690",
"2117569556",
"1988856158",
"2107671127",
""
],
"abstract": [
"In applications such as social, energy, transportation, sensor, and neuronal networks, high-dimensional data naturally reside on the vertices of weighted graphs. The emerging field of signal processing on graphs merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process such signals on graphs. In this tutorial overview, we outline the main challenges of the area, discuss different ways to define graph spectral domains, which are the analogs to the classical frequency domain, and highlight the importance of incorporating the irregular structures of graph data domains when processing signals on graphs. We then review methods to generalize fundamental operations such as filtering, translation, modulation, dilation, and downsampling to the graph setting and survey the localized, multiscale transforms that have been proposed to efficiently extract information from high-dimensional data on graphs. We conclude with a brief discussion of open issues and possible extensions.",
"In this work, we propose the construction of two-channel wavelet filter banks for analyzing functions defined on the vertices of any arbitrary finite weighted undirected graph. These graph based functions are referred to as graph-signals as we build a framework in which many concepts from the classical signal processing domain, such as Fourier decomposition, signal filtering and downsampling can be extended to graph domain. Especially, we observe a spectral folding phenomenon in bipartite graphs which occurs during downsampling of these graphs and produces aliasing in graph signals. This property of bipartite graphs, allows us to design critically sampled two-channel filter banks, and we propose quadrature mirror filters (referred to as graph-QMF) for bipartite graph which cancel aliasing and lead to perfect reconstruction. For arbitrary graphs we present a bipartite subgraph decomposition which produces an edge-disjoint collection of bipartite subgraphs. Graph-QMFs are then constructed on each bipartite subgraph leading to “multi-dimensional” separable wavelet filter banks on graphs. Our proposed filter banks are critically sampled and we state necessary and sufficient conditions for orthogonality, aliasing cancellation and perfect reconstruction. The filter banks are realized by Chebychev polynomial approximations.",
"One of the key challenges in the area of signal processing on graphs is to design dictionaries and transform methods to identify and exploit structure in signals on weighted graphs. To do so, we need to account for the intrinsic geometric structure of the underlying graph data domain. In this paper, we generalize one of the most important signal processing tools - windowed Fourier analysis - to the graph setting. Our approach is to first define",
"In this paper, we introduce the concept of smoothness for signals supported on the vertices of a graph. We provide theoretical explanations when and why the Laplacian eigenbasis can be regarded as a meaningful “Fourier” transform of such signals. Moreover, we analyze the desired properties of the underlying graphs for better compressibility of the signals. We verify our theoretical work by experiments on real world data.",
"An increasing number of applications require processing of signals defined on weighted graphs. While wavelets provide a flexible tool for signal processing in the classical setting of regular domains, the existing graph wavelet constructions are less flexible - they are guided solely by the structure of the underlying graph and do not take directly into consideration the particular class of signals to be processed. This paper introduces a machine learning framework for constructing graph wavelets that can sparsely represent a given class of signals. Our construction uses the lifting scheme, and is based on the observation that the recurrent nature of the lifting scheme gives rise to a structure resembling a deep auto-encoder network. Particular properties that the resulting wavelets must satisfy determine the training objective and the structure of the involved neural networks. The training is unsupervised, and is conducted similarly to the greedy pre-training of a stack of auto-encoders. After training is completed, we obtain a linear wavelet transform that can be applied to any graph signal in time and memory linear in the size of the graph. Improved sparsity of our wavelet transform for the test signals is confirmed via experiments both on synthetic and real data.",
"We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian L. Given a wavelet generating kernel g and a scale parameter t, we define the scaled wavelet operator Ttg = g(tL). The spectral graph wavelets are then formed by localizing this operator by applying it to an indicator function. Subject to an admissibility condition on g, this procedure defines an invertible transform. We explore the localization properties of the wavelets in the limit of fine scales. Additionally, we present a fast Chebyshev polynomial approximation algorithm for computing the transform that avoids the need for diagonalizing L. We highlight potential applications of the transform through examples of wavelets on graphs corresponding to a variety of different problem domains.",
"We consider the problem of designing spectral graph filters for the construction of dictionaries of atoms that can be used to efficiently represent signals residing on weighted graphs. While the filters used in previous spectral graph wavelet constructions are only adapted to the length of the spectrum, the filters proposed in this paper are adapted to the distribution of graph Laplacian eigenvalues, and therefore lead to atoms with better discriminatory power. Our approach is to first characterize a family of systems of uniformly translated kernels in the graph spectral domain that give rise to tight frames of atoms generated via generalized translation on the graph. We then warp the uniform translates with a function that approximates the cumulative spectral density function of the graph Laplacian eigenvalues. We use this approach to construct computationally efficient, spectrum-adapted, tight vertex-frequency and graph wavelet frames. We give numerous examples of the resulting spectrum-adapted graph filters, and also present an illustrative example of vertex-frequency analysis using the proposed construction.",
"The prevalence of signals on weighted graphs is increasing; however, because of the irregular structure of weighted graphs, classical signal processing techniques cannot be directly applied to signals on graphs. In this paper, we define generalized translation and modulation operators for signals on graphs, and use these operators to adapt the classical windowed Fourier transform to the graph setting, enabling vertex-frequency analysis. When we apply this transform to a signal with frequency components that vary along a path graph, the resulting spectrogram matches our intuition from classical discrete-time signal processing. Yet, our construction is fully generalized and can be applied to analyze signals on any undirected, connected, weighted graph.",
"Abstract Diffusion wavelets can be constructed on manifolds, graphs and allow an efficient multiscale representation of powers of the diffusion operator that generates them. In many applications it is necessary to have time–frequency bases that are more versatile than wavelets, for example for the analysis, denoising and compression of a signal. In the Euclidean setting, wavelet packets have been very successful in many applications, ranging from image denoising, 2- and 3-dimensional compression of data (e.g., images, seismic data, hyper-spectral data) and in discrimination tasks as well. Till now these tools for signal processing have been available mainly in Euclidean settings and in low dimensions. Building upon the recent construction of diffusion wavelets, we show how to construct diffusion wavelet packets, generalizing the classical construction of wavelet packets, and allowing the same algorithms existing in the Euclidean setting to be lifted to rather general geometric and anisotropic settings, in higher dimension, on manifolds, graphs and even more general spaces. We show that efficient algorithms exists for computations of diffusion wavelet packets and discuss some applications and examples.",
""
]
}
|
1401.0887
|
2160660350
|
In sparse signal representation, the choice of a dictionary often involves a tradeoff between two desirable properties – the ability to adapt to specific signal data and a fast implementation of the dictionary. To sparsely represent signals residing on weighted graphs, an additional design challenge is to incorporate the intrinsic geometric structure of the irregular data domain into the atoms of the dictionary. In this work, we propose a parametric dictionary learning algorithm to design data-adapted, structured dictionaries that sparsely represent graph signals. In particular, we model graph signals as combinations of overlapping local patterns. We impose the constraint that each dictionary is a concatenation of subdictionaries, with each subdictionary being a polynomial of the graph Laplacian matrix, representing a single pattern translated to different areas of the graph. The learning algorithm adapts the patterns to a training set of graph signals. Experimental results on both synthetic and real datasets demonstrate that the dictionaries learned by the proposed algorithm are competitive with and often better than unstructured dictionaries learned by state-of-the-art numerical learning algorithms in terms of sparse approximation of graph signals. In contrast to the unstructured dictionaries, however, the dictionaries learned by the proposed algorithm feature localized atoms and can be implemented in a computationally efficient manner in signal processing tasks such as compression, denoising, and classification.
|
The recent work in @cite_12 tries to bridge the gap between the graph-based transform methods and the purely numerical dictionary learning algorithms by proposing an algorithm to learn structured graph dictionaries. The learned dictionaries have a structure that is derived from the graph topology, while their parameters are learned from the data. This work is the closest to ours in the sense that both graph dictionaries consist of subdictionaries that are based on the graph Laplacian. However, it does not necessarily lead to efficient implementations, as the obtained dictionary is not necessarily a smooth matrix function of the graph Laplacian matrix (see, e.g., @cite_34 for more on matrix functions).
|
{
"cite_N": [
"@cite_34",
"@cite_12"
],
"mid": [
"1718360928",
"2124543570"
],
"abstract": [
"Matrix functions are used in many areas of linear algebra and arise in numerous applications in science and engineering. The most common matrix function is the matrix inverse; it is not treated specifically in this chapter, but is covered in Section 1.5 and Section 51.3. This chapter is concerned with general matrix functions as well as specific cases such as matrix square roots, trigonometric functions, and the exponential and logarithmic functions.",
"We propose a method for learning dictionaries towards sparse approximation of signals defined on vertices of arbitrary graphs. Dictionaries are expected to describe effectively the main spatial and spectral components of the signals of interest, so that their structure is dependent on the graph information and its spectral representation. We first show how operators can be defined for capturing different spectral components of signals on graphs. We then propose a dictionary learning algorithm built on a sparse approximation step and a dictionary update function, which iteratively leads to adapting the structured dictionary to the class of target signals. Experimental results on synthetic and natural signals on graphs demonstrate the efficiency of the proposed algorithm both in terms of sparse approximation and support recovery performance."
]
}
|
1401.0887
|
2160660350
|
In sparse signal representation, the choice of a dictionary often involves a tradeoff between two desirable properties – the ability to adapt to specific signal data and a fast implementation of the dictionary. To sparsely represent signals residing on weighted graphs, an additional design challenge is to incorporate the intrinsic geometric structure of the irregular data domain into the atoms of the dictionary. In this work, we propose a parametric dictionary learning algorithm to design data-adapted, structured dictionaries that sparsely represent graph signals. In particular, we model graph signals as combinations of overlapping local patterns. We impose the constraint that each dictionary is a concatenation of subdictionaries, with each subdictionary being a polynomial of the graph Laplacian matrix, representing a single pattern translated to different areas of the graph. The learning algorithm adapts the patterns to a training set of graph signals. Experimental results on both synthetic and real datasets demonstrate that the dictionaries learned by the proposed algorithm are competitive with and often better than unstructured dictionaries learned by state-of-the-art numerical learning algorithms in terms of sparse approximation of graph signals. In contrast to the unstructured dictionaries, however, the dictionaries learned by the proposed algorithm feature localized atoms and can be implemented in a computationally efficient manner in signal processing tasks such as compression, denoising, and classification.
|
Finally, we remark that the graph structure is taken into consideration in @cite_21 , not explicitly in the dictionary but rather in the sparse coding coefficients. The authors use the graph Laplacian operator as a regularizer in order to impose that the obtained sparse coding coefficients vary smoothly along the geodesics of the manifold that is captured by the graph. However, the obtained dictionary does not have any particular structure. None of the previous works are able to design dictionaries that provide sparse representations particularly adapted to a given class of graph signals while also having efficient implementations. This is exactly the objective of our work, where a structured graph signal dictionary is composed of multiple polynomial matrix functions of the graph Laplacian.
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2140245639"
],
"abstract": [
"Sparse coding has received an increasing amount of interest in recent years. It is an unsupervised learning algorithm, which finds a basis set capturing high-level semantics in the data and learns sparse coordinates in terms of the basis set. Originally applied to modeling the human visual cortex, sparse coding has been shown useful for many applications. However, most of the existing approaches to sparse coding fail to consider the geometrical structure of the data space. In many real applications, the data is more likely to reside on a low-dimensional submanifold embedded in the high-dimensional ambient space. It has been shown that the geometrical information of the data is important for discrimination. In this paper, we propose a graph based algorithm, called graph regularized sparse coding, to learn the sparse representations that explicitly take into account the local manifold structure of the data. By using graph Laplacian as a smooth operator, the obtained sparse representations vary smoothly along the geodesics of the data manifold. The extensive experimental results on image classification and clustering have demonstrated the effectiveness of our proposed algorithm."
]
}
|
1401.0561
|
2952848219
|
This paper studies the security and memorability of free-form multitouch gestures for mobile authentication. Towards this end, we collected a dataset with a generate-test-retest paradigm where participants (N=63) generated free-form gestures, repeated them, and were later retested for memory. Half of the participants decided to generate one-finger gestures, and the other half generated multi-finger gestures. Although there has been recent work on template-based gestures, there are yet no metrics to analyze security of either template or free-form gestures. For example, entropy-based metrics used for text-based passwords are not suitable for capturing the security and memorability of free-form gestures. Hence, we modify a recently proposed metric for analyzing information capacity of continuous full-body movements for this purpose. Our metric computed estimated mutual information in repeated sets of gestures. Surprisingly, one-finger gestures had higher average mutual information. Gestures with many hard angles and turns had the highest mutual information. The best-remembered gestures included signatures and simple angular shapes. We also implemented a multitouch recognizer to evaluate the practicality of free-form gestures in a real authentication system and how they perform against shoulder surfing attacks. We conclude the paper with strategies for generating secure and memorable free-form gestures, which present a robust method for mobile authentication.
|
3D gesture recognition has most recently been performed using camera-based systems (e.g., Kinect) @cite_7 or using wireless signals @cite_11 . With the camera-based systems, a user traces a gesture out in space; the captured motion is compressed into a two-dimensional image and processed for recognition @cite_7 . @cite_11 have shown that three-dimensional gestures can be recognized by measuring the Doppler shifts between transmitted and received Wi-Fi signals.
|
{
"cite_N": [
"@cite_7",
"@cite_11"
],
"mid": [
"2402746099",
"2091196730"
],
"abstract": [
"Password-based authentication is easy to use but its security is bounded by how much a user can remember. Biometrics-based authentication requires no memorization but ‘resetting’ a biometric password may not always be possible. In this paper, we propose a user-friendly authentication system (KinWrite) that allows users to choose arbitrary, short and easy-to-memorize passwords while providing resilience to password cracking and password theft. KinWrite lets users write their passwords in 3D space and captures the handwriting motion using a low cost motion input sensing device—Kinect. The low resolution and noisy data captured by Kinect, combined with low consistency of in-space handwriting, have made it challenging to verify users. To overcome these challenges, we exploit the Dynamic Time Warping (DTW) algorithm to quantify similarities between handwritten passwords. Our experimental results involving 35 signatures from 18 subjects and a brute-force attacker have shown that KinWrite can achieve a 100% precision and a 70% recall (the worst case) for verifying honest users, encouraging us to carry out a much larger scale study towards designing a foolproof system.",
"This demo presents WiSee, a novel human-computer interaction system that leverages wireless networks (e.g., Wi-Fi), to enable sensing and recognition of human gestures and motion. Since wire- less signals do not require line-of-sight and can traverse through walls, WiSee enables novel human-computer interfaces for remote device control and building automation. Further, it achieves this goal without requiring instrumentation of the human body with sensing devices. We integrate WiSee with applications and demonstrate how WiSee enables users to use gestures and control applications including music players and gaming systems. Specifically, our demo will allow SIGCOMM attendees to control a music player and a lighting control device using gestures."
]
}
|
1401.0561
|
2952848219
|
This paper studies the security and memorability of free-form multitouch gestures for mobile authentication. Towards this end, we collected a dataset with a generate-test-retest paradigm where participants (N=63) generated free-form gestures, repeated them, and were later retested for memory. Half of the participants decided to generate one-finger gestures, and the other half generated multi-finger gestures. Although there has been recent work on template-based gestures, there are yet no metrics to analyze security of either template or free-form gestures. For example, entropy-based metrics used for text-based passwords are not suitable for capturing the security and memorability of free-form gestures. Hence, we modify a recently proposed metric for analyzing information capacity of continuous full-body movements for this purpose. Our metric computed estimated mutual information in repeated sets of gestures. Surprisingly, one-finger gestures had higher average mutual information. Gestures with many hard angles and turns had the highest mutual information. The best-remembered gestures included signatures and simple angular shapes. We also implemented a multitouch recognizer to evaluate the practicality of free-form gestures in a real authentication system and how they perform against shoulder surfing attacks. We conclude the paper with strategies for generating secure and memorable free-form gestures, which present a robust method for mobile authentication.
|
Continuing with security analysis, brute-force attacks on gestures have been examined in several studies @cite_7 @cite_24 @cite_39 . @cite_24 have examined the security of 2D gestures against brute-force attacks (assisted or otherwise) in an authentication system where a user draws a gesture on a picture. A measure of the password space is developed, along with an algorithm by which a gesture in that space can be attacked. The attack is capable of guessing the password based on areas of the screen that a user would be drawn towards. This study does not concern itself with the security of the gesture drawn; instead, it focuses on where a user would target in a picture-based authentication scheme -- it does not address free-form gesture authentication. @cite_13 showed that authentication schemes based on biometric analysis (including one by @cite_28 ) can be cracked using a robot to brute-force the inputs using an algorithm that is supplied with swipe input statistics from the general population.
|
{
"cite_N": [
"@cite_7",
"@cite_28",
"@cite_39",
"@cite_24",
"@cite_13"
],
"mid": [
"2402746099",
"2151854612",
"2126453598",
"2131254288",
"1964160137"
],
"abstract": [
"Password-based authentication is easy to use but its security is bounded by how much a user can remember. Biometrics-based authentication requires no memorization but ‘resetting’ a biometric password may not always be possible. In this paper, we propose a user-friendly authentication system (KinWrite) that allows users to choose arbitrary, short and easy-to-memorize passwords while providing resilience to password cracking and password theft. KinWrite lets users write their passwords in 3D space and captures the handwriting motion using a low cost motion input sensing device—Kinect. The low resolution and noisy data captured by Kinect, combined with low consistency of in-space handwriting, have made it challenging to verify users. To overcome these challenges, we exploit the Dynamic Time Warping (DTW) algorithm to quantify similarities between handwritten passwords. Our experimental results involving 35 signatures from 18 subjects and a brute-force attacker have shown that KinWrite can achieve a 100% precision and a 70% recall (the worst case) for verifying honest users, encouraging us to carry out a much larger scale study towards designing a foolproof system.",
"We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smart phone. We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test how this behavioral pattern exhibits consistency over time, we collected touch data from users interacting with a smart phone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touch screen. The classifier achieves a median equal error rate of 0% for intrasession authentication, 2%-3% for intersession authentication, and below 4% when the authentication test was carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as a part of a multimodal biometric authentication system.",
"Starting around 1999, a great many graphical password schemes have been proposed as alternatives to text-based password authentication. We provide a comprehensive overview of published research in the area, covering both usability and security aspects as well as system evaluation. The article first catalogues existing approaches, highlighting novel features of selected schemes and identifying key usability or security advantages. We then review usability requirements for knowledge-based authentication as they apply to graphical passwords, identify security threats that such systems must address and review known attacks, discuss methodological issues related to empirical evaluation, and identify areas for further research and improved methodology.",
"Computing devices with touch-screens have experienced unprecedented growth in recent years. Such an evolutionary advance has been facilitated by various applications that are heavily relying on multi-touch gestures. In addition, picture gesture authentication has been recently introduced as an alternative login experience to text-based password on such devices. In particular, the new Microsoft Windows 8™ operating system adopts such an alternative authentication to complement traditional text-based authentication. In this paper, we present an empirical analysis of picture gesture authentication on more than 10,000 picture passwords collected from over 800 subjects through online user studies. Based on the findings of our user studies, we also propose a novel attack framework that is capable of cracking passwords on previously unseen pictures in a picture gesture authentication system. Our approach is based on the concept of selection function that models users' password selection processes. Our evaluation results show the proposed approach could crack a considerable portion of collected picture passwords under different settings.",
"Touch-based verification --- the use of touch gestures (e.g., swiping, zooming, etc.) to authenticate users of touch screen devices --- has recently been widely evaluated for its potential to serve as a second layer of defense to the PIN lock mechanism. In all performance evaluations of touch-based authentication systems however, researchers have assumed naive (zero-effort) forgeries in which the attacker makes no effort to mimic a given gesture pattern. In this paper we demonstrate that a simple \"Lego\" robot driven by input gleaned from general population swiping statistics can generate forgeries that achieve alarmingly high penetration rates against touch-based authentication systems. Using the best classification algorithms in touch-based authentication, we rigorously explore the effect of the attack, finding that it increases the Equal Error Rates of the classifiers by between 339% and 1004% depending on parameters such as the failure-to-enroll threshold and the type of touch stroke generated by the robot. The paper calls into question the zero-effort impostor testing approach used to benchmark the performance of touch-based authentication systems."
]
}
|
1401.0561
|
2952848219
|
This paper studies the security and memorability of free-form multitouch gestures for mobile authentication. Towards this end, we collected a dataset with a generate-test-retest paradigm where participants (N=63) generated free-form gestures, repeated them, and were later retested for memory. Half of the participants decided to generate one-finger gestures, and the other half generated multi-finger gestures. Although there has been recent work on template-based gestures, there are yet no metrics to analyze security of either template or free-form gestures. For example, entropy-based metrics used for text-based passwords are not suitable for capturing the security and memorability of free-form gestures. Hence, we modify a recently proposed metric for analyzing information capacity of continuous full-body movements for this purpose. Our metric computed estimated mutual information in repeated sets of gestures. Surprisingly, one-finger gestures had higher average mutual information. Gestures with many hard angles and turns had the highest mutual information. The best-remembered gestures included signatures and simple angular shapes. We also implemented a multitouch recognizer to evaluate the practicality of free-form gestures in a real authentication system and how they perform against shoulder surfing attacks. We conclude the paper with strategies for generating secure and memorable free-form gestures, which present a robust method for mobile authentication.
|
Finally, on non-security related work, @cite_2 studied the information capacity of continuous full-body movements. Our metric is motivated by their work; however, they did not study 2D gestures, their security and memorability, or their use in an authentication system. When asked to create gestures for non-security purposes, previous work @cite_17 @cite_26 indicates that people tend to repeat gestures that are seen on a daily basis and are context-dependent (e.g., the gestures people perform depend on whether they are directing someone to perform a task or receiving directions on a task).
|
{
"cite_N": [
"@cite_26",
"@cite_17",
"@cite_2"
],
"mid": [
"",
"2064460790",
"2002557727"
],
"abstract": [
"",
"This paper explores how interaction with systems using touchless gestures can be made intuitive and natural. Analysis of 912 video clips of gesture production from a user study of 16 subjects communicating transitive actions (manipulation of objects with or without external tools) indicated that 1) dynamic pantomimic gestures where imagined tool object is explicitly held are performed more intuitively and easily than gestures where a body part is used to represent the tool object or compared to static hand poses and 2) gesturing while communicating the transitive action as how the user habitually performs the action (pantomimic action) is perceived to be easier and more natural than gesturing while communicating it as an instruction. These findings provide guidelines for the characteristics of gestures and user mental models one must consciously be concerned with when designing and implementing gesture vocabularies of touchless interaction.",
"We present a novel metric for information capacity of full-body movements. It accommodates HCI scenarios involving continuous movement of multiple limbs. Throughput is calculated as mutual information in repeated motor sequences. It is affected by the complexity of movements and the precision with which an actor reproduces them. Computation requires decorrelating co-dependencies of movement features (e.g., wrist and elbow) and temporal alignment of sequences. HCI researchers can use the metric as an analysis tool when designing and studying user interfaces."
]
}
|
1401.0052
|
2950988509
|
Today people increasingly have the opportunity to opt-in to "usage-based" automotive insurance programs for reducing insurance premiums. In these programs, participants install devices in their vehicles that monitor their driving behavior, which raises some privacy concerns. Some devices collect fine-grained speed data to monitor driving habits. Companies that use these devices claim that their approach is privacy-preserving because speedometer measurements do not have physical locations. However, we show that with knowledge of the user's home location, as the insurance companies have, speed data is sufficient to discover driving routes and destinations when trip data is collected over a period of weeks. To demonstrate the real-world applicability of our approach we applied our algorithm, elastic pathing, to data collected over hundreds of driving trips occurring over several months. With this data and our approach, we were able to predict trip destinations to within 250 meters of ground truth in 10% of the traces and within 500 meters in 20% of the traces. This result, combined with the amount of speed data that is being collected by insurance companies, constitutes a substantial breach of privacy because a person's regular driving pattern can be deduced with repeated examples of the same paths with just a few weeks of monitoring.
|
Much work has concentrated on anonymizing or obfuscating location traces @cite_5 @cite_15 @cite_29 @cite_10 @cite_16 , because such traces contain a great deal of behavioral information that people consider private @cite_10 @cite_35 @cite_8 @cite_33 . Krumm @cite_1 has written an overview of computational location privacy techniques, and Zang and Bolot @cite_24 have recently questioned the possibility of releasing privacy-preserving cell phone records while still maintaining research utility in those records. Relatedly, predicting future mobility patterns and paths from human mobility traces is a well-explored topic, including the following results.
|
{
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_8",
"@cite_29",
"@cite_1",
"@cite_24",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_10"
],
"mid": [
"2120374653",
"2099680846",
"2032836483",
"1544471796",
"2170166043",
"2045686369",
"2149921703",
"2056773559",
"2141854027",
"2107179184"
],
"abstract": [
"With new technology, people can share information about everyday places they go; the resulting data helps others find and evaluate places. Recent applications like Dodgeball and Sharescape repurpose everyday place information: users create local place data for personal use, and the systems display it for public use. We explore both the opportunities -- new local knowledge, and concerns -- privacy risks, raised by this implicit information sharing. We conduct two empirical studies: subjects create place data when using PlaceMail, a location-based reminder system, and elect whether to share it on Sharescape, a community map-building system. We contribute by: (1) showing location-based reminders yield new local knowledge about a variety of places, (2) identifying heuristics people use when deciding what place-related information to share (and their prevalence), (3) detailing how these decision heuristics can inform local knowledge sharing system design, and (4) identifying new uses of shared place information, notably opportunistic errand planning.",
"Feedback is viewed as an essential element of ubiquitous computing systems in the HCI literature for helping people manage their privacy. However, the success of online social networks and existing commercial systems for mobile location sharing which do not incorporate feedback would seem to call the importance of feedback into question. We investigated this issue in the context of a mobile location sharing system. Specifically, we report on the findings of a field de-ployment of Locyoution, a mobile location sharing system. In our study of 56 users, one group was given feedback in the form of a history of location requests, and a second group was given no feedback at all. Our major contribution has been to show that feedback is an important contributing factor towards improving user comfort levels and allaying privacy concerns. Participants' privacy concerns were reduced after using the mobile location sharing system. Additionally,our study suggests that peer opinion and technical savviness contribute most to whether or not participants thought they would continue to use a mobile location technology.",
"We report on a study (N=36) of user preferences for balancing awareness with privacy. Participants defined permissions for sharing of location, availability, calendar information and instant messaging (IM) activity within an application called mySpace. MySpace is an interactive visualization of the physical workplace that provides dynamic information about people, places and equipment. We found a significant preference for defining privacy permissions at the group level. While \"family\" received high levels of awareness sharing, interestingly, \"team\" was granted comparable levels during business hours at work. Surprisingly, presenting participants with a detailed list of all pieces of personal context to which the system had access, did not result in more conservative privacy settings. Although location was the most sensitive aspect of awareness, participants were comfortable disclosing room-level location information to their team members at work. Our findings suggest utilizing grouping mechanisms to balance privacy control with configuration burden, and argue for increased system transparency to build trust.",
"Simulated, false location reports can be an effective way to confuse a privacy attacker. When a mobile user must transmit his or her location to a central server, these location reports can be accompanied by false reports that, ideally, cannot be distinguished from the true one. The realism of the false reports is important, because otherwise an attacker could filter out all but the real data. Using our database of GPS tracks from over 250 volunteer drivers, we developed probabilistic models of driving behavior and applied the models to create realistic driving trips. The simulations model realistic start and end points, slightly non-optimal routes, realistic driving speeds, and spatially varying GPS noise.",
"This is a literature survey of computational location privacy, meaning computation-based privacy mechanisms that treat location data as geometric information. This definition includes privacy-preserving algorithms like anonymity and obfuscation as well as privacy-breaking algorithms that exploit the geometric nature of the data. The survey omits non-computational techniques like manually inspecting geotagged photos, and it omits techniques like encryption or access control that treat location data as general symbols. The paper reviews studies of peoples' attitudes about location privacy, computational threats on leaked location data, and computational countermeasures for mitigating these threats.",
"We examine a very large-scale data set of more than 30 billion call records made by 25 million cell phone users across all 50 states of the US and attempt to determine to what extent anonymized location data can reveal private user information. Our approach is to infer, from the call records, the \"top N\" locations for each user and correlate this information with publicly-available side information such as census data. For example, the measured \"top 2\" locations likely correspond to home and work locations, the \"top 3\" to home, work, and shopping school commute path locations. We consider the cases where those \"top N\" locations are measured with different levels of granularity, ranging from a cell sector to whole cell, zip code, city, county and state. We then compute the anonymity set, namely the number of users uniquely identified by a given set of \"top N\" locations at different granularity levels. We find that the \"top 1\" location does not typically yield small anonymity sets. However, the top 2 and top 3 locations do, certainly at the sector or cell-level granularity. We consider a variety of different factors that might impact the size of the anonymity set, for example the distance between the \"top N\" locations or the geographic environment (rural vs urban). We also examine to what extent specific side information, in particular the size of the user's social network, decrease the anonymity set and therefore increase risks to privacy. Our study shows that sharing anonymized location data will likely lead to privacy risks and that, at a minimum, the data needs to be coarse in either the time domain (meaning the data is collected over short periods of time, in which case inferring the top N locations reliably is difficult) or the space domain (meaning the data granularity is strictly higher than the cell level). In both cases, the utility of the anonymized location data will be decreased, potentially by a significant amount.",
"A significant and growing class of location-based mobile applications aggregate position data from individual devices at a server and compute aggregate statistics over these position streams. Because these devices can be linked to the movement of individuals, there is significant danger that the aggregate computation will violate the location privacy of individuals. This paper develops and evaluates PrivStats, a system for computing aggregate statistics over location data that simultaneously achieves two properties: first, provable guarantees on location privacy even in the face of any side information about users known to the server, and second, privacy-preserving accountability (i.e., protection against abusive clients uploading large amounts of spurious data). PrivStats achieves these properties using a new protocol for uploading and aggregating data anonymously as well as an efficient zero-knowledge proof of knowledge protocol we developed from scratch for accountability. We implemented our system on Nexus One smartphones and commodity servers. Our experimental results demonstrate that PrivStats is a practical system: computing a common aggregate (e.g., count) over the data of 10,000 clients takes less than 0.46 s at the server and the protocol has modest latency (0.6 s) to upload data from a Nexus phone. We also validated our protocols on real driver traces from the CarTel project.",
"Advances in sensing and tracking technology enable location-based applications but they also create significant privacy risks. Anonymity can provide a high degree of privacy, save service users from dealing with service providers’ privacy policies, and reduce the service providers’ requirements for safeguarding private information. However, guaranteeing anonymous usage of location-based services requires that the precise location information transmitted by a user cannot be easily used to re-identify the subject. This paper presents a middleware architecture and algorithms that can be used by a centralized location broker service. The adaptive algorithms adjust the resolution of location information along spatial or temporal dimensions to meet specified anonymity constraints based on the entities who may be using location services within a given area. Using a model based on automotive traffic counts and cartographic material, we estimate the realistically expected spatial resolution for different anonymity constraints. The median resolution generated by our algorithms is 125 meters. Thus, anonymous location-based requests for urban areas would have the same accuracy currently needed for E-911 services; this would provide sufficient resolution for wayfinding, automated bus routing services and similar location-dependent services.",
"Although the privacy threats and countermeasures associated with location data are well known, there has not been a thorough experiment to assess the effectiveness of either. We examine location data gathered from volunteer subjects to quantify how well four different algorithms can identify the subjects' home locations and then their identities using a freely available, programmable Web search engine. Our procedure can identify at least a small fraction of the subjects and a larger fraction of their home addresses. We then apply three different obscuration countermeasures designed to foil the privacy attacks: spatial cloaking, noise, and rounding. We show how much obscuration is necessary to maintain the privacy of all the subjects.",
"Long-term personal GPS data is useful for many UbiComp services such as traffic monitoring and environmental impact assessment. However, inference attacks on such traces can reveal private information including home addresses and schedules. We asked 32 participants from 12 households to collect 2 months of GPS data, and showed it to them in visualizations. We explored if they understood how their individual privacy concerns mapped onto 5 location obfuscation schemes (which they largely did), which obfuscation schemes they were most comfortable with (Mixing, Deleting data near home, and Randomizing), how they monetarily valued their location data, and if they consented to share their data publicly. 21/32 gave consent to publish their data, though most households' members shared at different levels, which indicates a lack of awareness of privacy interrelationships. Grounded in real decisions about real data, our findings highlight the potential for end-user involvement in obfuscation of their own location data."
]
}
|
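The top-N anonymity-set computation described in the abstract above can be sketched in a few lines. This is a minimal illustration with hypothetical data and function names, not code from the cited paper: each user is represented by a tuple of coarsened top-N locations, and the anonymity set of a user is the number of users sharing exactly that tuple.

```python
from collections import Counter

def anonymity_sets(top_n_locations):
    """Given each user's tuple of top-N locations (coarsened to some
    granularity, e.g. cell sector or zip code), return the size of the
    anonymity set each user falls into: the number of users whose top-N
    tuple is identical."""
    counts = Counter(tuple(locs) for locs in top_n_locations)
    return [counts[tuple(locs)] for locs in top_n_locations]

# Toy example: three users' (home, work) cell IDs.
users = [("cellA", "cellB"), ("cellA", "cellB"), ("cellC", "cellD")]
print(anonymity_sets(users))  # [2, 2, 1] -- the third user is uniquely identified
```

Coarsening the location granularity (cell → zip code → county) merges tuples and grows the anonymity sets, which is exactly the privacy/utility trade-off the abstract measures.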
1401.0052
|
2950988509
|
Today people increasingly have the opportunity to opt-in to "usage-based" automotive insurance programs for reducing insurance premiums. In these programs, participants install devices in their vehicles that monitor their driving behavior, which raises some privacy concerns. Some devices collect fine-grained speed data to monitor driving habits. Companies that use these devices claim that their approach is privacy-preserving because speedometer measurements do not have physical locations. However, we show that with knowledge of the user's home location, as the insurance companies have, speed data is sufficient to discover driving routes and destinations when trip data is collected over a period of weeks. To demonstrate the real-world applicability of our approach we applied our algorithm, elastic pathing, to data collected over hundreds of driving trips occurring over several months. With this data and our approach, we were able to predict trip destinations to within 250 meters of ground truth in 10% of the traces and within 500 meters in 20% of the traces. This result, combined with the amount of speed data that is being collected by insurance companies, constitutes a substantial breach of privacy because a person's regular driving pattern can be deduced with repeated examples of the same paths with just a few weeks of monitoring.
|
Based on analysis of location data, such as mobile phone cell tower locations, researchers have shown 1) the nature of individual mobility patterns is bounded, and that humans visit only a few locations most of the time (e.g., just two @cite_26 @cite_49 @cite_27 ); 2) that there is a high level of predictability for future and current locations (e.g. @cite_11 @cite_6 -- most trivially, the most visited location coincides with the user's actual location 70% of the time @cite_11 ); and 3) that mobility patterns can also be used to predict social network ties (e.g., @cite_44 ). In a more applied domain, GPS traces have been used to learn transportation modes (e.g. walking, in a bus, or driving) @cite_34 , to predict turns, routes, and destinations @cite_40 , to predict family routines (e.g. picking up or dropping off children at activities) @cite_42 , to build probabilistic models describing when people are home or away @cite_46 , and to recommend friends and locations @cite_30 . However, none of the above work can be used to discover locations based solely on speedometer measurements and a user's starting location.
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_42",
"@cite_6",
"@cite_44",
"@cite_27",
"@cite_40",
"@cite_49",
"@cite_46",
"@cite_34",
"@cite_11"
],
"mid": [
"1982397092",
"1982300822",
"",
"",
"2101108259",
"1536564267",
"2115450697",
"",
"1584701184",
"2022749020",
"2171634212"
],
"abstract": [
"The increasing availability of location-acquisition technologies (GPS, GSM networks, etc.) enables people to log the location histories with spatio-temporal data. Such real-world location histories imply, to some extent, users' interests in places, and bring us opportunities to understand the correlation between users and locations. In this article, we move towards this direction and report on a personalized friend and location recommender for the geographical information systems (GIS) on the Web. First, in this recommender system, a particular individual's visits to a geospatial region in the real world are used as their implicit ratings on that region. Second, we measure the similarity between users in terms of their location histories and recommend to each user a group of potential friends in a GIS community. Third, we estimate an individual's interests in a set of unvisited regions by involving his her location history and those of other users. Some unvisited locations that might match their tastes can be recommended to the individual. A framework, referred to as a hierarchical-graph-based similarity measurement (HGSM), is proposed to uniformly model each individual's location history, and effectively measure the similarity among users. In this framework, we take into account three factors: 1) the sequence property of people's outdoor movements, 2) the visited popularity of a geospatial region, and 3) the hierarchical property of geographic spaces. Further, we incorporated a content-based method into a user-based collaborative filtering algorithm, which uses HGSM as the user similarity measure, to estimate the rating of a user on an item. We evaluated this recommender system based on the GPS data collected by 75 subjects over a period of 1 year in the real world. As a result, HGSM outperforms related similarity measures, namely similarity-by-count, cosine similarity, and Pearson similarity measures. 
Moreover, beyond the item-based CF method and random recommendations, our system provides users with more attractive locations and better user experiences of recommendation.",
"This study used a sample of 100,000 mobile phone users whose trajectory was tracked for six months to study human mobility patterns. Displacements across all users suggest behaviour close to the Levy-flight-like pattern observed previously based on the motion of marked dollar bills, but with a cutoff in the distribution. The origin of the Levy patterns observed in the aggregate data appears to be population heterogeneity and not Levy patterns at the level of the individual.",
"",
"",
"Our understanding of how individual mobility patterns shape and impact the social network is limited, but is essential for a deeper understanding of network dynamics and evolution. This question is largely unexplored, partly due to the difficulty in obtaining large-scale society-wide data that simultaneously capture the dynamical information on individual movements and social interactions. Here we address this challenge for the first time by tracking the trajectories and communication records of 6 Million mobile phone users. We find that the similarity between two individuals' movements strongly correlates with their proximity in the social network. We further investigate how the predictive power hidden in such correlations can be exploited to address a challenging problem: which new links will develop in a social network. We show that mobility measures alone yield surprising predictive power, comparable to traditional network-based measures. Furthermore, the prediction accuracy can be significantly improved by learning a supervised classifier based on combined mobility and network measures. We believe our findings on the interplay of mobility patterns and social ties offer new perspectives on not only link prediction but also network dynamics.",
"Many applications benefit from user location data, but location data raises privacy concerns. Anonymization can protect privacy, but identities can sometimes be inferred from supposedly anonymous data. This paper studies a new attack on the anonymity of location data. We show that if the approximate locations of an individual's home and workplace can both be deduced from a location trace, then the median size of the individual's anonymity set in the U.S. working population is 1, 21 and 34,980, for locations known at the granularity of a census block, census track and county respectively. The location data of people who live and work in different regions can be re-identified even more easily. Our results show that the threat of re-identification for location data is much greater when the individual's home and work locations can both be deduced from the data. To preserve anonymity, we offer guidance for obfuscating location traces before they are disclosed.",
"We present PROCAB, an efficient method for Probabilistically Reasoning from Observed Context-Aware Behavior. It models the context-dependent utilities and underlying reasons that people take different actions. The model generalizes to unseen situations and scales to incorporate rich contextual information. We train our model using the route preferences of 25 taxi drivers demonstrated in over 100,000 miles of collected data, and demonstrate the performance of our model by inferring: (1) decision at next intersection, (2) route to known destination, and (3) destination given partially traveled route.",
"",
"Many potential pervasive computing applications could use predictions of when a person will be at a certain place. Using a survey and GPS data from 34 participants in 11 households, we develop and test algorithms for predicting when a person will be at home or away. We show that our participants' self-reported home away schedules are not very accurate, and we introduce a probabilistic home away schedule computed from observed GPS data. The computation includes smoothing and a soft schedule template. We show how the probabilistic schedule outperforms both the self-reported schedule and an algorithm based on driving time. We also show how to combine our algorithm with the best part of the drive time algorithm for a slight boost in performance.",
"User mobility has given rise to a variety of Web applications, in which the global positioning system (GPS) plays many important roles in bridging between these applications and end users. As a kind of human behavior, transportation modes, such as walking and driving, can provide pervasive computing systems with more contextual information and enrich a user's mobility with informative knowledge. In this article, we report on an approach based on supervised learning to automatically infer users' transportation modes, including driving, walking, taking a bus and riding a bike, from raw GPS logs. Our approach consists of three parts: a change point-based segmentation method, an inference model and a graph-based post-processing algorithm. First, we propose a change point-based segmentation method to partition each GPS trajectory into separate segments of different transportation modes. Second, from each segment, we identify a set of sophisticated features, which are not affected by differing traffic conditions (e.g., a person's direction when in a car is constrained more by the road than any change in traffic conditions). Later, these features are fed to a generative inference model to classify the segments of different modes. Third, we conduct graph-based postprocessing to further improve the inference performance. This postprocessing algorithm considers both the commonsense constraints of the real world and typical user behaviors based on locations in a probabilistic manner. The advantages of our method over the related works include three aspects. (1) Our approach can effectively segment trajectories containing multiple transportation modes. (2) Our work mined the location constraints from user-generated GPS logs, while being independent of additional sensor data and map information like road networks and bus stops. (3) The model learned from the dataset of some users can be applied to infer GPS data from others. 
Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change-point-based segmentation method and Decision Tree-based inference model, we achieved prediction accuracy greater than 71 percent. Further, using the graph-based post-processing algorithm, the performance attained a 4-percent enhancement.",
"Longitudinal behavioral data generally contains a significant amount of structure. In this work, we identify the structure inherent in daily behavior with models that can accurately analyze, predict, and cluster multimodal data from individuals and communities within the social network of a population. We represent this behavioral structure by the principal components of the complete behavioral dataset, a set of characteristic vectors we have termed eigenbehaviors. In our model, an individual’s behavior over a specific day can be approximated by a weighted sum of his or her primary eigenbehaviors. When these weights are calculated halfway through a day, they can be used to predict the day’s remaining behaviors with 79% accuracy for our test subjects. Additionally, we demonstrate the potential for this dimensionality reduction technique to infer community affiliations within the subjects’ social network by clustering individuals into a “behavior space” spanned by a set of their aggregate eigenbehaviors. These behavior spaces make it possible to determine the behavioral similarity between both individuals and groups, enabling 96% classification accuracy of community affiliations within the population-level social network. Additionally, the distance between individuals in the behavior space can be used as an estimate for relational ties such as friendship, suggesting strong behavioral homophily amongst the subjects. This approach capitalizes on the large amount of rich data previously captured during the Reality Mining study from mobile phones continuously logging location, proximate phones, and communication of 100 subjects at MIT over the course of 9 months. As wearable sensors continue to generate these types of rich, longitudinal datasets, dimensionality reduction techniques such as eigenbehaviors will play an increasingly important role in behavioral research."
]
}
|
1401.0052
|
2950988509
|
Today people increasingly have the opportunity to opt-in to "usage-based" automotive insurance programs for reducing insurance premiums. In these programs, participants install devices in their vehicles that monitor their driving behavior, which raises some privacy concerns. Some devices collect fine-grained speed data to monitor driving habits. Companies that use these devices claim that their approach is privacy-preserving because speedometer measurements do not have physical locations. However, we show that with knowledge of the user's home location, as the insurance companies have, speed data is sufficient to discover driving routes and destinations when trip data is collected over a period of weeks. To demonstrate the real-world applicability of our approach we applied our algorithm, elastic pathing, to data collected over hundreds of driving trips occurring over several months. With this data and our approach, we were able to predict trip destinations to within 250 meters of ground truth in 10% of the traces and within 500 meters in 20% of the traces. This result, combined with the amount of speed data that is being collected by insurance companies, constitutes a substantial breach of privacy because a person's regular driving pattern can be deduced with repeated examples of the same paths with just a few weeks of monitoring.
|
Related work in the field of dead reckoning suggests that speedometer traces do contain some extractable location information. Dead reckoning uses speed or movement data together with a known location to deduce a movement path. It has typically been used for map building with mobile robots @cite_2 or to supplement Global Navigation Satellite Systems (GNSS) such as GPS @cite_41 @cite_20 . When supplementing GNSS data, odometer-based dead reckoning might only be used when GNSS information is unavailable, such as when a vehicle passes through a tunnel or an area with many tall buildings. However, this kind of dead reckoning cannot work without frequent location ground truths, as there is no perfect method to match speed data to turns in a road.
|
{
"cite_N": [
"@cite_41",
"@cite_20",
"@cite_2"
],
"mid": [
"2544782254",
"2142760360",
"2145265734"
],
"abstract": [
"A Kalman filter has been developed to integrate the three positioning systems (differential odometer dead reckoning, map matching, and Global Positioning System or GPS) used in the Automatic Vehicle Location System (AVL 2000) being designed and developed in the Department of Surveying Engineering at the University of Calgary. The system is being targeted for on road applications and incorporates a digital map. The filter has been designed to take into account uncertainties via covariance matrices. In wide-open spaces GPS positioning will dominate, while in zones where the GPS signal is obstructed, dead reckoning will be used as interpolation between GPS position fixes. Simulation studies and covariance analyses have been performed on a test route located in a sector of the city of Calgary.",
"This paper describes the features of an extended Kalman filter algorithm designed to support the navigational function of a real-time vehicle performance and emissions monitoring system currently under development. The Kalman filter is used to process global positioning system (GPS) data enhanced with dead reckoning (DR) in an integrated mode, to provide continuous positioning in built-up areas. The dynamic model and filter algorithms are discussed in detail, followed by the findings based on computer simulations and a limited field trial carried out in the Greater London area. The results demonstrate that use of the extended Kalman filter algorithm enables the integrated system employing GPS and low cost DR devices to meet the required navigation performance of the device under development.",
"A major problem in map building is due to the imprecision of sensor measures. In this paper we propose a technique, called elastic correction, for correcting the dead-reckoning errors made during the exploration of an unknown environment by a robot capable of identifying landmarks. Knowledge of the environment being acquired is modelled by a relational graph whose vertices and arcs represent, respectively, landmarks and inter-landmark routes. Elastic correction is based on an analogy between this graph and a mechanical structure: the map is regarded as a truss where each route is an elastic bar and each landmark a node. Errors are corrected as a result of the deformations induced from the forces arising within the structure as inconsistent measures are taken. The uncertainty on odometry is modelled by the elasticity parameters characterizing the structure."
]
}
|
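The dead-reckoning step described in the related_work paragraph above can be sketched very simply. This is a minimal, hypothetical illustration (not code from the cited systems): given a known start point, each speed-and-heading sample is integrated into a position update. Note that heading is exactly what a speedometer-only trace lacks, which is why pure speed data cannot be dead-reckoned this way.

```python
import math

def dead_reckon(start, speeds, headings, dt=1.0):
    """Integrate speed (m/s) and heading (radians, 0 = east) samples from a
    known start point into an estimated 2-D path -- the basic dead-reckoning
    update used to bridge GNSS outages."""
    x, y = start
    path = [(x, y)]
    for v, h in zip(speeds, headings):
        x += v * dt * math.cos(h)
        y += v * dt * math.sin(h)
        path.append((x, y))
    return path

# Drive east for two seconds at 10 m/s, then turn north for one second.
path = dead_reckon((0.0, 0.0), speeds=[10, 10, 10],
                   headings=[0.0, 0.0, math.pi / 2])
print(path[-1])  # ends roughly 20 m east and 10 m north of the start
```

In practice each update also accumulates sensor error, which is why real systems fuse this estimate with GPS fixes (e.g., via a Kalman filter) rather than integrating indefinitely.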
1401.0052
|
2950988509
|
Today people increasingly have the opportunity to opt-in to "usage-based" automotive insurance programs for reducing insurance premiums. In these programs, participants install devices in their vehicles that monitor their driving behavior, which raises some privacy concerns. Some devices collect fine-grained speed data to monitor driving habits. Companies that use these devices claim that their approach is privacy-preserving because speedometer measurements do not have physical locations. However, we show that with knowledge of the user's home location, as the insurance companies have, speed data is sufficient to discover driving routes and destinations when trip data is collected over a period of weeks. To demonstrate the real-world applicability of our approach we applied our algorithm, elastic pathing, to data collected over hundreds of driving trips occurring over several months. With this data and our approach, we were able to predict trip destinations to within 250 meters of ground truth in 10% of the traces and within 500 meters in 20% of the traces. This result, combined with the amount of speed data that is being collected by insurance companies, constitutes a substantial breach of privacy because a person's regular driving pattern can be deduced with repeated examples of the same paths with just a few weeks of monitoring.
|
Map building with robots is interesting because there are often no exact ground truths, only location estimates. @cite_2 describe a method that can assemble maps from arbitrary distance estimates to and from landmarks. The problem of deducing a traveled path from only speed data and a starting point is, in some ways, the reverse of this map building problem: we already have the map, but have difficulty identifying landmarks (in this case turns) from just speed data.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2145265734"
],
"abstract": [
"A major problem in map building is due to the imprecision of sensor measures. In this paper we propose a technique, called elastic correction, for correcting the dead-reckoning errors made during the exploration of an unknown environment by a robot capable of identifying landmarks. Knowledge of the environment being acquired is modelled by a relational graph whose vertices and arcs represent, respectively, landmarks and inter-landmark routes. Elastic correction is based on an analogy between this graph and a mechanical structure: the map is regarded as a truss where each route is an elastic bar and each landmark a node. Errors are corrected as a result of the deformations induced from the forces arising within the structure as inconsistent measures are taken. The uncertainty on odometry is modelled by the elasticity parameters characterizing the structure."
]
}
|
1401.0052
|
2950988509
|
Today people increasingly have the opportunity to opt-in to "usage-based" automotive insurance programs for reducing insurance premiums. In these programs, participants install devices in their vehicles that monitor their driving behavior, which raises some privacy concerns. Some devices collect fine-grained speed data to monitor driving habits. Companies that use these devices claim that their approach is privacy-preserving because speedometer measurements do not have physical locations. However, we show that with knowledge of the user's home location, as the insurance companies have, speed data is sufficient to discover driving routes and destinations when trip data is collected over a period of weeks. To demonstrate the real-world applicability of our approach we applied our algorithm, elastic pathing, to data collected over hundreds of driving trips occurring over several months. With this data and our approach, we were able to predict trip destinations to within 250 meters of ground truth in 10% of the traces and within 500 meters in 20% of the traces. This result, combined with the amount of speed data that is being collected by insurance companies, constitutes a substantial breach of privacy because a person's regular driving pattern can be deduced with repeated examples of the same paths with just a few weeks of monitoring.
|
Ubiquitous computing devices help people during their daily lives in different contexts, for instance by monitoring our activity and fitness levels or interesting social events that we have attended. However, the ubiquitous nature of these monitoring devices has security and privacy ramifications that are not always considered. For example, a lack of sufficient protection in the Nike+iPod sport kit allowed other people to wirelessly monitor and track users during their daily activities @cite_4 . Although these kinds of devices clearly appeal to consumers, the potential loss of privacy is a growing concern.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2116762892"
],
"abstract": [
"We analyze three new consumer electronic gadgets in order to gauge the privacy and security trends in mass-market UbiComp devices. Our study of the Slingbox Pro uncovers a new information leakage vector for encrypted streaming multimedia. By exploiting properties of variable bitrate encoding schemes, we show that a passive adversary can determine with high probability the movie that a user is watching via her Slingbox, even when the Slingbox uses encryption. We experimentally evaluated our method against a database of over 100 hours of network traces for 26 distinct movies. Despite an opportunity to provide significantly more location privacy than existing devices, like RFIDs, we find that an attacker can trivially exploit the Nike+iPod Sport Kit's design to track users; we demonstrate this with a GoogleMaps-based distributed surveillance system. We also uncover security issues with the way Microsoft Zunes manage their social relationships. We show how these products' designers could have significantly raised the bar against some of our attacks. We also use some of our attacks to motivate fundamental security and privacy challenges for future UbiComp devices."
]
}
|
1401.0052
|
2950988509
|
Today people increasingly have the opportunity to opt-in to "usage-based" automotive insurance programs for reducing insurance premiums. In these programs, participants install devices in their vehicles that monitor their driving behavior, which raises some privacy concerns. Some devices collect fine-grained speed data to monitor driving habits. Companies that use these devices claim that their approach is privacy-preserving because speedometer measurements do not have physical locations. However, we show that with knowledge of the user's home location, as the insurance companies have, speed data is sufficient to discover driving routes and destinations when trip data is collected over a period of weeks. To demonstrate the real-world applicability of our approach we applied our algorithm, elastic pathing, to data collected over hundreds of driving trips occurring over several months. With this data and our approach, we were able to predict trip destinations to within 250 meters of ground truth in 10% of the traces and within 500 meters in 20% of the traces. This result, combined with the amount of speed data that is being collected by insurance companies, constitutes a substantial breach of privacy because a person's regular driving pattern can be deduced with repeated examples of the same paths with just a few weeks of monitoring.
|
In the specific area of usage-based insurance, the privacy concerns of these insurance programs have been studied before @cite_28 . However, that work dates back to schemes which would send raw location (GPS) coordinates to either insurance providers or brokers, and it proposed a cryptographic scheme, PriPAYD @cite_28 , to address the problem. Our work shows that speedometer-based solutions, which were not considered in that work, are not privacy-preserving either. Finally, most closely related to our work are side-channel attacks that use accelerometer data from smartphones @cite_50 . Projects such as ACComplice @cite_25 and AutoWitness @cite_9 have used accelerometers and gyroscopes to localize drivers. However, smartphone sensor data can be used to directly detect when turns occur. In contrast, we have only a time series of speed data available. While this speed data might indicate when a vehicle stops or slows down at an intersection, unlike accelerometer data it does not indicate whether any turn is taken.
|
{
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_25",
"@cite_50"
],
"mid": [
"2103503676",
"2096416669",
"2032616120",
"2107816859"
],
"abstract": [
"Pay-As-You-Drive insurance schemes are establishing themselves as the future of car insurance. However, their current implementations, in which fine-grained location data are sent to insurers, entail a serious privacy risk. We present PriPAYD, a system where the premium calculations are performed locally in the vehicle, and only aggregated data are sent to the insurance company, without leaking location information. Our design is based on well-understood security techniques that ensure its correct functioning. We discuss the viability of PriPAYD in terms of cost, security, and ease of certification. We demonstrate that PriPAYD is possible through a proof-of-concept implementation that shows how privacy can be obtained at a very reasonable extra cost.",
"We present AutoWitness, a system to deter, detect, and track personal property theft, improve historically dismal stolen property recovery rates, and disrupt stolen property distribution networks. A property owner embeds a small tag inside the asset to be protected, where the tag lies dormant until it detects vehicular movement. Once moved, the tag uses inertial sensor-based dead reckoning to estimate position changes, but to reduce integration errors, the relative position is reset whenever the sensors indicate the vehicle has stopped. The sequence of movements, stops, and turns are logged in compact form and eventually transferred to a server using a cellular modem after both sufficient time has passed (to avoid detection) and RF power is detectable (hinting cellular access may be available). Eventually, the trajectory data are sent to a server which attempts to match a path to the observations. The algorithm uses a Hidden Markov Model of city streets and Viterbi decoding to estimate the most likely path. The proposed design leverages low-power radios and inertial sensors, is immune to intransit cloaking, and supports post hoc path reconstruction. Our prototype demonstrates technical viability of the design; the volume market forces driving machine-to-machine communications will soon make the design economically viable.",
"The security and privacy risks posed by smartphone sensors such as microphones and cameras have been well documented. However, the importance of accelerometers have been largely ignored. We show that accelerometer readings can be used to infer the trajectory and starting point of an individual who is driving. This raises concerns for two main reasons. First, unauthorized access to an individual's location is a serious invasion of privacy and security. Second, current smartphone operating systems allow any application to observe accelerometer readings without requiring special privileges. We demonstrate that accelerometers can be used to locate a device owner to within a 200 meter radius of the true location. Our results are comparable to the typical accuracy for handheld global positioning systems.",
"Modern smartphones are equipped with a plethora of sensors that enable a wide range of interactions, but some of these sensors can be employed as a side channel to surreptitiously learn about user input. In this paper, we show that the accelerometer sensor can also be employed as a high-bandwidth side channel; particularly, we demonstrate how to use the accelerometer sensor to learn user tap- and gesture-based input as required to unlock smartphones using a PIN password or Android's graphical password pattern. Using data collected from a diverse group of 24 users in controlled (while sitting) and uncontrolled (while walking) settings, we develop sample rate independent features for accelerometer readings based on signal processing and polynomial fitting techniques. In controlled settings, our prediction model can on average classify the PIN entered 43% of the time and pattern 73% of the time within 5 attempts when selecting from a test set of 50 PINs and 50 patterns. In uncontrolled settings, while users are walking, our model can still classify 20% of the PINs and 40% of the patterns within 5 attempts. We additionally explore the possibility of constructing an accelerometer-reading-to-input dictionary and find that such dictionaries would be greatly challenged by movement-noise and cross-user training."
]
}
|
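The contrast drawn in the last related_work paragraph -- speed data reveals stops and slowdowns but not turns -- can be made concrete with a small sketch. This is a hypothetical illustration, not the elastic pathing algorithm itself: it merely extracts runs of near-zero speed from a 1 Hz speed trace, the kind of candidate intersection or stop-sign events that a speed-only attack would then try to match against a road map.

```python
def detect_stops(speeds, threshold=0.5, min_len=3):
    """Find runs of near-zero speed (m/s) in a 1 Hz speed trace.
    Returns (start_index, length) pairs for runs at least min_len
    samples long -- candidate stops at intersections or stop signs."""
    stops, run_start = [], None
    for i, v in enumerate(speeds + [threshold + 1]):  # sentinel closes a trailing run
        if v < threshold:
            if run_start is None:
                run_start = i
        elif run_start is not None:
            if i - run_start >= min_len:
                stops.append((run_start, i - run_start))
            run_start = None
    return stops

trace = [12, 8, 3, 0, 0, 0, 0, 5, 11, 0, 0, 13]
print(detect_stops(trace))  # [(3, 4)] -- only the first stop is long enough
```

Crucially, nothing here says whether the vehicle turned left, turned right, or went straight after each stop; recovering that requires matching the speed profile between stops against map geometry, which is the hard part the paper addresses.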