| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1309.2676
|
2081300664
|
Compressive sampling (CoSa) has provided many methods for signal recovery of signals compressible with respect to an orthonormal basis. However, modern applications have sparked the emergence of approaches for signals not sparse in an orthonormal basis but in some arbitrary, perhaps highly overcomplete, dictionary. Recently, several "signal-space" greedy methods have been proposed to address signal recovery in this setting. However, such methods inherently rely on the existence of fast and accurate projections which allow one to identify the most relevant atoms in a dictionary for any given signal, up to a very strict accuracy. When the dictionary is highly overcomplete, no such projections are currently known; the requirements on such projections do not even hold for incoherent or well-behaved dictionaries. In this work, we provide an alternate analysis for signal space greedy methods which enforce assumptions on these projections which hold in several settings including those when the dictionary is incoherent or structurally coherent. These results align more closely with traditional results in the standard CoSa literature and improve upon previous work in the signal space setting.
|
The importance of using and analyzing two separate projection schemes in sparse recovery is also discussed in an independent line of work by Hegde et al. @cite_8 . There, the authors call the two projections the "head" and "tail" projections, and analyze a variant of Iterative Hard Thresholding (IHT) for signal recovery under the Model-RIP, a generalization of the @math -RIP. In fact, they show that without a projection satisfying essentially the second inequality of , conventional IHT will fail.
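As a rough illustration of how the two projections enter the iteration, here is a minimal Python sketch of an IHT step with approximate projections in the spirit of the head/tail scheme. The function name `approx_iht` and the oracles `head` and `tail` are hypothetical stand-ins for the model-specific projections; the update rule is a simplified sketch under these assumptions, not the exact algorithm of @cite_8 :

```python
import numpy as np

def approx_iht(y, A, head, tail, iters=50):
    """Sketch of IHT with two separate approximate projections.

    y    : measurement vector
    A    : measurement matrix
    head : oracle capturing a large fraction of a vector's energy
           within the model (the "head" projection)
    tail : oracle returning a near-best model approximation
           (the "tail" projection)
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (y - A @ x)   # gradient of the least-squares objective
        x = tail(x + head(g))   # head on the update direction, tail to
                                # project the iterate back onto the model
    return x
```

The analysis alluded to above hinges on which approximation guarantee each oracle satisfies; with only one of the two guarantees, the iteration can fail to converge to the true signal.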
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2962761750"
],
"abstract": [
"Abstract Compressive sensing (CS) has recently emerged as a framework for efficiently capturing signals that are sparse or compressible in an appropriate basis. While often motivated as an alternative to Nyquist-rate sampling, there remains a gap between the discrete, finite-dimensional CS framework and the problem of acquiring a continuous-time signal. In this paper, we attempt to bridge this gap by exploiting the Discrete Prolate Spheroidal Sequences (DPSSʼs), a collection of functions that trace back to the seminal work by Slepian, Landau, and Pollack on the effects of time-limiting and bandlimiting operations. DPSSʼs form a highly efficient basis for sampled bandlimited functions; by modulating and merging DPSS bases, we obtain a dictionary that offers high-quality sparse approximations for most sampled multiband signals. This multiband modulated DPSS dictionary can be readily incorporated into the CS framework. We provide theoretical guarantees and practical insight into the use of this dictionary for recovery of sampled multiband signals from compressive measurements."
]
}
|
1309.2676
|
2081300664
|
Compressive sampling (CoSa) has provided many methods for signal recovery of signals compressible with respect to an orthonormal basis. However, modern applications have sparked the emergence of approaches for signals not sparse in an orthonormal basis but in some arbitrary, perhaps highly overcomplete, dictionary. Recently, several "signal-space" greedy methods have been proposed to address signal recovery in this setting. However, such methods inherently rely on the existence of fast and accurate projections which allow one to identify the most relevant atoms in a dictionary for any given signal, up to a very strict accuracy. When the dictionary is highly overcomplete, no such projections are currently known; the requirements on such projections do not even hold for incoherent or well-behaved dictionaries. In this work, we provide an alternate analysis for signal space greedy methods which enforce assumptions on these projections which hold in several settings including those when the dictionary is incoherent or structurally coherent. These results align more closely with traditional results in the standard CoSa literature and improve upon previous work in the signal space setting.
|
It is also important to mention the relation of the @math -OMP and @math -thresholding algorithms (Algorithms and ) to the methods proposed in @cite_1 @cite_19 . The notion of excluding coherent atoms while building the representation is also used in these works. In particular, without the extension step, the @math -OMP and @math -thresholding techniques are very similar to the Band-Excluded OMP (BOMP) and Band-Excluded Matched Thresholding (BMT) methods in @cite_1 and the heuristic coherence-inhibiting sparse approximation strategy in @cite_40 . As we have seen in , the extension step degrades performance in the case of separated coefficients, since a larger support is processed and the RIP conditions therefore become harder to satisfy. It is likely that the techniques in @cite_1 @cite_19 are better suited to separated coefficient vectors.
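For intuition, the band-exclusion idea can be sketched in a few lines of Python: at each greedy step, atoms too coherent with the already-selected ones are masked out before the next selection. The threshold `eta` and the masking rule below are simplifying assumptions for illustration, not the exact band definition of @cite_1 :

```python
import numpy as np

def band_excluded_omp(y, D, k, eta=0.9):
    """OMP variant that skips atoms coherent with the current support.

    D   : dictionary with unit-norm columns
    k   : target sparsity
    eta : coherence level defining the excluded "band"
    """
    gram = np.abs(D.T @ D)            # pairwise atom coherences
    residual, support = y.copy(), []
    coeffs = np.zeros(0)
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        for j in support:             # band exclusion: mask atoms too
            corr[gram[j] > eta] = 0.0 # coherent with selected atom j
        support.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    return support, coeffs
```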
|
{
"cite_N": [
"@cite_19",
"@cite_40",
"@cite_1"
],
"mid": [
"",
"2133285942",
"2169892604"
],
"abstract": [
"",
"Compressive sensing (CS) is a new approach to simultaneous sensing and compression of sparse and compressible signals based on randomized dimensionality reduction. To recover a signal from its compressive measurements, standard CS algorithms seek the sparsest signal in some discrete basis or frame that agrees with the measurements. A great many applications feature smooth or modulated signals that are frequency-sparse and can be modeled as a superposition of a small number of sinusoids; for such signals, the discrete Fourier transform (DFT) basis is a natural choice for CS recovery. Unfortunately, such signals are only sparse in the DFT domain when the sinusoid frequencies live precisely at the centers of the DFT bins; when this is not the case, CS recovery performance degrades signicantly. In this paper, we introduce the spectral CS (SCS) recovery framework for arbitrary frequencysparse signals. The key ingredients are an over-sampled DFT frame and a restricted unionof-subspaces signal model that inhibits closely spaced sinusoids. We demonstrate that SCS signicantly outperforms current state-of-the-art CS algorithms based on the DFT while providing provable bounds on the number of measurements required for stable recovery. We also leverage line spectral estimation methods (specically Thomson’s multitaper method",
"Highly coherent sensing matrices arise in discretization of continuum imaging problems such as radar and medical imaging when the grid spacing is below the Rayleigh threshold. Algorithms based on techniques of band exclusion (BE) and local optimization (LO) are proposed to deal with such coherent sensing matrices. These techniques are embedded in the existing compressed sensing algorithms, such as Orthogonal Matching Pursuit (OMP), Subspace Pursuit (SP), Iterative Hard Thresholding (IHT), Basis Pursuit (BP), and Lasso, and result in the modified algorithms BLOOMP, BLOSP, BLOIHT, BP-BLOT, and Lasso-BLOT, respectively. Under appropriate conditions, it is proved that BLOOMP can reconstruct sparse, widely separated objects up to one Rayleigh length in the Bottleneck distance independent of the grid spacing. One of the most distinguishing attributes of BLOOMP is its capability of dealing with large dynamic ranges. The BLO-based algorithms are systematically tested with respect to four performance metrics: dynamic range, noise stability, sparsity, and resolution. With respect to dynamic range and noise stability, BLOOMP is the best performer. With respect to sparsity, BLOOMP is the best performer for high dynamic range, while for dynamic range near unity BP-BLOT and Lasso-BLOT with the optimized regularization parameter have the best performance. In the noiseless case, BP-BLOT has the highest resolving power up to certain dynamic range. The algorithms BLOSP and BLOIHT are good alternatives to BLOOMP and BP Lasso-BLOT: they are faster than both BLOOMP and BP Lasso-BLOT and share, to a lesser degree, BLOOMP's amazing attribute with respect to dynamic range. Detailed comparisons with the algorithms Spectral Iterative Hard Thresholding (SIHT) and the frame-adapted BP demonstrate the superiority of the BLO-based algorithms for the problem of sparse approximation in terms of highly coherent, redundant dictionaries."
]
}
|
1309.2444
|
2763550103
|
Federations among sets of Cloud Providers (CPs), whereby a set of CPs agree to mutually use their own resources to run the VMs of other CPs, are considered a promising solution to the problem of reducing the energy cost. In this paper, we address the problem of federation formation for a set of CPs, whose solution is necessary to exploit the potential of cloud federations for the reduction of the energy bill. We devise a distributed algorithm, based on cooperative game theory, that allows a set of CPs to cooperatively set up their federations in such a way that their individual profit is increased with respect to the case in which they work in isolation, and we show that, by using our algorithm and the proposed CPs' utility function, they are able to self-organize into Nash-stable federations and, by means of iterated executions, to adapt themselves to environmental changes. Numerical results are presented to demonstrate the effectiveness of the proposed algorithm.
|
Recently, the concept of cloud federations @cite_2 @cite_13 has been proposed as a way to provide individual CPs with more flexibility when allocating on-demand workloads. Existing work on cloud federations has mainly focused on the development of architectural models for federations @cite_5 , and of mechanisms providing specific functionalities (e.g., workload management @cite_40 @cite_22 , accounting and billing @cite_35 , and pricing @cite_33 @cite_27 @cite_15 @cite_20 ).
|
{
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_33",
"@cite_40",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_20"
],
"mid": [
"2160655991",
"2114850763",
"2038973419",
"2138602095",
"32829342",
"1983915551",
"2005865975",
"2133848151",
"2122802010",
"2144636394"
],
"abstract": [
"Emerging Cloud computing infrastructures provide computing resources on demand based on postpaid principles. For example, the RESERVOIR project develops an infrastructure capable of delivering elastic capacity that can automatically be increased or decreased in order to cost-efficiently fulfill established Service Level Agreements. This infrastructure also makes it possible for a data center to extend its total capacity by subcontracting additional resources from collaborating data centers, making the infrastructure a federation of Clouds. For accounting and billing, such infrastructures call for novel approaches to perform accounting for capacity that varies over time and for services (or more precisely virtual machines) that migrate between physical machines or even between data centers. For billing, needs arise for new approaches to simultaneously manage postpaid and prepaid payment schemes for capacity that varies over time in response to user needs. In this paper, we outline usage scenarios and a set of requirements for such infrastructures, and propose an accounting and billing architecture to be used within RESERVOIR. Even though the primary focus for this architecture is accounting and billing between resource consumers and infrastructure provides, future support for inter-site billing is also taken into account.",
"Cloud infrastructure providers may form Cloud federations to cope with peaks in resource demand and to make large-scale service management simpler for service providers. To realize Cloud federations, a number of technical and managerial difficulties need to be solved. We present ongoing work addressing three related key management topics, namely, specification, scheduling, and monitoring of services. Service providers need to be able to influence how their resources are placed in Cloud federations, as federations may cross national borders or include companies in direct competition with the service provider. Based on related work in the RESERVOIR project, we propose a way to define service structure and placement restrictions using hierarchical directed acyclic graphs. We define a model for scheduling in Cloud federations that abides by the specified placement constraints and minimizes the risk of violating Service-Level Agreements. We present a heuristic that helps the model determine which virtual machines (VMs) are suitable candidates for migration. To aid the scheduler, and to provide unified data to service providers, we also propose a monitoring data distribution architecture that introduces cross-site compatibility by means of semantic metadata annotations.",
"Distributed resource allocation is a very important and complex problem in emerging horizontal dynamic cloud federation (HDCF) platforms, where different cloud providers (CPs) collaborate dynamically to gain economies of scale and enlargements of their virtual machine (VM) infrastructure capabilities in order to meet consumer requirements. HDCF platforms differ from the existing vertical supply chain federation (VSCF) models in terms of establishing federation and dynamic pricing. There is a need to develop algorithms that can capture this complexity and easily solve distributed VM resource allocation problem in a HDCF platform. In this paper, we propose a cooperative game-theoretic solution that is mutually beneficial to the CPs. It is shown that in non-cooperative environment, the optimal aggregated benefit received by the CPs is not guaranteed. We study two utility maximizing cooperative resource allocation games in a HDCF environment. We use price-based resource allocation strategy and present both centralized and distributed algorithms to find optimal solutions to these games. Various simulations were carried out to verify the proposed algorithms. The simulation results demonstrate that the algorithms are effective, showing robust performance for resource allocation and requiring minimal computation time.",
"Balava is a new system for managing computations that span multiple clouds and involve data with confidentiality constraints. This paper describes the design, implementation and initial performance evaluation of Balava building-blocks. We detail the run-time developed to interconnect private and public clouds, and present a storage overlay built on top of this run-time. To support low-overhead execution of Balava computations, we are investigating alternative approaches to virtualization. We present a new hyper visor that supports light-weight virtual environments, while also preserving application binary interfaces.",
"In order to answer the question whether or not to utilize the cloud for processing, this paper aims at identifying characteristics of potential cloud beneficiaries and advisable actions to actually gain financial benefits. A game-theoretic model of an Infrastructure-as-a-Service (IaaS) cloud market, covering dynamics of pricing and usage, is suggested. Incorporating the possibility of hybrid clouds (clouds plus own infrastructure) into this model turns out essential for cloud computing being significantly in favor of not only the provider but the client as well. Parameters like load profiles and economy of scale have a huge effect on likely future pricing as well as on a cost-optimal split-up of client demand between a client's own data center and a public cloud service.",
"As cloud computing becomes more predominant, the problem of scalability has become critical for cloud computing providers. The cloud paradigm is attractive because it offers a dramatic reduction in capital and operation expenses for consumers.",
"We present fundamental challenges for scalable and dependable service platforms and architectures that enable flexible and dynamic provisioning of cloud services. Our findings are incorporated in a toolkit targeting the cloud service and infrastructure providers. The innovations behind the toolkit are aimed at optimizing the whole service life cycle, including service construction, deployment, and operation, on a basis of aspects such as trust, risk, eco-efficiency and cost. Notably, adaptive self-preservation is crucial to meet predicted and unforeseen changes in resource requirements. By addressing the whole service life cycle, taking into account several cloud architectures, and by taking a holistic approach to sustainable service provisioning, the toolkit aims to provide a foundation for a reliable, sustainable, and trustful cloud computing industry.",
"Current large distributed systems allow users to share and trade resources. In cloud computing, users purchase different types of resources from one or more resource providers using a fixed pricing scheme. Federated clouds, a topic of recent interest, allows different cloud providers to share resources for increased scalability and reliability. However, users and providers of cloud resources are rational and maximize their own interest when consuming and contributing shared resources. In this paper, we present a dyanmic pricing scheme suitable for rational users requests containing multiple resource types. Using simulations, we compare the efficiency of our proposed strategy-proof dynamic scheme with fixed pricing, and show that user welfare and the percentage of successful requests is increased by using dynamic pricing.",
"As a key component in a modern datacenter, the cloud operating system is responsible for managing the physical and virtual infrastructure, orchestrating and commanding service provisioning and deployment, and providing federation capabilities for accessing and deploying virtual resources in remote cloud infrastructures.",
"Cloud Federation is a recent paradigm that helps Infrastructure as a Service (IaaS) providers to overcome resource limitation during spikes in demand for Virtual Machines (VMs) by outsourcing requests to other federation members. IaaS providers also have the option of terminating spot VMs, i.e, cheaper VMs that can be canceled to free resources for more profitable VM requests. By both approaches, providers can expect to reject less profitable requests. For IaaS providers, pricing and profit are two important factors, in addition to maintaining a high Quality of Service (QoS) and utilization of their resources to remain in the business. For this, a clear understanding of the usage pattern, types of requests, and infrastructure costs are necessary while making decisions to terminate spot VMs, outsourcing or contributing to the federation. In this paper, we propose policies that help in the decision-making process to increase resources utilization and profit. Simulation results indicate that the proposed policies enhance the profit, utilization, and QoS (smaller number of rejected VM requests) in a Cloud federation environment."
]
}
|
1309.2444
|
2763550103
|
Federations among sets of Cloud Providers (CPs), whereby a set of CPs agree to mutually use their own resources to run the VMs of other CPs, are considered a promising solution to the problem of reducing the energy cost. In this paper, we address the problem of federation formation for a set of CPs, whose solution is necessary to exploit the potential of cloud federations for the reduction of the energy bill. We devise a distributed algorithm, based on cooperative game theory, that allows a set of CPs to cooperatively set up their federations in such a way that their individual profit is increased with respect to the case in which they work in isolation, and we show that, by using our algorithm and the proposed CPs' utility function, they are able to self-organize into Nash-stable federations and, by means of iterated executions, to adapt themselves to environmental changes. Numerical results are presented to demonstrate the effectiveness of the proposed algorithm.
|
To the best of our knowledge, very little work has been carried out to jointly tackle the problem of dynamically forming stable cloud federations for energy-aware resource provisioning. Indeed, much of the existing work focuses only on a single aspect of the problem. In @cite_29 , the design and implementation of a VM scheduler for a federation of CPs is presented. In addition to managing the resources local to each CP, the scheduler is able to decide when to rent resources from other CPs, when to lease its own idle resources to other CPs, and when to turn local physical resources on or off. Unlike our work, however, it does not consider the problem of forming stable CP federations. In @cite_42 , a cooperative game-theoretic model for federation formation and VM management is proposed; there, federation formation among CPs is analyzed using the concept of network games, but the energy minimization problem is not considered.
|
{
"cite_N": [
"@cite_29",
"@cite_42"
],
"mid": [
"2160759477",
"2014055567"
],
"abstract": [
"Resource provisioning in Cloud providers is a challenge because of the high variability of load over time. On the one hand, the providers can serve most of the requests owning only a restricted amount of resources, but this forces to reject customers during peak hours. On the other hand, valley hours incur in under-utilization of the resources, which forces the providers to increase their prices to be profitable. Federation overcomes these limitations and allows providers to dynamically outsource resources to others in response to demand variations. Furthermore, it allows providers with underused resources to rent them to other providers. Both techniques make the provider getting more profit when used adequately. Federation of Cloud providers requires having a clear understanding of the consequences of each decision. In this paper, we present a characterization of providers operating in a federated Cloud which helps to choose the most convenient decision depending on the environment conditions. These include when to outsource to other providers, rent free resources to other providers (i.e., insourcing), or turn off unused nodes to save power. We characterize these decisions as a function of several parameters and implement a federated provider that uses this characterization to exploit federation. Finally, we evaluate the profitability of using these techniques using the data from a real provider.",
"In cloud computing, organizations can form cooperation to share the available resource to reduce the cost. This is referred to as the multi-organization cloud computing environment. In this paper, we address the issues of virtual machine management and cooperation formation in such an environment. First, for the cooperative organizations, an optimization model is formulated and solved for the optimal virtual machine allocation so that the total cost is minimized. Then, the cost management based on cooperative game theory is applied to obtain the fair share of the cost. Second, the cooperation formation among organizations is analyzed using the network game. With the dynamic cooperation formation, the stable cooperation structure is obtained. Both cooperative virtual machine management and cooperation formation are intertwined, in which the proposed optimization and game models can be used to obtain the solution of the rational organizations to minimize their own costs."
]
}
|
1309.2444
|
2763550103
|
Federations among sets of Cloud Providers (CPs), whereby a set of CPs agree to mutually use their own resources to run the VMs of other CPs, are considered a promising solution to the problem of reducing the energy cost. In this paper, we address the problem of federation formation for a set of CPs, whose solution is necessary to exploit the potential of cloud federations for the reduction of the energy bill. We devise a distributed algorithm, based on cooperative game theory, that allows a set of CPs to cooperatively set up their federations in such a way that their individual profit is increased with respect to the case in which they work in isolation, and we show that, by using our algorithm and the proposed CPs' utility function, they are able to self-organize into Nash-stable federations and, by means of iterated executions, to adapt themselves to environmental changes. Numerical results are presented to demonstrate the effectiveness of the proposed algorithm.
|
In @cite_0 , a profit-maximizing game-based mechanism enabling dynamic cloud federation formation is proposed. The dynamic federation formation problem is modeled as a hedonic game (as in our approach), and the federations are computed by means of a merge-split algorithm. There are several important differences between that work and ours: (1) we focus on the stability of individuals rather than of groups, (2) we propose a decentralized algorithm, (3) we demonstrate the stability of the obtained federations, and (4) we use the Shapley value instead of the normalized Banzhaf value (used in @cite_0 ), since the latter does not satisfy some important properties @cite_3 .
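For reference, the Shapley value of player i in a TU-game (N, v) with n = |N| players is the following standard expression (textbook background from cooperative game theory, not a formula taken from @cite_0 or @cite_3 ):

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,(n - |S| - 1)!}{n!}\,
  \bigl( v(S \cup \{i\}) - v(S) \bigr)
```

Unlike the normalized Banzhaf value, the Shapley value is efficient by construction: the payoffs sum to the worth of the grand coalition, i.e. \(\sum_{i \in N} \phi_i(v) = v(N)\), which is among the properties alluded to above.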
|
{
"cite_N": [
"@cite_0",
"@cite_3"
],
"mid": [
"2108124928",
"1995179446"
],
"abstract": [
"We model the cloud federation formation problem using concepts from coalitional game theory by considering the cooperation of the cloud providers in providing the requested VM instances. We design a mechanism that enables the cloud providers to dynamically form a cloud federation maximizing their profit. Furthermore, the mechanism guarantees that the cloud federation structure is stable, that is, the cloud providers do not have incentives to break away from the current federation and join some other federation.",
"A cooperative game with transferable utilities - or simply a TU-game - describes a situation in which players can obtain certain payoffs by cooperation. A solution concept for these games is a function which assigns to every such a game a distribution of payoffs over the players in the game. Famous solution concepts for TU-games are the Shapley value and the Banzhaf value. Both solution concepts have been axiomatized in various ways. An important difference between these two solution concepts is the fact that the Shapley value always distributes the payoff that can be obtained by the grand coalition' consisting of all players cooperating together while the Banzhaf value does not satisfy this property, i.e., the Banzhaf value is not efficient. In this paper we consider the normalized Banzhaf value which distributes the payoff that can be obtained by the grand coalition' proportional to the Banzhaf values of the players. This value does not satisfy certain axioms underlying the Banzhaf value. In this paper we introduce some new axioms that characterize the normalized Banzhaf value. We also provide an axiomatization of the Shapley value using similar axioms."
]
}
|
1309.2444
|
2763550103
|
Federations among sets of Cloud Providers (CPs), whereby a set of CPs agree to mutually use their own resources to run the VMs of other CPs, are considered a promising solution to the problem of reducing the energy cost. In this paper, we address the problem of federation formation for a set of CPs, whose solution is necessary to exploit the potential of cloud federations for the reduction of the energy bill. We devise a distributed algorithm, based on cooperative game theory, that allows a set of CPs to cooperatively set up their federations in such a way that their individual profit is increased with respect to the case in which they work in isolation, and we show that, by using our algorithm and the proposed CPs' utility function, they are able to self-organize into Nash-stable federations and, by means of iterated executions, to adapt themselves to environmental changes. Numerical results are presented to demonstrate the effectiveness of the proposed algorithm.
|
In @cite_6 , the problem of sharing unused capacity in a federation of CPs for the VM spot market is formulated as a non-cooperative repeated game. Specifically, using a Markov model to predict future non-spot workload, the authors introduce a set of capacity-sharing strategies that maximize the federation's long-term revenue, and they propose a dynamic programming algorithm to find the allocation rules needed to achieve it. Our work can complement this approach by providing a solution to the formation of CP federations for non-spot VM instances.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2087455320"
],
"abstract": [
"This paper presents a novel economic model to regulate capacity sharing in a federation of hybrid cloud providers (CPs). The proposed work models the interactions among the CPs as a repeated game among selfish players that aim at maximizing their profit by selling their unused capacity in the spot market but are uncertain of future workload fluctuations. The proposed work first establishes that the uncertainty in future revenue can act as a participation incentive to sharing in the repeated game. We, then, demonstrate how an efficient sharing strategy can be obtained via solving a simple dynamic programming problem. The obtained strategy is a simple update rule that depends only on the current workloads and a single variable summarizing past interactions. In contrast to existing approaches, the model incorporates historical and expected future revenue as part of the virtual machine (VM) sharing decision. Moreover, these decisions are not enforced neither by a centralized broker nor by predefined agreements. Rather, the proposed model employs a simple grim trigger strategy where a CP is threatened by the elimination of future VM hosting by other CPs. Simulation results demonstrate the performance of the proposed model in terms of the increased profit and the reduction in the variance in the spot market VM availability and prices."
]
}
|
1309.2394
|
2053054452
|
This paper is concerned with the complexity analysis of constructor term rewrite systems and its ramification in implicit computational complexity. We introduce a path order with multiset status, the polynomial path order POP ∗ , that is applicable in two related, but distinct contexts. On the one hand POP ∗ induces polynomial innermost runtime complexity and hence may serve as a syntactic, and fully automatable, method to analyse the innermost runtime complexity of term rewrite systems. On the other hand POP ∗ provides an order-theoretic characterisation of the polytime computable functions: the polytime computable functions are exactly the functions computable by an orthogonal constructor TRS compatible with POP ∗ .
|
Polynomial complexity analysis is an active research area in rewriting. Starting from @cite_13 , interest in polynomial complexity analysis has grown considerably over the last years; see for example @cite_5 @cite_56 @cite_24 @cite_21 @cite_53 . This is partly due to the incorporation of a dedicated category for complexity into the annual termination competition (TERMCOMP), http://termcomp.uibk.ac.at .
|
{
"cite_N": [
"@cite_53",
"@cite_21",
"@cite_56",
"@cite_24",
"@cite_5",
"@cite_13"
],
"mid": [
"172021561",
"1913470731",
"1517466677",
"38697619",
"",
"1588175212"
],
"abstract": [
"Matrix interpretations can be used to bound the derivational complexity of term rewrite systems. In particular, triangular matrix interpretations over the natural numbers are known to induce polynomial upper bounds on the derivational complexity of (compatible) rewrite systems. Recently two different improvements were proposed, based on the theory of weighted automata and linear algebra. In this paper we strengthen and unify these improvements by using joint spectral radius theory.",
"In this paper, we present a variant of the dependency pair method for analysing runtime complexities of term rewrite systems automatically. This method is easy to implement, but significantly extends the analytic power of existing direct methods. Our findings extend the class of TRSs whose linear or quadratic runtime complexity can be detected automatically. We provide ample numerical data for assessing the viability of the method.",
"In this paper we introduce a modular framework which allows to infer (feasible) upper bounds on the (derivational) complexity of term rewrite systems by combining different criteria. All current investigations to analyze the derivational complexity are based on a single termination proof, possibly preceded by transformations. We prove that the modular framework is strictly more powerful than the conventional setting. Furthermore, the results have been implemented and experiments show significant gains in power.",
"We present a modular framework to analyze the innermost runtime complexity of term rewrite systems automatically. Our method is based on the dependency pair framework for termination analysis. In contrast to previous work, we developed a direct adaptation of successful termination techniques from the dependency pair framework in order to use them for complexity analysis. By extensive experimental results, we demonstrate the power of our method compared to existing techniques.",
"",
"In this paper we study context dependent interpretations, a semantic termination method extending interpretations over the natural numbers, introduced by Hofbauer. We present two subclasses of context dependent interpretations and establish tight upper bounds on the induced derivational complexities. In particular we delineate a class of interpretations that induces quadratic derivational complexity. Furthermore, we present an algorithm for mechanically proving termination of rewrite systems with context dependent interpretations. This algorithm has been implemented and we present ample numerical data for the assessment of the viability of the method."
]
}
|
1309.2394
|
2053054452
|
This paper is concerned with the complexity analysis of constructor term rewrite systems and its ramification in implicit computational complexity. We introduce a path order with multiset status, the polynomial path order POP ∗ , that is applicable in two related, but distinct contexts. On the one hand POP ∗ induces polynomial innermost runtime complexity and hence may serve as a syntactic, and fully automatable, method to analyse the innermost runtime complexity of term rewrite systems. On the other hand POP ∗ provides an order-theoretic characterisation of the polytime computable functions: the polytime computable functions are exactly the functions computable by an orthogonal constructor TRS compatible with POP ∗ .
|
There are several accounts of predicative analysis of recursion in the implicit computational complexity (ICC) literature. We mention Marion's light multiset path ordering (LMPO for short) @cite_26 . This path order provides an order-theoretic characterisation of the class @math and can also be considered a miniaturisation, of sorts, of the multiset path ordering: it is a restriction of that order and yields an order-theoretic characterisation of a complexity class. On the other hand, LMPO cannot be used to characterise the (innermost) runtime complexity of TRSs. This follows from Example below. In particular, although @math is compatible with @math , from this we can only conclude that @math is computable on a nondeterministic Turing machine in polynomial time; this is by design, as @math is complete for @math . For a precedence @math that fulfils @math and @math we obtain that @math is compatible with @math . However, it is straightforward to verify that the family of terms @math admits (innermost) derivations whose length grows exponentially in @math . Still, the underlying function can be proven polynomial-time computable, essentially by relying on memoisation techniques @cite_26 .
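The memoisation point can be illustrated generically (a plain Python analogy under our own choice of recursion, not the construction of @cite_26 ): a doubly recursive definition whose call tree, like the derivations above, is exponential in n still denotes a polynomial-time computable function once repeated subcalls are cached.

```python
from functools import lru_cache

# Naive version: the call tree has 2^n leaves, mirroring the
# exponentially long innermost derivations of the term family above.
def f_naive(n):
    return 1 if n == 0 else f_naive(n - 1) + f_naive(n - 1)

# Memoised version: only n + 1 distinct calls are ever evaluated,
# so the same function is computed in time polynomial in n.
@lru_cache(maxsize=None)
def f_memo(n):
    return 1 if n == 0 else f_memo(n - 1) + f_memo(n - 1)
```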
|
{
"cite_N": [
"@cite_26"
],
"mid": [
"2072482695"
],
"abstract": [
"We study termination proofs in order to (i) determine computational complexity of programs and (ii) generate efficient programs from the complexity analysis. For this, we construct a termination ordering, called light multiset path ordering (LMPO), which is a restriction of the multiset path ordering. We establish that the class of first order functional programs on lists which is terminating by LMPO characterises exactly the functions computable in polynomial time."
]
}
|
1309.2394
|
2053054452
|
This paper is concerned with the complexity analysis of constructor term rewrite systems and its ramification in implicit computational complexity. We introduce a path order with multiset status, the polynomial path order POP ∗ , that is applicable in two related, but distinct contexts. On the one hand POP ∗ induces polynomial innermost runtime complexity and hence may serve as a syntactic, and fully automatable, method to analyse the innermost runtime complexity of term rewrite systems. On the other hand POP ∗ provides an order-theoretic characterisation of the polytime computable functions: the polytime computable functions are exactly the functions computable by an orthogonal constructor TRS compatible with POP ∗ .
|
Furthermore, a strengthening of our first main theorem to runtime complexity can be obtained if one considers polynomial interpretations where the interpretations of constructor symbols are restricted. Such restricted polynomial interpretations are called additive in @cite_27 . Note that additive polynomial interpretations also characterise the functions computable in polytime @cite_27 . Similarly, @cite_47 provides an elegant way to characterise time complexity classes through a combination of syntactic (via restrictions of reduction orders) and semantic (via quasi-interpretations) considerations. To date it is unknown whether quasi-interpretations can be used to assess the polynomial runtime complexity of TRSs. Unarguably, these semantic techniques admit a better intensionality than the syntactic characterisation provided through the path order, but semantic methods are notoriously difficult to implement efficiently in an automated setting. In particular, we are aware of only one accessible implementation of quasi-interpretations, our own @cite_15 . Note that these semantic methods are not tailored to innermost rewriting; in particular, Example below cannot be handled by them, while it is easily handled by POP∗.
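To make the notion concrete, here is a textbook-style quasi-interpretation for list concatenation (our own illustrative choice of rules and interpretation, not an example from @cite_47 ); the assignment is monotone and gives every left-hand side a value at least that of the corresponding right-hand side:

```latex
% Rules:  append(nil, ys) -> ys
%         append(cons(x, xs), ys) -> cons(x, append(xs, ys))
[\mathsf{nil}] = 0, \qquad
[\mathsf{cons}](x, xs) = x + xs + 1, \qquad
[\mathsf{append}](xs, ys) = xs + ys
% Check:  [append(cons(x, xs), ys)] = (x + xs + 1) + ys
%       = [cons(x, append(xs, ys))] = x + (xs + ys) + 1
```

Since the interpretation bounds the size of any computed result from above, such assignments yield the kind of resource certificates mentioned in @cite_47 .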
|
{
"cite_N": [
"@cite_27",
"@cite_47",
"@cite_15"
],
"mid": [
"2055717180",
"2025130727",
"86123047"
],
"abstract": [
"We study the effect of polynomial interpretation termination proofs of deterministic (resp. non-deterministic) algorithms defined by con uent (resp. non-con uent) rewrite systems over data structures which include strings, lists and trees, and we classify them according to the interpretations of the constructors. This leads to the definition of six function classes which turn out to be exactly the deterministic (resp. non-deterministic) polynomial time, linear exponential time and linear doubly exponential time computable functions when the class is based on con uent (resp. non-con uent) rewrite systems. We also obtain a characterisation of the linear space computable functions. Finally, we demonstrate that functions with exponential interpretation termination proofs are super-elementary.",
"This paper presents in a reasoned way our works on resource analysis by quasi-interpretations. The controlled resources are typically the runtime, the runspace or the size of a result in a program execution. Quasi-interpretations allow the analysis of system complexity. A quasi-interpretation is a numerical assignment, which provides an upper bound on computed functions and which is compatible with the program operational semantics. The quasi-interpretation method offers several advantages: (i) It provides hints in order to optimize an execution, (ii) it gives resource certificates, and (iii) finding quasi-interpretations is decidable for a broad class which is relevant for feasible computations. By combining the quasi-interpretation method with termination tools (here term orderings), we obtained several characterizations of complexity classes starting from Ptime and Pspace.",
"Recent studies have provided many characterisations of the class of polynomial time computable functions through term rewriting techniques. In this paper we describe a (fully automatic and command-line based) system that implements the majority of these techniques and present experimental findings to simplify comparisons."
]
}
|
1309.2394
|
2053054452
|
This paper is concerned with the complexity analysis of constructor term rewrite systems and its ramification in implicit computational complexity. We introduce a path order with multiset status, the polynomial path order POP ∗ , that is applicable in two related, but distinct contexts. On the one hand POP ∗ induces polynomial innermost runtime complexity and hence may serve as a syntactic, and fully automatable, method to analyse the innermost runtime complexity of term rewrite systems. On the other hand POP ∗ provides an order-theoretic characterisation of the polytime computable functions: the polytime computable functions are exactly the functions computable by an orthogonal constructor TRS compatible with POP ∗ .
|
Although we consider here only time complexity, related work indicates that the overall approach is general enough to reason also about space complexity. For instance, the Knuth-Bendix order @cite_36 can be miniaturised to characterise linear space @cite_37 . Likewise, @cite_55 provide a semantic technique capable of characterising polynomial space.
|
{
"cite_N": [
"@cite_36",
"@cite_55",
"@cite_37"
],
"mid": [
"1583295953",
"2033656559",
"1506380991"
],
"abstract": [
"Preface 1. Motivating examples 2. Abstract reduction systems 3. Universal algebra 4. Equational problems 5. Termination 6. Confluence 7. Completion 8. Grobner bases and Buchberger's algorithm 9. Combination problems 10. Equational unification 11. Extensions Appendix 1. Ordered sets Appendix 2. A bluffer's guide to ML Bibliography Index.",
"The sup-interpretation method is proposed as a new tool to control memory resources of first order functional programs with pattern matching by static analysis. It has been introduced in order to increase the intensionality, that is the number of captured algorithms, of a previous method, the quasi-interpretations. Basically, a sup-interpretation provides an upper bound on the size of function outputs. A criterion, which can be applied to terminating as well as nonterminating programs, is developed in order to bound the stack frame size polynomially. Since this work is related to quasi-interpretation, dependency pairs, and size-change principle methods, we compare these notions obtaining several results. The first result is that, given any program, we have heuristics for finding a sup-interpretation when we consider polynomials of bounded degree. Another result consists in the characterizations of the sets of functions computable in polynomial time and in polynomial space. A last result consists in applications of sup-interpretations to the dependency pair and the size-change principle methods.",
"We study three different space complexity classes: LINSPACE, PSPACE, and ESPACE and give complete characterisations for these classes. We employ rewrite systems, whose termination can be shown by Knuth Bendix orders. To capture LINSPACE, we consider positively weighted Knuth Bendix orders. To capture PSPACE, we consider unary rewrite systems, compatible with a Knuth Bendix order, where we allow for padding of the input. And to capture ESPACE, we make use of a non-standard generalisation of the Knuth Bendix order."
]
}
|
1309.2394
|
2053054452
|
This paper is concerned with the complexity analysis of constructor term rewrite systems and its ramification in implicit computational complexity. We introduce a path order with multiset status, the polynomial path order POP ∗ , that is applicable in two related, but distinct contexts. On the one hand POP ∗ induces polynomial innermost runtime complexity and hence may serve as a syntactic, and fully automatable, method to analyse the innermost runtime complexity of term rewrite systems. On the other hand POP ∗ provides an order-theoretic characterisation of the polytime computable functions: the polytime computable functions are exactly the functions computable by an orthogonal constructor TRS compatible with POP ∗ .
|
In @cite_22 , Beckmann and Weiermann give a term rewriting characterisation of the principle of predicative recursion proposed by Bellantoni and Cook. Following ideas proposed by Cichon and Weiermann in @cite_31 , Beckmann and Weiermann thus reobtain Bellantoni's result that predicative recursion is closed under predicative parameter recursion.
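For the reader's convenience, the Bellantoni-Cook scheme separates each function's arguments into normal positions (before the semicolon) and safe positions (after it), and restricts recursion as follows (standard background recalled from the ICC literature, not quoted from @cite_22 ):

```latex
% Safe (predicative) recursion on notation:
f(0, \bar{x}; \bar{y}) = g(\bar{x}; \bar{y})
\qquad
f(s_i(z), \bar{x}; \bar{y}) =
  h_i\bigl(z, \bar{x}; \bar{y}, f(z, \bar{x}; \bar{y})\bigr),
\quad i \in \{0, 1\}
% The recursive value occurs only in a safe position of h_i, so it can
% never itself drive a further recursion; this restriction is what keeps
% the defined functions within polynomial time.
```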
|
{
"cite_N": [
"@cite_31",
"@cite_22"
],
"mid": [
"1996792588",
"2075296484"
],
"abstract": [
"Abstract The termination of rewrite systems for parameter recursion, simple nested recursion and unnested multiple recursion is shown by using monotone interpretations both on the ordinals below the first primitive recursively closed ordinal and on the natural numbers. We show that the resulting derivation lengths are primitive recursive. As a corollary we obtain transparent and illuminating proofs of the facts that the schemata of parameter recursion, simple nested recursion and unnested multiple recursion lead from primitive recursive functions to primitive recursive functions.",
"A natural term rewriting framework for the Bellantoni Cook schemata of predicative recursion, which yields a canonical definition of the polynomial time computable functions, is introduced. In terms of an exponential function both, an upper bound and a lower bound are proved for the resulting derivation lengths of the functions in question. It is proved that any natural reduction strategy yields an algorithm which runs in exponential time. We give an example in which this estimate is tight. It is proved that the resulting derivation lengths become polynomially bounded in the lengths of the inputs if the rewrite rules are only applied to terms in which the safe arguments – no restrictions are assumed for the normal arguments – consist of values, i.e. numerals, and not of names, i.e. non numeral terms. It is proved that in the latter situation any inside first reduction strategy and any head reduction strategy yield algorithms, for the function under consideration, for which the running time is bounded by an appropriate polynomial in the lengths of the input. A feasible rewrite system for predicative recursion with predicative parameter substitution is defined. It is proved that the derivation lengths of this rewrite system are polynomially bounded in the lengths of the inputs. As a corollary we reobtain Bellantoni’s result stating that predicative recursion is closed under predicative parameter recursion."
]
}
|
1309.2394
|
2053054452
|
This paper is concerned with the complexity analysis of constructor term rewrite systems and its ramification in implicit computational complexity. We introduce a path order with multiset status, the polynomial path order POP ∗ , that is applicable in two related, but distinct contexts. On the one hand POP ∗ induces polynomial innermost runtime complexity and hence may serve as a syntactic, and fully automatable, method to analyse the innermost runtime complexity of term rewrite systems. On the other hand POP ∗ provides an order-theoretic characterisation of the polytime computable functions: the polytime computable functions are exactly the functions computable by an orthogonal constructor TRS compatible with POP ∗ .
|
We have extended our complexity analysis tool @cite_51 with polynomial path orders. We briefly contrast this implementation with related tools for the static resource analysis of programs. Hoffmann et al. @cite_32 provide an automatic multivariate amortised cost analysis exploiting typing, which extends earlier results on amortised cost analysis @cite_8 . To indicate the applicability of our method, we have employed a straightforward (and complexity preserving) transformation of the RAML programs considered in @cite_32 @cite_0 into TRSs. Equipped with polynomial path orders, our complexity analyser can handle all examples from @cite_32 . @cite_57 present an automated complexity tool for Java Bytecode programs, @cite_6 give a complexity and termination analysis for flowchart programs, and Gulwani et al. @cite_50 as well as Zuleger et al. @cite_3 provide automated complexity tools for C programs. Very recently, Hofmann and Rodriguez proposed in @cite_40 an automated resource analysis for object-oriented programs via an amortised cost analysis.
|
{
"cite_N": [
"@cite_8",
"@cite_32",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_57",
"@cite_40",
"@cite_50",
"@cite_51"
],
"mid": [
"1987927249",
"2137062096",
"1523037784",
"1515621470",
"30585784",
"2120647256",
"11240799",
"2109863363",
"2278369388"
],
"abstract": [
"A powerful technique in the complexity analysis of data structures is amortization, or averaging over time. Amortized running time is a realistic but robust complexity measure for which we can obtain surprisingly tight upper and lower bounds on a variety of algorithms. By following the principle of designing algorithms whose amortized complexity is low, we obtain “self-adjusting” data structures that are simple, flexible and efficient. This paper surveys recent work by several researchers on amortized complexity.",
"We study the problem of automatically analyzing the worst-case resource usage of procedures with several arguments. Existing automatic analyses based on amortization, or sized types bound the resource usage or result size of such a procedure by a sum of unary functions of the sizes of the arguments. In this paper we generalize this to arbitrary multivariate polynomial functions thus allowing bounds of the form mn which had to be grossly overestimated by m2+n2 before. Our framework even encompasses bounds like ∗i,j≤n m_i mj where the mi are the sizes of the entries of a list of length n. This allows us for the first time to derive useful resource bounds for operations on matrices that are represented as lists of lists and to considerably improve bounds on other super-linear operations on lists such as longest common subsequence and removal of duplicates from lists of lists. Furthermore, resource bounds are now closed under composition which improves accuracy of the analysis of composed programs when some or all of the components exhibit super-linear resource or size behavior. The analysis is based on a novel multivariate amortized resource analysis. We present it in form of a type system for a simple first-order functional language with lists and trees, prove soundness, and describe automatic type inference based on linear programming. We have experimentally validated the automatic analysis on a wide range of examples from functional programming with lists and trees. The obtained bounds were compared with actual resource consumption. All bounds were asymptotically tight, and the constants were close or even identical to the optimal ones.",
"Proving the termination of a flowchart program can be done by exhibiting a ranking function, i.e., a function from the program states to a well-founded set, which strictly decreases at each program step. A standard method to automatically generate such a function is to compute invariants for each program point and to search for a ranking in a restricted class of functions that can be handled with linear programming techniques. Previous algorithms based on affine rankings either are applicable only to simple loops (i.e., single-node flowcharts) and rely on enumeration, or are not complete in the sense that they are not guaranteed to find a ranking in the class of functions they consider, if one exists. Our first contribution is to propose an efficient algorithm to compute ranking functions: It can handle flowcharts of arbitrary structure, the class of candidate rankings it explores is larger, and our method, although greedy, is provably complete. Our second contribution is to show how to use the ranking functions we generate to get upper bounds for the computational complexity (number of transitions) of the source program. This estimate is a polynomial, which means that we can handle programs with more than linear complexity. We applied the method on a collection of test cases from the literature. We also show the links and differences with previous techniques based on the insertion of counters.",
"The size-change abstraction (SCA) is an important program abstraction for termination analysis, which has been successfully implemented in many tools for functional and logic programs. In this paper, we demonstrate that SCA is also a highly effective abstract domain for the bound analysis of imperative programs. We have implemented a bound analysis tool based on SCA for imperative programs. We abstract programs in a pathwise and context dependent manner, which enables our tool to analyze real-world programs effectively. Our work shows that SCA captures many of the essential ideas of previous termination and bound analysis and goes beyond in a conceptually simpler framework.",
"The automatic determination of the quantitative resource consumption of programs is a classic research topic which has many applications in software development. Recently, we developed a novel multivariate amortized resource analysis that automatically computes polynomial resource bounds for first-order functional programs. In this tool paper, we describe Resource Aware ML (RAML), a functional programming language that implements our analysis. Other than in earlier articles, we focus on the practical aspects of the implementation. We describe the syntax of RAML, the code transformation prior to the analysis, the web interface, the output of the analysis, and the results of our experiments with the analysis of example programs.",
"COSTA is a static analyzer for Java bytecode which is able to infer cost and termination information for large classes of programs. The analyzer takes as input a program and a resource of interest, in the form of a cost model, and aims at obtaining an upper bound on the execution cost with respect to the resource and at proving program termination. The costa system has reached a considerable degree of maturity in that (1) it includes state-of-the-art techniques for statically estimating the resource consumption and the termination behavior of programs, plus a number of specialized techniques which are required for achieving accurate results in the context of object-oriented programs, such as handling numeric fields in value analysis; (2) it provides several nontrivial notions of cost (resource consumption) including, in addition to the number of execution steps, the amount of memory allocated in the heap or the number of calls to some user-specified method; (3) it provides several user interfaces: a classical command line, a Web interface which allows experimenting remotely with the system without the need of installing it locally, and a recently developed Eclipse plugin which facilitates the usage of the analyzer, even during the development phase; (4) it can deal with both the Standard and Micro editions of Java. In the tool demonstration, we will show that costa is able to produce meaningful results for non-trivial programs, possibly using Java libraries. Such results can then be used in many applications, including program development, resource usage certification, program optimization, etc.",
"We present a fully automatic, sound and modular heap-space analysis for object-oriented programs. In particular, we provide type inference for the system of refinement types RAJA, which checks upper bounds of heap-space usage based on amortised analysis. Until now, the refined RAJA types had to be manually specified. Our type inference increases the usability of the system, as no user-defined annotations are required. The type inference consists of constraint generation and solving. First, we present a system for generating subtyping and arithmetic constraints based on the RAJA typing rules. Second, we reduce the subtyping constraints to inequalities over infinite trees, which can be solved using an algorithm that we have described in previous work. This paper also enriches the original type system by introducing polymorphic method types, enabling a modular analysis.",
"This paper describes an inter-procedural technique for computing symbolic bounds on the number of statements a procedure executes in terms of its scalar inputs and user-defined quantitative functions of input data-structures. Such computational complexity bounds for even simple programs are usually disjunctive, non-linear, and involve numerical properties of heaps. We address the challenges of generating these bounds using two novel ideas. We introduce a proof methodology based on multiple counter instrumentation (each counter can be initialized and incremented at potentially multiple program locations) that allows a given linear invariant generation tool to compute linear bounds individually on these counter variables. The bounds on these counters are then composed together to generate total bounds that are non-linear and disjunctive. We also give an algorithm for automating this proof methodology. Our algorithm generates complexity bounds that are usually precise not only in terms of the computational complexity, but also in terms of the constant factors. Next, we introduce the notion of user-defined quantitative functions that can be associated with abstract data-structures, e.g., length of a list, height of a tree, etc. We show how to compute bounds in terms of these quantitative functions using a linear invariant generation tool that has support for handling uninterpreted functions. We show application of this methodology to commonly used data-structures (namely lists, list of lists, trees, bit-vectors) using examples from Microsoft product code. We observe that a few quantitative functions for each data-structure are usually sufficient to allow generation of symbolic complexity bounds of a variety of loops that iterate over these data-structures, and that it is straightforward to define these quantitative functions. The combination of these techniques enables generation of precise computational complexity bounds for real-world examples (drawn from Microsoft product code and C++ STL library code) for some of which it is non-trivial to even prove termination. Such automatically generated bounds are very useful for early detection of egregious performance problems in large modular codebases that are constantly being changed by multiple developers who make heavy use of code written by others without a good understanding of their implementation complexity.",
"The Tyrolean Complexity Tool, TCT for short, is an open source complexity analyser for term rewrite systems. Our tool TCT features a majority of the known techniques for the automated characterisation of polynomial complexity of rewrite systems and can investigate derivational and runtime complexity, for full and innermost rewriting. This system description outlines features and provides a short introduction to the usage of TCT."
]
}
|
1309.2712
|
2107963114
|
A passive adversary can eavesdrop on the stored or downloaded content of some storage nodes in order to illegally learn about the file stored across a distributed storage system (DSS). Previous work in the literature focuses on code constructions that trade storage capacity for perfect security. In other words, by decreasing the amount of original data that it can store, the system can guarantee that the adversary, which eavesdrops up to a certain number of storage nodes, obtains no information (in Shannon's sense) about the original data. In this work we introduce the concept of block security for DSS and investigate minimum bandwidth regenerating (MBR) codes that are block secure against adversaries of varied eavesdropping strengths. Such MBR codes guarantee that no information about any group of original data units up to a certain size is revealed, without sacrificing the storage capacity of the system. The size of such secure groups varies according to the number of nodes that the adversary can eavesdrop on. We show that code constructions based on Cauchy matrices provide block security. The opposite conclusion is drawn for codes based on Vandermonde matrices.
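To illustrate the Cauchy-matrix property that the construction above relies on, here is a minimal Python sketch (our own illustration, not the paper's code): it builds a Cauchy matrix over a prime field GF(p) and verifies that every square submatrix is invertible, which is the structural fact distinguishing Cauchy from Vandermonde encoding matrices. The field size and evaluation points are illustrative choices.

```python
from itertools import combinations

P = 257  # a prime; arithmetic is over GF(P) (illustrative choice)

def cauchy_matrix(xs, ys, p=P):
    """C[i][j] = 1/(x_i - y_j) mod p; needs the x's and y's pairwise distinct."""
    return [[pow((x - y) % p, p - 2, p) for y in ys] for x in xs]

def det_mod_p(m, p=P):
    """Determinant over GF(p) via Gaussian elimination."""
    m = [row[:] for row in m]
    n, det = len(m), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] % p != 0), None)
        if piv is None:
            return 0
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            det = -det
        det = det * m[c][c] % p
        inv = pow(m[c][c], p - 2, p)
        for r in range(c + 1, n):
            f = m[r][c] * inv % p
            m[r] = [(a - f * b) % p for a, b in zip(m[r], m[c])]
    return det % p

xs, ys = [1, 2, 3, 4], [10, 20, 30, 40]
C = cauchy_matrix(xs, ys)
# Every square submatrix of a Cauchy matrix is again Cauchy, hence invertible;
# this is the property behind block security. Vandermonde matrices lack it.
n = len(xs)
for k in range(1, n + 1):
    for rows in combinations(range(n), k):
        for cols in combinations(range(n), k):
            sub = [[C[r][c] for c in cols] for r in rows]
            assert det_mod_p(sub) != 0
print("all square submatrices are invertible over GF(%d)" % P)
```

Any submatrix of a Cauchy matrix is itself Cauchy, so the assertion always holds; for Vandermonde matrices some square submatrices can be singular, which is what breaks block security there.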
|
Hereafter we follow the standard notation in the regenerating code literature (see ). In their pioneering work, Dimakis @cite_25 established that the maximum file size to be stored in a DSS @math must satisfy the following inequality. A construction of optimal regenerating codes with exact repair at the MBR point for all @math was proposed by Rashmi @cite_22 . When the stored contents of some @math nodes are observed by an adversary Eve, Pawar @cite_9 @cite_27 showed that the maximum file size to be stored satisfies a reduced bound, provided that Eve gains no information (in Shannon's sense) about the file. This type of security is often referred to as perfect security in the literature. The parameter @math is called the security parameter. The authors @cite_9 @cite_27 also provided an optimal code construction, based on complete graphs, for the case @math that attains the bound. We later argue that an extension of their construction based on regular graphs @cite_15 also produces optimal perfectly secure codes for all @math with @math even (see Remark ). Under the same adversary model, Shah @cite_7 constructed optimal codes that attain the bound for all @math . The authors used product-matrix codes in their construction.
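For orientation, the bounds that the elided inequalities presumably instantiate take the following standard cut-set form in this literature (a sketch of the well-known bounds, not a quotation of the paper's exact statement; assumed notation: k reconstruction nodes, repair degree d, per-node storage α, per-helper repair bandwidth β, and ℓ eavesdropped nodes):

```latex
\mathcal{M} \;\le\; \sum_{i=0}^{k-1} \min\{\alpha,\, (d-i)\beta\},
\qquad
\mathcal{M}^{(s)} \;\le\; \sum_{i=\ell}^{k-1} \min\{\alpha,\, (d-i)\beta\}.
```

At the MBR point α = dβ, so each summand reduces to (d − i)β.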
|
{
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_9",
"@cite_27",
"@cite_15",
"@cite_25"
],
"mid": [
"",
"2063554152",
"2156663229",
"2118925326",
"2949150884",
"2951800112"
],
"abstract": [
"",
"Regenerating codes are a class of codes for distributed storage networks that provide reliability and availability of data, and also perform efficient node repair. Another important aspect of a distributed storage network is its security. In this paper, we consider a threat model where an eavesdropper may gain access to the data stored in a subset of the storage nodes, and possibly also, to the data downloaded during repair of some nodes. We provide explicit constructions of regenerating codes that achieve information-theoretic secrecy capacity in this setting.",
"We address the problem of securing distributed storage systems against adversarial node attacks. An important aspect of these systems is node failures over time, necessitating, thus, a repair mechanism in order to maintain a desired high system reliability. In such dynamic settings, an important security problem is to safeguard the system from a malicious adversary who may come at different time instances during the lifetime of the storage system to corrupt the data stored on some nodes. We provide upper bounds on the maximum amount of information that can be stored safely on the system in the presence of the adversary. For an important operating regime, which we call the bandwidth-limited regime, we show that our upper bounds are tight and provide explicit linear code constructions. Moreover, we provide a way to shortlist the malicious nodes and expurgate the system.",
"We address the problem of securing distributed storage systems against eavesdropping and adversarial attacks. An important aspect of these systems is node failures over time, necessitating, thus, a repair mechanism in order to maintain a desired high system reliability. In such dynamic settings, an important security problem is to safeguard the system from an intruder who may come at different time instances during the lifetime of the storage system to observe and possibly alter the data stored on some nodes. In this scenario, we give upper bounds on the maximum amount of information that can be stored safely on the system. For an important operating regime of the distributed storage system, which we call the bandwidth-limited regime, we show that our upper bounds are tight and provide explicit code constructions. Moreover, we provide a way to short list the malicious nodes and expurgate the system.",
"We introduce a new class of exact Minimum-Bandwidth Regenerating (MBR) codes for distributed storage systems, characterized by a low-complexity uncoded repair process that can tolerate multiple node failures. These codes consist of the concatenation of two components: an outer MDS code followed by an inner repetition code. We refer to the inner code as a Fractional Repetition code since it consists of splitting the data of each node into several packets and storing multiple replicas of each on different nodes in the system. Our model for repair is table-based, and thus, differs from the random access model adopted in the literature. We present constructions of Fractional Repetition codes based on regular graphs and Steiner systems for a large set of system parameters. The resulting codes are guaranteed to achieve the storage capacity for random access repair. The considered model motivates a new definition of capacity for distributed storage systems, that we call Fractional Repetition capacity. We provide upper bounds on this capacity while a precise expression remains an open problem.",
"Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a node failure is for a new node to download subsets of data stored at a number of surviving nodes, reconstruct a lost coded block using the downloaded data, and store it at the new node. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to download of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff."
]
}
|
1309.1732
|
2952352150
|
We are given a set of @math jobs and a single processor that can vary its speed dynamically. Each job @math is characterized by its processing requirement (work) @math , its release date @math and its deadline @math . We are also given a budget of energy @math and we study the scheduling problem of maximizing the throughput (i.e. the number of jobs that are completed on time). We propose a dynamic programming algorithm that solves the preemptive case of the problem, i.e. when the execution of the jobs may be interrupted and resumed later, in pseudo-polynomial time. Our algorithm can be adapted to solve the weighted version of the problem, where every job is associated with a weight @math and the objective is to maximize the sum of the weights of the jobs that are completed on time. Moreover, we provide a strongly polynomial time algorithm that solves the non-preemptive unweighted case when the jobs have the same processing requirements. For the weighted case, our algorithm can be adapted to solve the non-preemptive version of the problem in pseudo-polynomial time.
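To make the energy model concrete, here is a minimal sketch (our illustration, assuming the common power function P(s) = s^α with α = 3; the paper's dynamic program handles arbitrary release dates and deadlines, unlike this common-window toy):

```python
ALPHA = 3.0  # power function P(s) = s**ALPHA; the cube law is a common assumption

def min_energy(total_work, window):
    """Convexity of s**ALPHA implies constant speed W/T is optimal:
    energy = T * (W/T)**ALPHA = W**ALPHA / T**(ALPHA-1)."""
    return total_work ** ALPHA / window ** (ALPHA - 1)

def max_throughput_common_window(works, window, budget):
    """Max number of jobs finishable in [0, window] with energy <= budget.
    With one shared window, taking the smallest jobs first is optimal."""
    done, total = 0, 0.0
    for w in sorted(works):
        if min_energy(total + w, window) <= budget:
            total += w
            done += 1
        else:
            break
    return done

print(max_throughput_common_window([2.0, 1.0, 4.0, 1.5], window=3.0, budget=6.0))
```

The sketch exploits that, for a fixed window, energy is increasing and convex in the total work scheduled, so the smallest-jobs-first prefix is the cheapest way to complete any given number of jobs.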
|
Different variants of throughput maximization in the speed scaling context have been studied in the literature (see @cite_3 @cite_12 @cite_1 @cite_6 @cite_4 @cite_7 ), but in what follows we focus on the complexity status of the offline case.
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_12"
],
"mid": [
"1986373871",
"2970334256",
"2163035318",
"",
"1531167583",
"2162654297"
],
"abstract": [
"Dynamic Voltage Scaling techniques allow the processor to set its speed dynamically in order to reduce energy consumption. It was shown that if the processor can run at arbitrary speeds and uses power s^@a when running at speed s, the online heuristic AVR has a competitive ratio (2@a)^@a 2. In this paper we first study the online heuristics for the discrete model where the processor can only run at d given speeds. We propose a method to transform online heuristic AVR to an online heuristic for the discrete model and prove a competitive ratio 2^@a^-^1(@a-1)^@a^-^1(@d^@a-1)^@a(@d-1)(@d^@a-@d)^@a^-^1+1, where @d is the maximum ratio between adjacent non-zero speed levels. We also prove that the analysis holds for a class of heuristics that satisfy certain natural properties. We further study the throughput maximization problem when there is an upper bound for the maximum speed. We propose a greedy algorithm with running time O(n^2logn) and prove that the output schedule is a 3-approximation of the throughput and a (@a-1)^@a^-^1(3^@a-1)^@a2@a^@a(3^@a^-^1-1)^@a^-^1-approximation of the energy consumption.",
"",
"The past few years have witnessed different scheduling algorithms for a processor that can manage its energy usage by scaling dynamically its speed. In this paper we attempt to extend such work to the two-processor setting. Specifically, we focus on deadline scheduling and study online algorithms for two processors with an objective of maximizing the throughput, while using the smallest possible energy. The motivation comes from the fact that dual-core processors are getting common nowadays. Our first result is a new analysis of the energy usage of the speed function OA [15,4,8] with respect to the optimal two-processor schedule. This immediately implies a trivial two-processor algorithm that is 16-competitive for throughput and O(1)-competitive for energy. A more interesting result is a new online strategy for selecting jobs for the two processors. Together with OA, it improves the competitive ratio for throughput from 16 to 3, while increasing that for energy by a factor of 2. Note that even if the energy usage is not a concern, no algorithm can be better than 2-competitive with respect to throughput.",
"",
"Existing work on scheduling with energy concern has focused on minimizing the energy for completing all jobs or achieving maximum throughput [19, 2,7,13,14]. That is, energy usage is a secondary concern when compared to throughput and the schedules targeted may be very poor in energy efficiency. In this paper, we attempt to put energy efficiency as the primary concern and study how to maximize throughput subject to a user-defined threshold of energy efficiency. We first show that all deterministic online algorithms have a competitive ratio at least Δ, where Δ is the max-min ratio of job size. Nevertheless, allowing the online algorithm to have a slightly poorer energy efficiency leads to constant (i.e., independent of Δ) competitive online algorithm. On the other hand, using randomization, we can reduce the competitive ratio to O(logΔ) without relaxing the efficiency threshold. Finally we consider a special case where no jobs are \"demanding\" and give a deterministic online algorithm with constant competitive ratio for this case.",
"We consider online scheduling algorithms in the dynamic speedscaling model, where a processor can scale its speed between 0 andsome maximum speed T. The processor uses energy at ratesαwhen run at speed s,where α> 1 is a constant. Most modern processorsuse dynamic speed scaling to manage their energy usage. This leadsto the problem of designing execution strategies that are bothenergy efficient, and yet have almost optimum performance. We consider two problems in this model and give essentiallyoptimum possible algorithms for them. In the first problem, jobswith arbitrary sizes and deadlines arrive online and the goal is tomaximize the throughput, i.e. the total size of jobs completedsuccessfully. We give an algorithm that is 4-competitive forthroughput and O(1)-competitive for the energy used. Thisimproves upon the 14 throughput competitive algorithm of . [10]. Our throughput guarantee is optimal as any onlinealgorithm must be at least 4-competitive even if the energy concernis ignored [7]. In the second problem, we consider optimizing thetrade-off between the total flow time incurred and the energyconsumed by the jobs. We give a 4-competitive algorithm to minimizetotal flow time plus energy for unweighted unit size jobs, and a (2+ o(1)) α ln α-competitivealgorithm to minimize fractional weighted flow time plus energy.Prior to our work, these guarantees were known only when theprocessor speed was unbounded (T= ∞) [4]."
]
}
|
1309.1785
|
2083484226
|
Online social networks are known to be demographically biased. Currently there are open questions about how representative they are of the physical population, and about how population biases impact user-generated content. In this paper we focus on centralism, a problem affecting Chile. Assuming that local differences exist within a country in terms of vocabulary, we built a methodology based on the vector space model to find distinctive content from different locations, and used it to create classifiers that predict whether the content of a micro-post is related to a particular location, with a geographically diverse selection of micro-posts in mind. We evaluate them in a case study analyzing the virtual population of Chile that participated in the Twitter social network during an event of national relevance: the municipal (local government) elections held in 2012. We observe that the participating virtual population is spatially representative of the physical population, implying that there is centralism in Twitter. Our classifiers outperform a non-geographically-diverse baseline at the regional level, and have the same accuracy at the provincial level. However, our approach makes assumptions that need to be tested on multi-thematic and more general datasets. We leave this for future work.
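A minimal sketch of the vector-space idea (our illustration; the paper's methodology additionally selects distinctive vocabulary per location and evaluates on real Twitter data): build a term vector per location from geolocated micro-posts and assign a new post to the location with the highest cosine similarity. All data and names below are hypothetical.

```python
import math
from collections import Counter

def tf_vector(texts):
    """Plain term-frequency vector; the paper uses a richer vector space model."""
    c = Counter()
    for t in texts:
        c.update(t.lower().split())
    return c

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpus: micro-posts already grouped by region (hypothetical data).
corpus = {
    "Valparaiso": ["puerto eleccion alcalde", "cerros puerto votacion"],
    "Santiago":   ["metro eleccion alcalde", "centro metro votacion"],
}
profiles = {region: tf_vector(texts) for region, texts in corpus.items()}

def classify(post):
    v = tf_vector([post])
    return max(profiles, key=lambda r: cosine(v, profiles[r]))

print(classify("votacion en el puerto"))  # -> Valparaiso
```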
|
There is no clear answer to the question @cite_7 . However, a wide spectrum of research areas have approached it from different perspectives. One of them is the geographical span of networks: previous work has found that the stronger the network (defined in terms of reciprocity in connections, and 1-way and 2-way interactions by mentioning others), the lower its geographical span @cite_12 . In terms of discussion, local events have denser discussion networks than global events, and individuals who are central in the network are also located centrally in the physical world @cite_16 . A demographic study of user accounts from the U.S.A. concluded that populated cities are over-represented, while less populated cities are under-represented, in Twitter @cite_1 .
|
{
"cite_N": [
"@cite_16",
"@cite_1",
"@cite_12",
"@cite_7"
],
"mid": [
"1580639819",
"2167102709",
"1662284448",
"2101196063"
],
"abstract": [
"This paper examines tweets about two geographically local events—a shooting and a building collapse—that took place in Wichita, Kansas and Atlanta, Georgia, respectively. Most Internet research has focused on examining ways the Internet can connect people across long distances, yet there are benefits to being connected to others who are nearby. People in close geographic proximity can provide real-time information and eyewitness updates for one another about events of local interest. We first show a relationship between structural properties in the Twitter network and geographic properties in the physical world. We then describe the role of mainstream news in disseminating local information. Last, we present a poll of 164 users’ information seeking practices. We conclude with practical and theoretical implications for sharing information in local communities.",
"Every second, the thoughts and feelings of millions of people across the world are recorded in the form of 140-character tweets using Twitter. However, despite the enormous potential presented by this remarkable data source, we still do not have an understanding of the Twitter population itself: Who are the Twitter users? How representative of the overall population are they? In this paper, we take the first steps towards answering these questions by analyzing data on a set of Twitter users representing over 1 of the U.S. population. We develop techniques that allow us to compare the Twitter population to the U.S. population along three axes (geography, gender, and race ethnicity), and find that the Twitter population is a highly non-uniform sample of the population.",
"Debate is open as to whether social media communities resemble real-life communities, and to what extent. We contribute to this discussion by testing whether established sociological theories of real-life networks hold in Twitter. In particular, for 228,359 Twitter profiles, we compute network metrics (e.g., reciprocity, structural holes, simmelian ties) that the sociological literature has found to be related to parts of one's social world (i.e., to topics, geography and emotions), and test whether these real-life associations still hold in Twitter. We find that, much like individuals in real-life communities, social brokers (those who span structural holes) are opinion leaders who tweet about diverse topics, have geographically wide networks, and express not only positive but also negative emotions. Furthermore, Twitter users who express positive (negative) emotions cluster together, to the extent of having a correlation coefficient between one's emotions and those of friends as high as 0.45. Understanding Twitter's social dynamics does not only have theoretical implications for studies of social networks but also has practical implications, including the design of self-reflecting user interfaces that make people aware of their emotions, spam detection tools, and effective marketing campaigns.",
"Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85 ) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it."
]
}
|
1309.2080
|
2086351973
|
Learning probabilistic logic programming languages is receiving increasing attention, and systems are available for learning the parameters (PRISM, LeProbLog, LFI-ProbLog and EMBLEM) or both structure and parameters (SEM-CP-logic and SLIPCASE) of these languages. In this paper we present the algorithm SLIPCOVER for “Structure LearnIng of Probabilistic logic programs by searChing OVER the clause space.” It performs a beam search in the space of probabilistic clauses and a greedy search in the space of theories, using the log likelihood of the data as the guiding heuristics. To estimate the log likelihood, SLIPCOVER performs Expectation Maximization with EMBLEM. The algorithm has been tested on five real world datasets and compared with SLIPCASE, SEM-CP-logic, Aleph and two algorithms for learning Markov Logic Networks (Learning using Structural Motifs (LSM) and ALEPH++ExactL1). SLIPCOVER achieves higher areas under the precision-recall and receiver operating characteristic curves in most cases.
|
SLIPCOVER's algorithm is an ``evolution'' of SLIPCASE @cite_36 in terms of search strategy. SLIPCASE is based on a simple search strategy that refines LPAD theories by trying all possible theory revisions. SLIPCOVER instead uses bottom clauses to guide the refinement process, thus reducing the number of revisions and exploring the search space more effectively. Moreover, SLIPCOVER separates the search for promising clauses from the search for the theory. By means of these modifications we have been able to obtain better final theories in terms of LL with respect to SLIPCASE, as shown in Section . In the following we highlight the differences between the two algorithms in detail.
|
{
"cite_N": [
"@cite_36"
],
"mid": [
"2583189786"
],
"abstract": [
"Probabilistic inductive logic programming, sometimes also called statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with first order logic representations and machine learning. A rich variety of different formalisms and learning techniques have been developed. In the present paper, we start from inductive logic programming and sketch how it can be extended with probabilistic methods. More precisely, we outline three classical settings for inductive logic programming, namely learning from entailment, learning from interpretations, and learning from proofs or traces, and show how they can be used to learn different types of probabilistic representations."
]
}
|
1309.2080
|
2086351973
|
Learning probabilistic logic programming languages is receiving increasing attention, and systems are available for learning the parameters (PRISM, LeProbLog, LFI-ProbLog and EMBLEM) or both structure and parameters (SEM-CP-logic and SLIPCASE) of these languages. In this paper we present the algorithm SLIPCOVER for “Structure LearnIng of Probabilistic logic programs by searChing OVER the clause space.” It performs a beam search in the space of probabilistic clauses and a greedy search in the space of theories, using the log likelihood of the data as the guiding heuristics. To estimate the log likelihood, SLIPCOVER performs Expectation Maximization with EMBLEM. The algorithm has been tested on five real world datasets and compared with SLIPCASE, SEM-CP-logic, Aleph and two algorithms for learning Markov Logic Networks (Learning using Structural Motifs (LSM) and ALEPH++ExactL1). SLIPCOVER achieves higher areas under the precision-recall and receiver operating characteristic curves in most cases.
|
SLIPCASE performs a beam search in the space of theories, starting from a trivial LPAD and using the LL of the data as the guiding heuristics. The starting theory for the beam search is user-defined: a good starting point is a theory composed of one probabilistic clause with an empty body of the form for each target predicate, where @math is a tuple of variables. At each step of the search, the theory with the highest LL is removed from the beam and a set of refinements is generated and evaluated by means of the LL; the refinements are then inserted into the beam in order of decreasing LL. The refinements of the selected theory are constructed according to a language bias based on modeh and modeb declarations in Progol style. Following @cite_12 @cite_30 , the admitted refinements are: adding or removing a literal from a clause, adding a clause with an empty body, or removing a clause. Beam search ends when one of the following occurs: the maximum number of steps is reached, the beam is empty, or the difference between the LL of the current theory and the best previous LL drops below a threshold @math .
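The loop just described can be summarized by the following skeleton (a sketch, not SLIPCASE's actual code; `refinements` and `log_likelihood` are stand-ins for the revision operators and the EM-based scoring described above):

```python
import heapq
from itertools import count

def beam_search(start, refinements, log_likelihood,
                beam_size=5, max_steps=20, epsilon=1e-4):
    """Skeleton of a SLIPCASE-style beam search over theories."""
    tie = count()                                  # tiebreaker for the heap
    beam = [(-log_likelihood(start), next(tie), start)]
    best_ll, best = -beam[0][0], start
    for _ in range(max_steps):
        if not beam:
            break                                  # beam exhausted
        _, _, theory = heapq.heappop(beam)         # theory with highest LL
        for ref in refinements(theory):
            heapq.heappush(beam, (-log_likelihood(ref), next(tie), ref))
        beam = heapq.nsmallest(beam_size, beam)    # prune to beam width
        heapq.heapify(beam)
        if not beam or -beam[0][0] - best_ll < epsilon:
            break                                  # empty beam or tiny LL gain
        best_ll, best = -beam[0][0], beam[0][2]
    return best, best_ll
```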
|
{
"cite_N": [
"@cite_30",
"@cite_12"
],
"mid": [
"2162152259",
"2165520550"
],
"abstract": [
"Knowledge acquisition is a difficult, error-prone, and time-consuming task. The task of automatically improving an existing knowledge base using learning methods is addressed by the class of systems performing theory refinement. This paper presents a system, forte (First-Order Revision of Theories from Examples), which refines first-order Horn-clause theories by integrating a variety of different revision techniques into a coherent whole. FORTE uses these techniques within a hill-climbing framework, guided by a global heuristic. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including prepositional theory refinement, first-order induction, and inverse resolution. FORTE is demonstrated in several domains, including logic programming and qualitative modelling.",
"Ourston, D. and R.J. Mooney, Theory refinement combining analytical and empirical methods, Artificial Intelligence 66 (1994) 273-309. This article describes a comprehensive system for automatic theory (knowledge base) refinement. The system applies to classification tasks employing a propositional Hornclause domain theory. Given an imperfect domain theory and a set of training examples, the approach uses partial and incorrect proofs to identify potentially faulty rules. For each faulty rule, subsets of examples are used to inductively generate a correction. Because the system starts with an approximate domain theory, fewer training examples are generally required to attain a given level of classification accuracy compared to a purely empirical learning system. The system has been tested in two previously explored application domains: recognizing important classes of DNA sequences and diagnosing diseased soybean plants."
]
}
|
1309.2080
|
2086351973
|
Learning probabilistic logic programming languages is receiving increasing attention, and systems are available for learning the parameters (PRISM, LeProbLog, LFI-ProbLog and EMBLEM) or both structure and parameters (SEM-CP-logic and SLIPCASE) of these languages. In this paper we present the algorithm SLIPCOVER for “Structure LearnIng of Probabilistic logic programs by searChing OVER the clause space.” It performs a beam search in the space of probabilistic clauses and a greedy search in the space of theories, using the log likelihood of the data as the guiding heuristics. To estimate the log likelihood, SLIPCOVER performs Expectation Maximization with EMBLEM. The algorithm has been tested on five real world datasets and compared with SLIPCASE, SEM-CP-logic, Aleph and two algorithms for learning Markov Logic Networks (Learning using Structural Motifs (LSM) and ALEPH++ExactL1). SLIPCOVER achieves higher areas under the precision-recall and receiver operating characteristic curves in most cases.
|
Previous works on learning the structure of probabilistic logic programs include @cite_64 , which proposed a scheme for learning both the probabilities and the structure of Bayesian logic programs by combining techniques from the learning from interpretations setting of ILP with score-based techniques for learning Bayesian networks. We share with this approach the scoring function (the LL of the data given a candidate structure) and the greedy search in the space of structures.
|
{
"cite_N": [
"@cite_64"
],
"mid": [
"1523817461"
],
"abstract": [
"Bayesian logic programs tightly integrate definite logic programs with Bayesian networks in order to incorporate the notions of objects and relations into Bayesian networks. They establish a one-to-one mapping between ground atoms and random variables, and between the immediate consequence operator and the directly influenced by relation. In doing so, they nicely separate the qualitative (i.e. logical) component from the quantitative (i.e. the probabilistic) one providing a natural framework to describe general, probabilistic dependencies among sets of random variables. In this chapter, we present results on combining Inductive Logic Programming with Bayesian networks to learn both the qualitative and the quantitative components of Bayesian logic programs from data. More precisely, we show how the qualitative components can be learned by combining the inductive logic programming setting learning from interpretations with score-based techniques for learning Bayesian networks. The estimation of the quantitative components is reduced to the corresponding problem of (dynamic) Bayesian networks."
]
}
|
1309.2080
|
2086351973
|
Learning probabilistic logic programming languages is receiving increasing attention, and systems are available for learning the parameters (PRISM, LeProbLog, LFI-ProbLog and EMBLEM) or both structure and parameters (SEM-CP-logic and SLIPCASE) of these languages. In this paper we present the algorithm SLIPCOVER for “Structure LearnIng of Probabilistic logic programs by searChing OVER the clause space.” It performs a beam search in the space of probabilistic clauses and a greedy search in the space of theories, using the log likelihood of the data as the guiding heuristics. To estimate the log likelihood, SLIPCOVER performs Expectation Maximization with EMBLEM. The algorithm has been tested on five real world datasets and compared with SLIPCASE, SEM-CP-logic, Aleph and two algorithms for learning Markov Logic Networks (Learning using Structural Motifs (LSM) and ALEPH++ExactL1). SLIPCOVER achieves higher areas under the precision-recall and receiver operating characteristic curves in most cases.
|
De Raedt and Thon introduced the probabilistic rule learner ProbFOIL, which combines the rule learner FOIL @cite_21 with ProbLog @cite_42 . Logical rules are learned from probabilistic data in the sense that both the examples themselves and their classifications can be probabilistic. The set of rules must make it possible to predict the probability of the examples from their description. In this setting the parameters (the probability values) are fixed and the structure (the rules) is to be learned.
|
{
"cite_N": [
"@cite_21",
"@cite_42"
],
"mid": [
"1531743498",
"1824971879"
],
"abstract": [
"FOIL is a learning system that constructs Horn clause programs from examples. This paper summarises the development of FOIL from 1989 up to early 1993 and evaluates its effectiveness on a non-trivial sequence of learning tasks taken from a Prolog programming text. Although many of these tasks are handled reasonably well, the experiment highlights some weaknesses of the current implementation. Areas for further research are identified.",
"We introduce ProbLog, a probabilistic extension of Prolog. A ProbLog program defines a distribution over logic programs by specifying for each clause the probability that it belongs to a randomly sampled program, and these probabilities are mutually independent. The semantics of ProbLog is then defined by the success probability of a query, which corresponds to the probability that the query succeeds in a randomly sampled program. The key contribution of this paper is the introduction of an effective solver for computing success probabilities. It essentially combines SLD-resolution with methods for computing the probability of Boolean formulae. Our implementation further employs an approximation algorithm that combines iterative deepening with binary decision diagrams. We report on experiments in the context of discovering links in real biological networks, a demonstration of the practical usefulness of the approach."
]
}
|
1309.2080
|
2086351973
|
Learning probabilistic logic programming languages is receiving increasing attention, and systems are available for learning the parameters (PRISM, LeProbLog, LFI-ProbLog and EMBLEM) or both structure and parameters (SEM-CP-logic and SLIPCASE) of these languages. In this paper we present the algorithm SLIPCOVER for “Structure LearnIng of Probabilistic logic programs by searChing OVER the clause space.” It performs a beam search in the space of probabilistic clauses and a greedy search in the space of theories, using the log likelihood of the data as the guiding heuristics. To estimate the log likelihood, SLIPCOVER performs Expectation Maximization with EMBLEM. The algorithm has been tested on five real world datasets and compared with SLIPCASE, SEM-CP-logic, Aleph and two algorithms for learning Markov Logic Networks (Learning using Structural Motifs (LSM) and ALEPH++ExactL1). SLIPCOVER achieves higher areas under the precision-recall and receiver operating characteristic curves in most cases.
|
SEM-CP-logic @cite_46 learns the parameters and structure of ground CP-logic programs. It performs learning by considering the Bayesian networks equivalent to CP-logic programs and by applying techniques for learning Bayesian networks. In particular, it applies the Structural Expectation Maximization (SEM) algorithm @cite_57 : it iteratively generates refinements of the equivalent Bayesian network and greedily chooses the one that maximizes the BIC score @cite_5 . In SLIPCOVER, we used the LL as a score because experiments with BIC gave inferior results. Moreover, SLIPCOVER also differs from SEM-CP-logic in that it searches the clause space and refines clauses with standard ILP refinement operators, which allow learning non-ground theories.
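For reference, one common form of the BIC score mentioned above (our notation; conventions differ by a sign or constant factor):

```latex
\mathrm{BIC}(M) \;=\; \log P(D \mid M, \hat{\theta}_M) \;-\; \frac{d_M}{2}\,\log N,
```

where \hat{\theta}_M are the maximum-likelihood parameters of structure M, d_M is its number of free parameters, and N is the number of examples; the penalty term is what the plain LL score used by SLIPCOVER omits.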
|
{
"cite_N": [
"@cite_57",
"@cite_5",
"@cite_46"
],
"mid": [
"1566045017",
"2168175751",
"2129578857"
],
"abstract": [
"In recent years there has been a flurry of works on learning Bayesian networks from data. One of the hard problems in this area is how to effectively learn the structure of a belief network from incomplete data--that is, in the presence of missing values or hidden variables. In a recent paper, I introduced an algorithm called Structural EM that combines the standard Expectation Maximization (EM) algorithm, which optimizes parameters, with structure search for model selection. That algorithm learns networks based on penalized likelihood scores, which include the BIC MDL score and various approximations to the Bayesian score. In this paper, I extend Structural EM to deal directly with Bayesian model selection. I prove the convergence of the resulting algorithm and show how to apply it for learning a large class of probabilistic models, including Bayesian networks and some variants thereof.",
"",
"Causal relations are present in many application domains. Causal Probabilistic Logic (CP-logic) is a probabilistic modeling language that is especially designed to express such relations. This paper investigates the learning of CP-logic theories (CP-theories) from training data. Its first contribution is SEM-CP-logic, an algorithm that learns CP-theories by leveraging Bayesian network (BN) learning techniques. SEM-CP-logic is based on a transformation between CP-theories and BNs. That is, the method applies BN learning techniques to learn a CP-theory in the form of an equivalent BN. To this end, certain modifications are required to the BN parameter learning and structure search, the most important one being that the refinement operator used by the search must guarantee that the constructed BNs represent valid CP-theories. The paper's second contribution is a theoretical and experimental comparison between CP-theory and BN learning. We show that the most simple CP-theories can be represented with BNs consisting of noisy-OR nodes, while more complex theories require close to fully connected networks (unless additional unobserved nodes are introduced in the network). Experiments in a controlled artificial domain show that in the latter cases CP-theory learning with SEM-CP-logic requires fewer training data than BN learning. We also apply SEM-CP-logic in a medical application in the context of HIV research, and show that it can compete with state-of-the-art methods in this domain."
]
}
|
1309.2080
|
2086351973
|
Learning probabilistic logic programming languages is receiving increasing attention, and systems are available for learning the parameters (PRISM, LeProbLog, LFI-ProbLog and EMBLEM) or both structure and parameters (SEM-CP-logic and SLIPCASE) of these languages. In this paper we present the algorithm SLIPCOVER for “Structure LearnIng of Probabilistic logic programs by searChing OVER the clause space.” It performs a beam search in the space of probabilistic clauses and a greedy search in the space of theories, using the log likelihood of the data as the guiding heuristics. To estimate the log likelihood, SLIPCOVER performs Expectation Maximization with EMBLEM. The algorithm has been tested on five real world datasets and compared with SLIPCASE, SEM-CP-logic, Aleph and two algorithms for learning Markov Logic Networks (Learning using Structural Motifs (LSM) and ALEPH++ExactL1). SLIPCOVER achieves higher areas under the precision-recall and receiver operating characteristic curves in most cases.
|
In @cite_2 , the structure of Markov Logic theories is learned by applying a generalization of relational pathfinding. A database is viewed as a hypergraph with constants as nodes and true ground atoms as hyperedges. Each hyperedge is labeled with a predicate symbol. First, a hypergraph over clusters of constants is found; then pathfinding is applied to this ``lifted'' hypergraph. The resulting algorithm is called LHL.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2150678881"
],
"abstract": [
"Markov logic networks (MLNs) combine logic and probability by attaching weights to first-order clauses, and viewing these as templates for features of Markov networks. Learning MLN structure from a relational database involves learning the clauses and weights. The state-of-the-art MLN structure learners all involve some element of greedily generating candidate clauses, and are susceptible to local optima. To address this problem, we present an approach that directly utilizes the data in constructing candidates. A relational database can be viewed as a hypergraph with constants as nodes and relations as hyperedges. We find paths of true ground atoms in the hypergraph that are connected via their arguments. To make this tractable (there are exponentially many paths in the hypergraph), we lift the hypergraph by jointly clustering the constants to form higherlevel concepts, and find paths in it. We variabilize the ground atoms in each path, and use them to form clauses, which are evaluated using a pseudo-likelihood measure. In our experiments on three real-world datasets, we find that our algorithm outperforms the state-of-the-art approaches."
]
}
|
1309.2080
|
2086351973
|
Learning probabilistic logic programming languages is receiving increasing attention, and systems are available for learning the parameters (PRISM, LeProbLog, LFI-ProbLog and EMBLEM) or both structure and parameters (SEM-CP-logic and SLIPCASE) of these languages. In this paper we present the algorithm SLIPCOVER for “Structure LearnIng of Probabilistic logic programs by searChing OVER the clause space.” It performs a beam search in the space of probabilistic clauses and a greedy search in the space of theories, using the log likelihood of the data as the guiding heuristics. To estimate the log likelihood, SLIPCOVER performs Expectation Maximization with EMBLEM. The algorithm has been tested on five real world datasets and compared with SLIPCASE, SEM-CP-logic, Aleph and two algorithms for learning Markov Logic Networks (Learning using Structural Motifs (LSM) and ALEPH++ExactL1). SLIPCOVER achieves higher areas under the precision-recall and receiver operating characteristic curves in most cases.
|
A different approach is taken in @cite_10 , where the algorithm DSL is presented, which performs discriminative structure learning by repeatedly adding a clause to the theory through iterated local search, a procedure that performs a walk in the space of local optima. We share with this approach the discriminative nature of the algorithm and the scoring function.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"1576159843"
],
"abstract": [
"Markov Logic Networks (MLNs) combine Markov networks and first-order logic by attaching weights to first-order formulas and viewing these as templates for features of Markov networks. Learning the structure of MLNs is performed by state-of-the-art methods by maximizing the likelihood of a relational database. This can lead to suboptimal results given prediction tasks. On the other hand better results in prediction problems have been achieved by discriminative learning of MLNs weights given a certain structure. In this paper we propose an algorithm for learning the structure of MLNs discriminatively by maximimizing the conditional likelihood of the query predicates instead of the joint likelihood of all predicates. The algorithm chooses the structures by maximizing conditional likelihood and sets the parameters by maximum likelihood. Experiments in two real-world domains show that the proposed algorithm improves over the state-of-the-art discriminative weight learning algorithm for MLNs in terms of conditional likelihood. We also compare the proposed algorithm with the state-of-the-art generative structure learning algorithm for MLNs and confirm the results in [22] showing that for small datasets the generative algorithm is competitive, while for larger datasets the discriminative algorithm outperfoms the generative one."
]
}
|
1309.1556
|
1906486268
|
A common approach to scaling transactional databases in practice is horizontal partitioning, which increases system scalability, high availability and self-manageability. Usually it is very challenging to choose or design an optimal partitioning scheme for a given workload and database. In this technical report, we propose a fine-grained hyper-graph based database partitioning system for transactional workloads. The partitioning system takes a database, a workload, a node cluster and partitioning constraints as input and outputs a lookup-table encoding the final database partitioning decision. The database partitioning problem is modeled as a multi-constraints hyper-graph partitioning problem. By deriving a min-cut of the hyper-graph, our system can minimize the total number of distributed transactions in the workload, balance the sizes and workload accesses of the partitions and satisfy all the partition constraints imposed. Our system is highly interactive as it allows users to impose partition constraints, watch visualized partitioning effects, and provide feedback based on human expertise and indirect domain knowledge for generating better partitioning schemes.
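A toy rendering of the modeling step (our sketch; production systems delegate the min-cut to a multi-constraint hypergraph partitioner such as hMETIS rather than the crude greedy heuristic used here): tuples become vertices, each transaction contributes one hyperedge over the tuples it touches, and a cut hyperedge corresponds to a distributed transaction.

```python
from collections import defaultdict

# Each transaction lists the tuple ids it accesses: one hyperedge per txn.
transactions = [("t1", "t2"), ("t2", "t3"), ("t4", "t5"), ("t1", "t5")]
K = 2  # number of partitions

edges_of = defaultdict(list)         # vertex -> hyperedges containing it
for e, txn in enumerate(transactions):
    for v in txn:
        edges_of[v].append(e)

part = {}                            # vertex -> partition id
sizes = [0] * K
for v in sorted(edges_of, key=lambda u: -len(edges_of[u])):
    # Score each partition: co-locate v with already-placed co-accessed
    # tuples, penalizing imbalance (a stand-in for a real min-cut tool).
    def score(p):
        gain = sum(1 for e in edges_of[v]
                   for u in transactions[e] if part.get(u) == p)
        return gain - sizes[p]
    best = max(range(K), key=score)
    part[v] = best
    sizes[best] += 1

# A transaction is "distributed" iff its hyperedge spans >1 partition.
cut = sum(1 for txn in transactions if len({part[v] for v in txn}) > 1)
print(part, "distributed transactions:", cut)
```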
|
In the meantime, more ad hoc and flexible partitioning schemes tailored for specific-purpose applications were also developed, such as the consistent hashing of Dynamo @cite_5 , Schism @cite_7 , and One Hop Replication @cite_6 .
|
{
"cite_N": [
"@cite_5",
"@cite_6",
"@cite_7"
],
"mid": [
"2153704625",
"",
"2133741724"
],
"abstract": [
"Reliability at massive scale is one of the biggest challenges we face at Amazon.com, one of the largest e-commerce operations in the world; even the slightest outage has significant financial consequences and impacts customer trust. The Amazon.com platform, which provides services for many web sites worldwide, is implemented on top of an infrastructure of tens of thousands of servers and network components located in many datacenters around the world. At this scale, small and large components fail continuously and the way persistent state is managed in the face of these failures drives the reliability and scalability of the software systems. This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon's core services use to provide an \"always-on\" experience. To achieve this level of availability, Dynamo sacrifices consistency under certain failure scenarios. It makes extensive use of object versioning and application-assisted conflict resolution in a manner that provides a novel interface for developers to use.",
"",
"We present Schism, a novel workload-aware approach for database partitioning and replication designed to improve scalability of shared-nothing distributed databases. Because distributed transactions are expensive in OLTP settings (a fact we demonstrate through a series of experiments), our partitioner attempts to minimize the number of distributed transactions, while producing balanced partitions. Schism consists of two phases: i) a workload-driven, graph-based replication partitioning phase and ii) an explanation and validation phase. The first phase creates a graph with a node per tuple (or group of tuples) and edges between nodes accessed by the same transaction, and then uses a graph partitioner to split the graph into k balanced partitions that minimize the number of cross-partition transactions. The second phase exploits machine learning techniques to find a predicate-based explanation of the partitioning strategy (i.e., a set of range predicates that represent the same replication partitioning scheme produced by the partitioner). The strengths of Schism are: i) independence from the schema layout, ii) effectiveness on n-to-n relations, typical in social network databases, iii) a unified and fine-grained approach to replication and partitioning. We implemented and tested a prototype of Schism on a wide spectrum of test cases, ranging from classical OLTP workloads (e.g., TPC-C and TPC-E), to more complex scenarios derived from social network websites (e.g., Epinions.com), whose schema contains multiple n-to-n relationships, which are known to be hard to partition. Schism consistently outperforms simple partitioning schemes, and in some cases proves superior to the best known manual partitioning, reducing the cost of distributed transactions up to 30 ."
]
}
|
1309.1556
|
1906486268
|
A common approach to scaling transactional databases in practice is horizontal partitioning, which increases system scalability, high availability and self-manageability. Usually it is very challenging to choose or design an optimal partitioning scheme for a given workload and database. In this technical report, we propose a fine-grained hyper-graph based database partitioning system for transactional workloads. The partitioning system takes a database, a workload, a node cluster and partitioning constraints as input and outputs a lookup-table encoding the final database partitioning decision. The database partitioning problem is modeled as a multi-constraints hyper-graph partitioning problem. By deriving a min-cut of the hyper-graph, our system can minimize the total number of distributed transactions in the workload, balance the sizes and workload accesses of the partitions and satisfy all the partition constraints imposed. Our system is highly interactive as it allows users to impose partition constraints, watch visualized partitioning effects, and provide feedback based on human expertise and indirect domain knowledge for generating better partitioning schemes.
|
Bubba provides many heuristic approaches to balance the access frequency, rather than the actual number of tuples, across partitions @cite_1 . This algorithm is simple and cheap, but it does not guarantee perfectly balanced processing. Schism provides a novel workload-aware graph-based partitioning scheme @cite_7 . The scheme obtains balanced partitions and minimizes the number of distributed transactions.
|
{
"cite_N": [
"@cite_1",
"@cite_7"
],
"mid": [
"1975577269",
"2133741724"
],
"abstract": [
"This paper examines the problem of data placement in Bubba, a highly-parallel system for data-intensive applications being developed at MCC. “Highly-parallel” implies that load balancing is a critical performance issue. “Data-intensive” means data is so large that operations should be executed where the data resides. As a result, data placement becomes a critical performance issue. In general, determining the optimal placement of data across processing nodes for performance is a difficult problem. We describe our heuristic approach to solving the data placement problem in Bubba. We then present experimental results using a specific workload to provide insight into the problem. Several researchers have argued the benefits of declustering (i e, spreading each base relation over many nodes). We show that as declustering is increased, load balancing continues to improve. However, for transactions involving complex joins, further declustering reduces throughput because of communications, startup and termination overhead. We argue that data placement, especially declustering, in a highly-parallel system must be considered early in the design, so that mechanisms can be included for supporting variable declustering, for minimizing the most significant overheads associated with large-scale declustering, and for gathering the required statistics.",
"We present Schism, a novel workload-aware approach for database partitioning and replication designed to improve scalability of shared-nothing distributed databases. Because distributed transactions are expensive in OLTP settings (a fact we demonstrate through a series of experiments), our partitioner attempts to minimize the number of distributed transactions, while producing balanced partitions. Schism consists of two phases: i) a workload-driven, graph-based replication partitioning phase and ii) an explanation and validation phase. The first phase creates a graph with a node per tuple (or group of tuples) and edges between nodes accessed by the same transaction, and then uses a graph partitioner to split the graph into k balanced partitions that minimize the number of cross-partition transactions. The second phase exploits machine learning techniques to find a predicate-based explanation of the partitioning strategy (i.e., a set of range predicates that represent the same replication partitioning scheme produced by the partitioner). The strengths of Schism are: i) independence from the schema layout, ii) effectiveness on n-to-n relations, typical in social network databases, iii) a unified and fine-grained approach to replication and partitioning. We implemented and tested a prototype of Schism on a wide spectrum of test cases, ranging from classical OLTP workloads (e.g., TPC-C and TPC-E), to more complex scenarios derived from social network websites (e.g., Epinions.com), whose schema contains multiple n-to-n relationships, which are known to be hard to partition. Schism consistently outperforms simple partitioning schemes, and in some cases proves superior to the best known manual partitioning, reducing the cost of distributed transactions up to 30 ."
]
}
|
1309.1556
|
1906486268
|
A common approach to scaling transactional databases in practice is horizontal partitioning, which increases system scalability, high availability and self-manageability. Usually it is very challenging to choose or design an optimal partitioning scheme for a given workload and database. In this technical report, we propose a fine-grained hyper-graph based database partitioning system for transactional workloads. The partitioning system takes a database, a workload, a node cluster and partitioning constraints as input and outputs a lookup-table encoding the final database partitioning decision. The database partitioning problem is modeled as a multi-constraints hyper-graph partitioning problem. By deriving a min-cut of the hyper-graph, our system can minimize the total number of distributed transactions in the workload, balance the sizes and workload accesses of the partitions and satisfy all the partition constraints imposed. Our system is highly interactive as it allows users to impose partition constraints, watch visualized partitioning effects, and provide feedback based on human expertise and indirect domain knowledge for generating better partitioning schemes.
|
@cite_15 provides a fine-grained partitioning called lookup tables for distributed databases. With this fine-grained partitioning, related individual tuples (e.g., cliques of friends) are co-located in the same partition in order to reduce the number of distributed transactions. But with a tuple-level lookup table, the database needs to store a large amount of metadata about which partition each tuple resides in, which consumes considerable storage space and makes lookup operations less efficient.
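A minimal sketch of the router such a lookup table implies (our illustration, not the paper's implementation): only tuples moved away from their default hash partition get a metadata entry, which is one way to bound the metadata cost discussed above.

```python
import hashlib

K = 4  # number of partitions (illustrative)

def default_partition(key):
    """Hash partitioning used when no lookup-table entry exists."""
    return int(hashlib.md5(str(key).encode()).hexdigest(), 16) % K

lookup = {}  # tuple key -> partition; only overrides are stored

def route(key):
    return lookup.get(key, default_partition(key))

def colocate(keys, partition):
    """Pin a group of related tuples (e.g., a clique of friends) together."""
    for k in keys:
        if default_partition(k) != partition:
            lookup[k] = partition   # store metadata only for moved tuples

colocate(["alice", "bob", "carol"], partition=2)
print([route(k) for k in ["alice", "bob", "carol", "dave"]])
```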
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2006008271"
],
"abstract": [
"The standard way to get linear scaling in a distributed OLTP DBMS is to horizontally partition data across several nodes. Ideally, this partitioning will result in each query being executed at just one node, to avoid the overheads of distributed transactions and allow nodes to be added without increasing the amount of required coordination. For some applications, simple strategies, such as hashing on primary key, provide this property. Unfortunately, for many applications, including social networking and order-fulfillment, many-to-many relationships cause simple strategies to result in a large fraction of distributed queries. Instead, what is needed is a fine-grained partitioning, where related individual tuples (e.g., cliques of friends) are co-located together in the same partition. Maintaining such a fine-grained partitioning requires the database to store a large amount of metadata about which partition each tuple resides in. We call such metadata a lookup table, and present the design of a data distribution layer that efficiently stores these tables and maintains them in the presence of inserts, deletes, and updates. We show that such tables can provide scalability for several difficult to partition database workloads, including Wikipedia, Twitter, and TPC-E. Our implementation provides 40 to 300 better performance on these workloads than either simple range or hash partitioning and shows greater potential for further scale-out."
]
}
|
1309.1556
|
1906486268
|
A common approach to scaling transactional databases in practice is horizontal partitioning, which increases system scalability, high availability and self-manageability. Usually it is very challenging to choose or design an optimal partitioning scheme for a given workload and database. In this technical report, we propose a fine-grained hyper-graph based database partitioning system for transactional workloads. The partitioning system takes a database, a workload, a node cluster and partitioning constraints as input and outputs a lookup-table encoding the final database partitioning decision. The database partitioning problem is modeled as a multi-constraints hyper-graph partitioning problem. By deriving a min-cut of the hyper-graph, our system can minimize the total number of distributed transactions in the workload, balance the sizes and workload accesses of the partitions and satisfy all the partition constraints imposed. Our system is highly interactive as it allows users to impose partition constraints, watch visualized partitioning effects, and provide feedback based on human expertise and indirect domain knowledge for generating better partitioning schemes.
|
Consistent hashing @cite_10 can be used to minimize data movement during re-partitioning, but it may cause a non-uniform load distribution. Dynamo @cite_5 extends consistent hashing by adding virtual nodes, providing partitioning strategies that ensure a uniform load distribution while retaining excellent re-partitioning performance. Other works such as CRUSH @cite_3 and FastScale @cite_13 also provide algorithms that can be used for re-partitioning.
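To make the mechanism concrete, the following is a minimal sketch of a consistent-hashing ring with virtual nodes in the spirit of @cite_10 and Dynamo @cite_5; it is not their actual implementation, and all names are illustrative:

```python
import bisect
import hashlib

def _h(key):
    # Stable hash onto the ring (md5 used here only for determinism).
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=100):
        # Each physical node owns `vnodes` points on the ring, which
        # smooths the load distribution across nodes.
        self._ring = sorted((_h(f"{n}#{i}"), n)
                            for n in nodes for i in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    def lookup(self, key):
        # A key maps to the first ring point clockwise from its hash.
        i = bisect.bisect(self._keys, _h(key)) % len(self._ring)
        return self._ring[i][1]

ring = ConsistentHashRing(["n1", "n2", "n3"])
before = {k: ring.lookup(k) for k in map(str, range(1000))}
ring = ConsistentHashRing(["n1", "n2", "n3", "n4"])  # add one node
moved = sum(before[k] != ring.lookup(k) for k in before)
print(f"{moved}/1000 keys moved")  # roughly a quarter, not all keys
```

Adding a node relocates only the keys falling into the new node's ring segments, which is what keeps re-partitioning cheap.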
|
{
"cite_N": [
"@cite_13",
"@cite_5",
"@cite_10",
"@cite_3"
],
"mid": [
"1533295456",
"2153704625",
"2020765652",
"2109706326"
],
"abstract": [
"Previous approaches to RAID scaling either require a very large amount of data to be migrated, or cannot tolerate multiple disk additions without resulting in disk imbalance. In this paper, we propose a new approach to RAID-0 scaling called FastScale. First, FastScale minimizes data migration, while maintaining a uniform data distribution. With a new and elastic addressing function, it moves only enough data blocks from old disks to fill an appropriate fraction of new disks without migrating data among old disks. Second, FastScale optimizes data migration with two techniques: (1) it accesses multiple physically successive blocks via a single I O, and (2) it records data migration lazily to minimize the number of metadata writes without compromising data consistency. Using several real system disk traces, our experiments show that compared with SLAS, one of the most efficient traditional approaches, FastScale can reduce redistribution time by up to 86.06 with smaller maximum response time of user I Os. The experiments also illustrate that the performance of the RAID-0 scaled using FastScale is almost identical with that of the round-robin RAID-0.",
"Reliability at massive scale is one of the biggest challenges we face at Amazon.com, one of the largest e-commerce operations in the world; even the slightest outage has significant financial consequences and impacts customer trust. The Amazon.com platform, which provides services for many web sites worldwide, is implemented on top of an infrastructure of tens of thousands of servers and network components located in many datacenters around the world. At this scale, small and large components fail continuously and the way persistent state is managed in the face of these failures drives the reliability and scalability of the software systems. This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon's core services use to provide an \"always-on\" experience. To achieve this level of availability, Dynamo sacrifices consistency under certain failure scenarios. It makes extensive use of object versioning and application-assisted conflict resolution in a manner that provides a novel interface for developers to use.",
"We describe a family of caching protocols for distrib-uted networks that can be used to decrease or eliminate the occurrence of hot spots in the network. Our protocols are particularly designed for use with very large networks such as the Internet, where delays caused by hot spots can be severe, and where it is not feasible for every server to have complete information about the current state of the entire network. The protocols are easy to implement using existing network protocols such as TCP IP, and require very little overhead. The protocols work with local control, make efficient use of existing resources, and scale gracefully as the network grows. Our caching protocols are based on a special kind of hashing that we call consistent hashing. Roughly speaking, a consistent hash function is one which changes minimally as the range of the function changes. Through the development of good consistent hash functions, we are able to develop caching protocols which do not require users to have a current or even consistent view of the network. We believe that consistent hash functions may eventually prove to be useful in other applications such as distributed name servers and or quorum systems.",
"Emerging large-scale distributed storage systems are faced with the task of distributing petabytes of data among tens or hundreds of thousands of storage devices. Such systems must evenly distribute data and workload to efficiently utilize available resources and maximize system performance, while facilitating system growth and managing hardware failures. We have developed CRUSH, a scalable pseudorandom data distribution function designed for distributed object-based storage systems that efficiently maps data objects to storage devices without relying on a central directory. Because large systems are inherently dynamic, CRUSH is designed to facilitate the addition and removal of storage while minimizing unnecessary data movement. The algorithm accommodates a wide variety of data replication and reliability mechanisms and distributes data in terms of user-defined policies that enforce separation of replicas across failure domains."
]
}
|
1309.1518
|
2070400842
|
Multicast device-to-device (D2D) transmission is important for applications like local file transfer in commercial networks and is also a required feature in public safety networks. In this paper we propose a tractable baseline multicast D2D model, and use it to analyze important multicast metrics like the coverage probability, mean number of covered receivers and throughput. In addition, we examine how the multicast performance would be affected by certain factors like dynamics (due to e.g., mobility) and network assistance. Take the mean number of covered receivers as an example. We find that simple repetitive transmissions help but the gain quickly diminishes as the number of repetitions increases. Meanwhile, dynamics and network assistance (i.e., allowing the network to relay the multicast signals) can help cover more receivers. We also explore how to optimize multicasting, e.g. by choosing the optimal multicast rate and the optimal number of retransmission times.
|
In parallel with these academic studies, standardization efforts addressing multicast services have been and are being undertaken, mainly focusing on single-rate multicast. For example, multicast services were addressed in GSM/WCDMA and are being addressed in LTE by 3GPP; the 3GPP work item is known as multimedia broadcast and multicast service (MBMS) @cite_15 . Similarly, 3GPP2 addressed multicast services in CDMA2000 with the work item known as broadcast and multicast service (BCMCS) @cite_10 .
|
{
"cite_N": [
"@cite_15",
"@cite_10"
],
"mid": [
"2166830713",
"2006458760"
],
"abstract": [
"Multirate multicast is a powerful methodology of multimedia communication in heterogenous networks. A variant of multirate multicast motivated by scalable multimedia streaming is layered multicast, where the transmitted signal is presented in successive data layers. With recent advances of network coding theory, many layered multicast schemes using network coding have been proposed to improve the performance of traditional routing-based layered multicast. They divide the network into different layers and construct a unirate multicast network code for each layer. However, these schemes do not perform network coding between data layers, and consequently cannot realize the full potential of network coding. In this paper, we propose a novel approach to layered multicast that allows network coding of data in different layers. This relaxation lends the proposed scheme greater flexibility in optimizing the data flow than previous layered solutions, and thus achieves higher throughput.",
"Multicast is an efficient means of transmitting the same content to multiple receivers while minimizing network resource usage. Applications that can benefit from multicast such as multimedia streaming and download, are now being deployed over 3G wireless data networks. Existing multicast schemes transmit data at a fixed rate that can accommodate the farthest located users in a cell. However, users belonging to the same multicast group can have widely different channel conditions. Thus existing schemes are too conservative by limiting the throughput of users close to the base station. We propose two proportional fair multicast scheduling algorithms that can adapt to dynamic channel states in cellular data networks that use time division multiplexing: inter-group proportional fairness (IPF) and multicast proportional fairness (MPF). These scheduling algorithms take into account (1) reported data rate requests from users which dynamically change to match their link states to the base station, and (2) the average received throughput of each user inside its cell. This information is used by the base station to select an appropriate data rate for each group. We prove that IPF and MPF achieve proportional fairness among groups and among all users inside a cell respectively. Through extensive packet-level simulations, we demonstrate that these algorithms achieve good balance between throughput and fairness among users and groups."
]
}
|
1309.1101
|
2949444148
|
Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of both specialist domain knowledge and software development skills to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud computing become more widely accepted for HPC computations, scientists require more support from computer scientists and resource providers to develop efficient code and make optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. The use of such frameworks has implications for the sustainability of scientific software. In this paper we set out our developing understanding of these challenges based on work carried out in the libhpc project.
|
The UK e-Science Programme @cite_6 , which ran for many years, developed a range of tools and services to support the use of Grid computing infrastructure and funded a range of interdisciplinary, collaborative projects aimed at making this infrastructure easier to use for scientists across many domains. Workflow environments such as Taverna @cite_12 provide a means of executing workflows consisting of multiple components that may be available locally or as remote Web Services. In addition to generic workflow systems, many systems have been developed to assist users in specific domains. Bioinformatics is an example, where systems such as Galaxy @cite_13 or VisTrails @cite_8 provide domain-specific features to improve the user experience.
|
{
"cite_N": [
"@cite_13",
"@cite_12",
"@cite_6",
"@cite_8"
],
"mid": [
"2161645441",
"",
"1993008735",
"2145154883"
],
"abstract": [
"Accessing and analyzing the exponentially expanding genomic sequence and functional data pose a challenge for biomedical researchers. Here we describe an interactive system, Galaxy, that combines the power of existing genome annotation databases with a simple Web portal to enable users to search remote resources, combine data from independent queries, and visualize the results. The heart of Galaxy is a flexible history system that stores the queries from each user; performs operations such as intersections, unions, and subtractions; and links to other computational tools. Galaxy can be accessed at http: g2.bx.psu.edu.",
"",
"This paper describes the £ 120M UK 'e-Science' (http: www.research-councils.ac.uk and http: www.escience-grid.org.uk) initiative and begins by defining what is meant by the term e-Science. The majority of the £ 120M, some £ 75M, is funding large-scale e-Science pilot projects in many areas of science and engineering. The infrastructure needed to support such projects must permit routine sharing of distributed and heterogeneous computational and data resources as well as supporting effective collaboration between groups of scientists. Such an infrastructure is commonly referred to as the Grid. Apart from £ 10M towards a Teraflop computer, the remaining funds, some £ 35M, constitute the e-Science 'Core Programme'. The goal of this Core Programme is to advance the development of robust and generic Grid middleware in collaboration with industry. The key elements of the Core Programme will be outlined including details of a UK e-Science Grid testbed. The pilot e-Science projects that have so far been announced are then briefly described. These projects span a range of disciplines from particle physics and astronomy to engineering and healthcare, and illustrate the breadth of the UK e-Science Programme. In addition to these major e-Science projects, the Core Programme is funding a series of short-term e-Science demonstrators across a number of disciplines as well as projects in network traffic engineering and some international collaborative activities. We conclude with some remarks about the need to develop a data architecture for the Grid that will allow federated access to relational databases as well as flat files.",
"Scientists are now faced with an incredible volume of data to analyze. To successfully analyze and validate various hypothesis, it is necessary to pose several queries, correlate disparate data, and create insightful visualizations of both the simulated processes and observed phenomena. Often, insight comes from comparing the results of multiple visualizations. Unfortunately, today this process is far from interactive and contains many error-prone and time-consuming tasks. As a result, the generation and maintenance of visualizations is a major bottleneck in the scientific process, hindering both the ability to mine scientific data and the actual use of the data. The VisTrails system represents our initial attempt to improve the scientific discovery process and reduce the time to insight. In VisTrails, we address the problem of visualization from a data management perspective: VisTrails manages the data and metadata of a visualization product. In this demonstration, we show the power and flexibility of our system by presenting actual scenarios in which scientific visualization is used and showing how our system improves usability, enables reproducibility, and greatly reduces the time required to create scientific visualizations."
]
}
|
1309.1125
|
2396034235
|
We present an open-domain Question-Answering system that learns to answer questions based on successful past interactions. We follow a pattern-based approach to Answer-Extraction, where (lexico-syntactic) patterns that relate a question to its answer are automatically learned and used to answer future questions. Results show that our approach contributes to the system's best performance when it is conjugated with typical Answer-Extraction strategies. Moreover, it allows the system to learn with the answered questions and to rectify wrong or unsolved past questions.
|
The focus has since turned to automatic pattern learning, with many approaches taking advantage of the large amount of information on the Web. For example, @cite_0 and @cite_1 learn surface patterns for a particular relation. Javelin @cite_2 adds flexibility to the learned patterns by allowing, for instance, certain terms to be generalized into named entities. In all the previous cases, seeds are sets of two (manually chosen) entities. In contrast, the Ephyra system @cite_6 takes questions and answers as input and learns patterns between the answer and two or more key-phrases; however, these key-phrases are extracted from the question by hand-crafted patterns. The previous approaches learn lexical patterns; others learn patterns that also convey syntactic information in the form of dependency relations @cite_9 @cite_7 . In particular, Shen et al. define a pattern as the smallest dependency tree that conveys the answer and one question term.
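As a toy illustration of the seed-driven pattern learning described above, the following sketch induces surface patterns from (entity, answer) seed pairs; the corpus, seeds, and regular expressions are invented for illustration and do not reproduce any of the cited systems:

```python
# Toy sketch of learning surface text patterns from seed pairs.
import re
from collections import Counter

corpus = [
    "Mozart was born in 1756 in Salzburg.",
    "Einstein was born in 1879.",
    "Newton was born in 1643 in England.",
]
seeds = [("Mozart", "1756"), ("Einstein", "1879")]

patterns = Counter()
for sentence in corpus:
    for entity, answer in seeds:
        if entity in sentence and answer in sentence:
            # Generalise the seed pair into a pattern with placeholders.
            pat = sentence.replace(entity, "<Q>").replace(answer, "<A>")
            m = re.search(r"<Q>(.*?)<A>", pat)
            if m:
                patterns[m.group(1)] += 1  # count the connecting infix

# Apply the most frequent pattern to answer a new question about Newton.
infix = patterns.most_common(1)[0][0]  # " was born in "
m = re.search(re.escape("Newton" + infix) + r"(\d{4})", corpus[2])
print(m.group(1))  # -> 1643
```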
|
{
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_2"
],
"mid": [
"1771597977",
"1483543726",
"1561768937",
"2153820555",
"2167435923",
"93652244"
],
"abstract": [
"One of the most accurate methods in Question Answering (QA) uses off-line information extraction to find answers for frequently asked questions. It requires automatic extraction from text of all relation instances for relations that users frequently ask for. In this chapter, two methods are presented for learning relation instances for relations relevant in a closed and open domain (medical) QA system. Both methods try to learn automatic dependency paths that typically connect two arguments of a given relation. The first (lightly supervised) method starts from a seed list of argument instances, and extracts dependency paths from all sentences in which a seed pair occurs. This method works well for large text collections and for seeds which are easily identified, such as named entities, and is well-suited for open domain QA. A second experiment concentrates on medical relation extraction for the question answering module of the IMIX system. The IMIX corpus is relatively small and relation instances may contain complex noun phrases that do not occur frequently in the exact same form in the corpus. In this case, learning from annotated data is necessary. Dependency patterns enriched with semantic concept labels are shown to give accurate results for relations that are relevant for a medical QA system. Both methods improve the performance of the Dutch QA system Joost.",
"In this paper, we explore the syntactic relation patterns for open-domain factoid question answering. We propose a pattern extraction method to extract the various relations between the proper answers and different types of question words, including target words, head words, subject words and verbs, from syntactic trees. We further propose a QA-specific tree kernel to partially match the syntactic relation patterns. It makes the more tolerant matching between two patterns and helps to solve the data sparseness problem. Lastly, we incorporate the patterns into a Maximum Entropy Model to rank the answer candidates. The experiment on TREC questions shows that the syntactic relation patterns help to improve the performance by 6.91 MRR based on the common features.",
"We describe herein a Web based pattern mining and matching approach to question answering. For each type of questions, a lot of textual patterns can be learned from the Web automatically, using the TREC QA track data as training examples. These textual patterns are assessed by the concepts of support and confidence, which are borrowed from the data mining community. Given a new unseen question, these textual patterns can be utilized to extract and rank the plausible answers on the Web. The performance of this approach has been evaluated also by the TREC QA track data.",
"This paper describes the Ephyra question answering engine, a modular and extensible framework that allows to integrate multiple approaches to question answering in one system Our framework can be adapted to languages other than English by replacing language-specific components It supports the two major approaches to question answering, knowledge annotation and knowledge mining Ephyra uses the web as a data resource, but could also work with smaller corpora In addition, we propose a novel approach to question interpretation which abstracts from the original formulation of the question Text patterns are used to interpret a question and to extract answers from text snippets Our system automatically learns the patterns for answer extraction, using question-answer pairs as training data Experimental results revealed the potential of this approach.",
"In this paper we explore the power of surface text patterns for open-domain question answering systems. In order to obtain an optimal set of patterns, we have developed a method for learning such patterns automatically. A tagged corpus is built from the Internet in a bootstrapping process by providing a few hand-crafted examples of each question type to Altavista. Patterns are then automatically extracted from the returned documents and standardized. We calculate the precision of each pattern, and the average precision for each question type. These patterns are then applied to find answers to new questions. Using the TREC-10 question set, we report results for two cases: answers determined from the TREC-10 corpus and from the web.",
"We describe Javelin, a Cross-lingual Question Answering system which participated in the NTCIR-8 ACLIA evaluation and which is designed to work on any type of question, including factoid and complex questions. The key technical contribution of this paper is a minimally supervised bootstrapping approach to generating lexicosyntactic patterns used for answer extraction. The preliminary evaluation result (measured by nugget F3 score) shows that the proposed pattern learning approach outperformed two baselines, a supervised learning approach used in NTCIR-7 ACLIA and a simple key-term based approach, for both monolingual and crosslingual tracks. The proposed approach is general and thus it has potential applicability to a wide variety of information access applications which require deeper semantic processing."
]
}
|
1309.0861
|
2275960700
|
Wireless transmission using non-contiguous chunks of spectrum is becoming increasingly important due to a variety of scenarios such as: secondary users avoiding incumbent users in TV white space; anticipated spectrum sharing between commercial and military systems; and spectrum sharing among uncoordinated interferers in unlicensed bands. Multi-Channel Multi-Radio (MC-MR) platforms and Non-Contiguous Orthogonal Frequency Division Multiple Access (NC-OFDMA) technology are the two commercially viable transmission choices to access these non-contiguous spectrum chunks. Fixed MC-MRs do not scale with increasing number of non-contiguous spectrum chunks due to their fixed set of supporting radio front ends. NC-OFDMA allows nodes to access these non-contiguous spectrum chunks and put null sub-carriers in the remaining chunks. However, nulling sub-carriers increases the sampling rate (spectrum span) which, in turn, increases the power consumption of radio front ends. Our work characterizes this trade-off from a cross-layer perspective, specifically by showing how the slope of ADC/DAC power consumption versus sampling rate curve influences scheduling decisions in a multi-hop network. Specifically, we provide a branch and bound algorithm based mixed integer linear programming solution that performs joint power control, spectrum span selection, scheduling and routing in order to minimize the system power of multi-hop NC-OFDMA networks. We also provide a low complexity (O(E^2 M^2)) greedy algorithm where M and E denote the number of channels and links respectively. Numerical simulations suggest that our approach reduces system power by 30% over classical transmit power minimization based cross-layer algorithms.
|
The authors of @cite_9 @cite_3 characterized the capacity region of MC-MR based multi-hop networks. The authors of @cite_30 @cite_21 focused on software defined radio based multi-hop networks and performed cross-layer optimization using a protocol model and a signal-to-interference-plus-noise-ratio model, respectively. Shi and Hou extended the work of @cite_30 and provided a distributed algorithm in @cite_19 . None of these works considered circuit power or addressed how spectrum fragmentation influences cross-layer decisions.
|
{
"cite_N": [
"@cite_30",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_19"
],
"mid": [
"2095943931",
"2092511902",
"2122824681",
"2163418239",
"2101607865"
],
"abstract": [
"Software defined radio (SDR) is a revolution in radio technology that promises unprecedented flexibility in radio communications and is viewed as an enabling technology for dynamic spectrum access. This paper investigates how to support user communication sessions by jointly considering power control, scheduling, and flow routing for an SDR-based multi-hop wireless network. We develop a formal mathematical model for scheduling feasibility under the influence of power control. This model extends existing protocol interference model for wireless networks and can be used for a broad class of problems where power control (and thus transmission range and interference range) is part of the optimization space. We formulate a cross-layer optimization problem encompassing power control, scheduling, and flow routing. Subsequently, we develop an efficient solution procedure based on branch-and-bound technique and convex hull relaxation. Using simulation results, we demonstrate the efficacy of the solution procedure and offer insights on the impact of power control on scheduling feasibility, bandwidth efficiency, and bandwidth-footprint product (BFP).",
"Next generation fixed wireless broadband networks are being increasingly deployed as mesh networks in order to provide and extend access to the internet. These networks are characterized by the use of multiple orthogonal channels and nodes with the ability to simultaneously communicate with many neighbors using multiple radios (interfaces) over orthogonal channels. Networks based on the IEEE 802.11a b g and 802.16 standards are examples of these systems. However, due to the limited number of available orthogonal channels, interference is still a factor in such networks. In this paper, we propose a network model that captures the key practical aspects of such systems and characterize the constraints binding their behavior. We provide necessary conditions to verify the feasibility of rate vectors in these networks, and use them to derive upper bounds on the capacity in terms of achievable throughput, using a fast primal-dual algorithm. We then develop two link channel assignment schemes, one static and the other dynamic, in order to derive lower bounds on the achievable throughput. We demonstrate through simulations that the dynamic link channel assignment scheme performs close to optimal on the average, while the static link channel assignment algorithm also performs very well. The methods proposed in this paper can be a valuable tool for network designers in planning network deployment and for optimizing different performance objectives.",
"Cognitive radio networks (CRNs) have the potential to utilize spectrum efficiently and are positioned to be the core technology for the next-generation multihop wireless networks. An important problem for such networks is its capacity. We study this problem for CRNs in the SINR (signal-to-interference-and-noise-ratio) model, which is considered to be a better characterization of interference (but also more difficult to analyze) than disk graph model. The main difficulties of this problem are two-fold. First, SINR is a nonconvex function of transmission powers; an optimization problem in the SINR model is usually a nonconvex program and NP-hard in general. Second, in the SINR model, scheduling feasibility and the maximum allowed flow rate on each link are determined by SINR at the physical layer. To maximize capacity, it is essential to follow a cross-layer approach, but joint optimization at physical (power control), link (scheduling), and network (flow routing) layers with the SINR function is inherently difficult. In this paper, we give a mathematical characterization of the joint relationship among these layers. We devise a solution procedure that provides a (1- ) optimal solution to this complex problem, where is the required accuracy. Our theoretical result offers a performance benchmark for any other algorithms developed for practical implementation. Using numerical results, we demonstrate the efficacy of the solution procedure and offer quantitative understanding on the interaction of power control, scheduling, and flow routing in a CRN.",
"This paper studies how the capacity of a static multi-channel network scales as the number of nodes, n, increases. Gupta and Kumar have determined the capacity of single-channel networks, and those bounds are applicable to multi-channel networks as well, provided each node in the network has a dedicated interface per channel.In this work, we establish the capacity of general multi-channel networks wherein the number of interfaces, m, may be smaller than the number of channels, c. We show that the capacity of multi-channel networks exhibits different bounds that are dependent on the ratio between c and m. When the number of interfaces per node is smaller than the number of channels, there is a degradation in the network capacity in many scenarios. However, one important exception is a random network with up to O(log n) channels, wherein the network capacity remains at the Gupta and Kumar bound of Θ(W√noverlog n) bits sec, independent of the number of interfaces available at each node. Since in many practical networks, number of channels available is small (e.g., IEEE 802.11 networks), this bound is of practical interest. This implies that it may be possible to build capacity-optimal multi-channel networks with as few as one interface per node. We also extend our model to consider the impact of interface switching delay, and show that in a random network with up to O(log n) channels, switching delay may not affect capacity if multiple interfaces are used.",
"Cognitive radio (CR) is a revolution in radio technology and is viewed as an enabling technology for dynamic spectrum access. This paper investigates how to design distributed algorithm for a future multi-hop CR network, with the objective of maximizing data rates for a set of user communication sessions. We study this problem via a cross-layer optimization approach, with joint consideration of power control, scheduling, and routing. The main contribution of this paper is the development of a distributed optimization algorithm that iteratively increases data rates for user communication sessions. During each iteration, there are two separate processes, a Conservative Iterative Process (CIP) and an Aggressive Iterative Process (AIP). For both CIP and AIP, we describe our design of routing, minimalist scheduling, and power control scheduling modules. To evaluate the performance of the distributed optimization algorithm, we compare it to an upper bound of the objective function, since the exact optimal solution to the objective function cannot be obtained via its mixed integer nonlinear programming (MINLP) formulation. Since the achievable performance via our distributed algorithm is close to the upper bound and the optimal solution (unknown) lies between the upper bound and the feasible solution obtained by our distributed algorithm, we conclude that the results obtained by our distributed algorithm are very close to the optimal solution."
]
}
|
1309.0861
|
2275960700
|
Wireless transmission using non-contiguous chunks of spectrum is becoming increasingly important due to a variety of scenarios such as: secondary users avoiding incumbent users in TV white space; anticipated spectrum sharing between commercial and military systems; and spectrum sharing among uncoordinated interferers in unlicensed bands. Multi-Channel Multi-Radio (MC-MR) platforms and Non-Contiguous Orthogonal Frequency Division Multiple Access (NC-OFDMA) technology are the two commercially viable transmission choices to access these non-contiguous spectrum chunks. Fixed MC-MRs do not scale with increasing number of non-contiguous spectrum chunks due to their fixed set of supporting radio front ends. NC-OFDMA allows nodes to access these non-contiguous spectrum chunks and put null sub-carriers in the remaining chunks. However, nulling sub-carriers increases the sampling rate (spectrum span) which, in turn, increases the power consumption of radio front ends. Our work characterizes this trade-off from a cross-layer perspective, specifically by showing how the slope of ADC/DAC power consumption versus sampling rate curve influences scheduling decisions in a multi-hop network. Specifically, we provide a branch and bound algorithm based mixed integer linear programming solution that performs joint power control, spectrum span selection, scheduling and routing in order to minimize the system power of multi-hop NC-OFDMA networks. We also provide a low complexity (O(E^2 M^2)) greedy algorithm where M and E denote the number of channels and links respectively. Numerical simulations suggest that our approach reduces system power by 30% over classical transmit power minimization based cross-layer algorithms.
|
Consideration of system power has been gaining attention in the energy-efficient wireless communications literature @cite_22 . Cui et al. focused on system-energy-constrained modulation optimization in @cite_38 . Sahai et al. investigated system power consumption -- especially decoder power consumption -- in @cite_25 . Isheden and Fettweis assumed circuit power to be a linear function of the data rate @cite_26 . All of these works focused on a single transceiver pair. Our approach differs in the following way: in NC-OFDMA technology, the ADC and DAC consume power not only for used channels (i.e., transmitted data) but also for nulled channels. Our work considers the power consumption related to the spectrum span and investigates the performance of NC-OFDMA based multi-hop networks.
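To make the trade-off concrete, here is a toy numerical sketch in which front-end (ADC/DAC) power grows linearly with the spectrum span, nulled channels included; the linear model and every constant are assumptions for illustration only:

```python
# Toy sketch of the spectrum-span power trade-off (all constants invented).
def system_power(tx_power_per_channel, used_channels, span_channels,
                 adc_dac_slope=0.05, static_circuit=0.1):
    """Total power (W): transmit power on used channels plus front-end
    power that grows with the full spectrum span, nulled channels included."""
    assert span_channels >= used_channels
    transmit = tx_power_per_channel * used_channels
    circuit = static_circuit + adc_dac_slope * span_channels
    return transmit + circuit

# Using 4 channels packed contiguously vs. spread over a span of 16:
print(system_power(0.2, used_channels=4, span_channels=4))   # 1.1 W
print(system_power(0.2, used_channels=4, span_channels=16))  # 1.7 W
# A steeper ADC/DAC slope widens this gap, which is why the slope
# influences scheduling decisions in the cross-layer formulation.
```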
|
{
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_25",
"@cite_22"
],
"mid": [
"2152481521",
"2147437672",
"1991470928",
""
],
"abstract": [
"Wireless systems where the nodes operate on batteries so that energy consumption must be minimized while satisfying given throughput and delay requirements are considered. In this context, the best modulation strategy to minimize the total energy consumption required to send a given number of bits is analyzed. The total energy consumption includes both the transmission energy and the circuit energy consumption. For uncoded systems, by optimizing the transmission time and the modulation parameters, it is shown that up to 80 energy savings is achievable over nonoptimized systems. For coded systems, it is shown that the benefit of coding varies with the transmission distance and the underlying modulation schemes.",
"Energy-efficient link adaptation is studied for transmission on a frequency-selective parallel AWGN channel. The total power dissipation model includes a circuit power that varies with the sum rate and a power amplifier efficiency that varies with the bandwidth used. The mathematical analysis provides insight into how the subcarrier rates should be chosen for optimal energy efficiency and suggests a simple fixed-point algorithm that finds the solution in few iterations. Moreover, ways of improving the energy efficiency are discussed based on the dependence on bandwidth and distance between transmitter and receiver.",
"Traditional communication theory focuses on minimizing transmit power. However, communication links are increasingly operating at shorter ranges where transmit power can be significantly smaller than the power consumed in decoding. This paper models the required decoding power and investigates the minimization of total system power from two complementary perspectives. First, an isolated point-to-point link is considered. Using new lower bounds on the complexity of message-passing decoding, lower bounds are derived on decoding power. These bounds show that 1) there is a fundamental tradeoff between transmit and decoding power; 2) unlike the implications of the traditional \"waterfall\" curve which focuses on transmit power, the total power must diverge to infinity as error probability goes to zero; 3) Regular LDPCs, and not their known capacity-achieving irregular counterparts, can be shown to be power order optimal in some cases; and 4) the optimizing transmit power is bounded away from the Shannon limit. Second, we consider a collection of links. When systems both generate and face interference, coding allows a system to support a higher density of transmitter-receiver pairs (assuming interference is treated as noise). However, at low densities, uncoded transmission may be more power-efficient in some cases.",
""
]
}
|
1309.1049
|
1621071313
|
Many real-world systems, such as social networks, rely on mining efficiently large graphs, with hundreds of millions of vertices and edges. This volume of information requires partitioning the graph across multiple nodes in a distributed system. This has a deep effect on performance, as traversing edges cut between partitions incurs a significant performance penalty due to the cost of communication. Thus, several systems in the literature have attempted to improve computational performance by enhancing graph partitioning, but they do not support another characteristic of real-world graphs: graphs are inherently dynamic, their topology evolves continuously, and subsequently the optimum partitioning also changes over time. In this work, we present the first system that dynamically repartitions massive graphs to adapt to structural changes. The system optimises graph partitioning to prevent performance degradation without using data replication. The system adopts an iterative vertex migration algorithm that relies on local information only, making complex coordination unnecessary. We show how the improvement in graph partitioning reduces execution time by over 50%, while adapting the partitioning to a large number of changes to the graph in three real-world scenarios.
|
The idea of partitioning the graph to minimise network communication is not new, and it has inspired several techniques that co-locate neighbouring vertices on the same host @cite_18 @cite_20 @cite_3 @cite_38 @cite_23 @cite_41 @cite_26 . These approaches try to exploit the locality present in graphs, whether due to vertices being geographically close in social networks, nearby molecules establishing chemical bonds, or web pages related by topic or domain, by placing neighbouring vertices in the same partition. The parallel version of METIS @cite_6 , ParMETIS @cite_13 , leverages parallel processing for partitioning the graph through multilevel k-way partitioning, adaptive re-partitioning, and parallel multi-constrained partitioning schemes, but it requires a global view of the graph, which greatly reduces its scalability @cite_22 . Other techniques study graph properties projected onto a small subset of vertices @cite_2 @cite_26 . These may be effective in particular contexts, but they are not broadly applicable.
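The quantity these locality-based techniques minimise is the edge cut, i.e., the number of edges whose endpoints land in different partitions; a minimal sketch with an invented toy graph:

```python
# Count cut edges for a given vertex -> partition assignment (toy graph).
edges = [("a", "b"), ("b", "c"), ("c", "a"),   # one tight community
         ("d", "e"), ("e", "f"), ("c", "d")]   # a second one, one bridge

def edge_cut(partition):
    return sum(partition[u] != partition[v] for u, v in edges)

oblivious = {v: ord(v) % 2 for v in "abcdef"}  # ignores topology
locality = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
print(edge_cut(oblivious), edge_cut(locality))  # 5 vs. 1 cut edges
```

Each cut edge translates into inter-machine traffic at query time, which is why co-locating neighbours pays off.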
|
{
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_41",
"@cite_3",
"@cite_6",
"@cite_23",
"@cite_2",
"@cite_13",
"@cite_20"
],
"mid": [
"2151936673",
"2029282358",
"2133965272",
"1969970763",
"2088844265",
"",
"1639284002",
"2131681506",
"2152171734",
"",
""
],
"abstract": [
"Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as “modularity” over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets.",
"This paper introduces a new divide-and-conquer framework for VLSI graph layout. Universally close upper and lower bounds are obtained for important cost functions such as layout area and propagation delay. The framework is also effectively used to design regular and configurable layouts, to assemble large networks of processor using restructurable chips, and to configure networks around faulty processors. It is also shown how good graph partitioning heuristics may be used to develop provably good layout strategy.",
"",
"GPS (for Graph Processing System) is a complete open-source system we developed for scalable, fault-tolerant, and easy-to-program execution of algorithms on extremely large graphs. This paper serves the dual role of describing the GPS system, and presenting techniques and experimental results for graph partitioning in distributed graph-processing systems like GPS. GPS is similar to Google's proprietary Pregel system, with three new features: (1) an extended API to make global computations more easily expressed and more efficient; (2) a dynamic repartitioning scheme that reassigns vertices to different workers during the computation, based on messaging patterns; and (3) an optimization that distributes adjacency lists of high-degree vertices across all compute nodes to improve performance. In addition to presenting the implementation of GPS and its novel features, we also present experimental results on the performance effects of both static and dynamic graph partitioning schemes, and we describe the compilation of a high-level domain-specific programming language to GPS, enabling easy expression of complex algorithms.",
"We give a O(slog n)-approximation algorithm for the sparsest cut, edge expansion, balanced separator, and graph conductance problems. This improves the O(log n)-approximation of Leighton and Rao (1988). We use a well-known semidefinite relaxation with triangle inequality constraints. Central to our analysis is a geometric theorem about projections of point sets in Rd, whose proof makes essential use of a phenomenon called measure concentration. We also describe an interesting and natural “approximate certificate” for a graph's expansion, which involves embedding an n-node expander in it with appropriate dilation and congestion. We call this an expander flow.",
"",
"",
"We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks.",
"Graphical relationships among Web pages have been exploited inmethods for ranking search results. To date, specific graphicalproperties have been used in these analyses. We introduce a WebProjection methodology that generalizes prior efforts of graphicalrelationships of the web in several ways. With the approach, wecreate subgraphs by projecting sets of pages and domains onto thelarger web graph, and then use machine learning to constructpredictive models that consider graphical properties as evidence. Wedescribe the method and then present experiments that illustrate theconstruction of predictive models of search result quality and userquery reformulation.",
"",
""
]
}
|
1309.1049
|
1621071313
|
Many real-world systems, such as social networks, rely on mining efficiently large graphs, with hundreds of millions of vertices and edges. This volume of information requires partitioning the graph across multiple nodes in a distributed system. This has a deep effect on performance, as traversing edges cut between partitions incurs a significant performance penalty due to the cost of communication. Thus, several systems in the literature have attempted to improve computational performance by enhancing graph partitioning, but they do not support another characteristic of real-world graphs: graphs are inherently dynamic, their topology evolves continuously, and subsequently the optimum partitioning also changes over time. In this work, we present the first system that dynamically repartitions massive graphs to adapt to structural changes. The system optimises graph partitioning to prevent performance degradation without using data replication. The system adopts an iterative vertex migration algorithm that relies on local information only, making complex coordination unnecessary. We show how the improvement in graph partitioning reduces execution time by over 50%, while adapting the partitioning to a large number of changes to the graph in three real-world scenarios.
|
Beyond these fixed techniques for static graphs, the need to continuously adapt to changes in the graph structure, without the overhead of re-loading an updated snapshot of the graph or re-partitioning from scratch, has recently been reported in practical @cite_8 @cite_21 @cite_0 and more theoretical @cite_39 studies. A few systems can cope with run-time changes @cite_29 @cite_25 @cite_32 ; however, they cannot handle structural graph changes without either degrading partition quality or triggering the full partitioning process.
|
{
"cite_N": [
"@cite_8",
"@cite_29",
"@cite_21",
"@cite_32",
"@cite_39",
"@cite_0",
"@cite_25"
],
"mid": [
"2145422369",
"2124939717",
"2074617510",
"2130747448",
"2082195599",
"2040395299",
"2063032661"
],
"abstract": [
"We investigate the design and implementation of a parallel workflow environment targeted towards the financial industry. The system performs real-time correlation analysis and clustering to identify trends within streaming high-frequency intra-day trading data. Our system utilizes state-of-the-art methods to optimize the delivery of computationally-expensive real-time stock market data analysis, with direct applications in automated algorithmic trading as well as knowledge discovery in high-throughput electronic exchanges. This paper describes the design of the system including the key online parallel algorithms for robust correlation calculation and clique-based clustering using stochastic local search. We evaluate the performance and scalability of the system, followed by a preliminary analysis of the results using data from the Toronto Stock Exchange.",
"This paper describes the Scalable Hyperlink Store, a distributed in-memory \"database\" for storing large portions of the web graph. SHS is an enabler for research on structural properties of the web graph as well as new link-based ranking algorithms. Previous work on specialized hyperlink databases focused on finding efficient compression algorithms for web graphs. By contrast, this work focuses on the systems issues of building such a database. Specifically, it describes how to build a hyperlink database that is fast, scalable, fault-tolerant, and incrementally updateable.",
"Network science is an interdisciplinary endeavor, with methods and applications drawn from across the natural, social, and information sciences. A prominent problem in network science is the algorithmic detection of tightly connected groups of nodes known as communities. We developed a generalized framework of network quality functions that allowed us to study the community structure of arbitrary multislice networks, which are combinations of individual networks coupled through links that connect each node in one network slice to itself in other slices. This framework allows studies of community structure in a general setting encompassing networks that evolve over time, have multiple types of links (multiplexity), and have multiple scales.",
"Kineograph is a distributed system that takes a stream of incoming data to construct a continuously changing graph, which captures the relationships that exist in the data feed. As a computing platform, Kineograph further supports graph-mining algorithms to extract timely insights from the fast-changing graph structure. To accommodate graph-mining algorithms that assume a static underlying graph, Kineograph creates a series of consistent snapshots, using a novel and efficient epoch commit protocol. To keep up with continuous updates on the graph, Kineograph includes an incremental graph-computation engine. We have developed three applications on top of Kineograph to analyze Twitter data: user ranking, approximate shortest paths, and controversial topic detection. For these applications, Kineograph takes a live Twitter data feed and maintains a graph of edges between all users and hashtags. Our evaluation shows that with 40 machines processing 100K tweets per second, Kineograph is able to continuously compute global properties, such as user ranks, with less than 2.5-minute timeliness guarantees. This rate of traffic is more than 10 times the reported peak rate of Twitter as of October 2011.",
"Real complex systems are inherently time-varying. Thanks to new communication systems and novel technologies, today it is possible to produce and analyze social and biological networks with detailed information on the time of occurrence and duration of each link. However, standard graph metrics introduced so far in complex network theory are mainly suited for static graphs, i.e., graphs in which the links do not change over time, or graphs built from time-varying systems by aggregating all the links as if they were concurrent in time. In this paper, we extend the notion of connectedness, and the definitions of node and graph components, to the case of time-varying graphs, which are represented as time-ordered sequences of graphs defined over a fixed set of nodes. We show that the problem of finding strongly connected components in a time-varying graph can be mapped into the problem of discovering the maximal-cliques in an opportunely constructed static graph, which we name the affine graph. It is, therefore, an NP-complete problem. As a practical example, we have performed a temporal component analysis of time-varying graphs constructed from three data sets of human interactions. The results show that taking time into account in the definition of graph components allows to capture important features of real systems. In particular, we observe a large variability in the size of node temporal in- and out-components. This is due to intrinsic fluctuations in the activity patterns of individuals, which cannot be detected by static graph analysis.",
"In this paper, we present a means to fold amino acid interaction networks. This is a graph whose vertices are the proteins amino acids and whose edges are the interactions between them. Our approach consists in exploiting the parallel between topological and structural properties. Thus, we establish a relation between the sequence and the structure relying on topological criteria. To fold this type of graph, we limit the topological space and we exploit an ant colony approach. We consider those graphs as dynamic graphs so that we can observe gradually the graph properties during the folding process.",
"Searching and mining large graphs today is critical to a variety of application domains, ranging from community detection in social networks, to de novo genome sequence assembly. Scalable processing of large graphs requires careful partitioning and distribution of graphs across clusters. In this paper, we investigate the problem of managing large-scale graphs in clusters and study access characteristics of local graph queries such as breadth-first search, random walk, and SPARQL queries, which are popular in real applications. These queries exhibit strong access locality, and therefore require specific data partitioning strategies. In this work, we propose a Self Evolving Distributed Graph Management Environment (Sedge), to minimize inter-machine communication during graph query processing in multiple machines. In order to improve query response time and throughput, Sedge introduces a two-level partition management architecture with complimentary primary partitions and dynamic secondary partitions. These two kinds of partitions are able to adapt in real time to changes in query workload. (Sedge) also includes a set of workload analyzing algorithms whose time complexity is linear or sublinear to graph size. Empirical results show that it significantly improves distributed graph processing on today's commodity clusters."
]
}
|
1309.1049
|
1621071313
|
Many real-world systems, such as social networks, rely on mining efficiently large graphs, with hundreds of millions of vertices and edges. This volume of information requires partitioning the graph across multiple nodes in a distributed system. This has a deep effect on performance, as traversing edges cut between partitions incurs a significant performance penalty due to the cost of communication. Thus, several systems in the literature have attempted to improve computational performance by enhancing graph partitioning, but they do not support another characteristic of real-world graphs: graphs are inherently dynamic, their topology evolves continuously, and subsequently the optimum partitioning also changes over time. In this work, we present the first system that dynamically repartitions massive graphs to adapt to structural changes. The system optimises graph partitioning to prevent performance degradation without using data replication. The system adopts an iterative vertex migration algorithm that relies on local information only, making complex coordination unnecessary. We show how the improvement in graph partitioning reduces execution time by over 50%, while adapting the partitioning to a large number of changes to the graph in three real-world scenarios.
|
Some techniques try to alleviate performance degradation by optimising partitioning during the initial loading of the graph into memory (i.e., they do not adapt at run time). For instance, the authors of @cite_33 evaluate a set of simple heuristics based on the idea of exploiting locality, applying them in a single streaming pass over the graph, with competitive results and low computation cost; they show the benefits of this approach in real systems. In addition to adapting to changes in structure, some systems dynamically adapt the partitioning of the graph to the bandwidth characteristics of the underlying computer network to maximise throughput @cite_35 . Mizan @cite_10 ignores the graph topology and instead optimises the system by performing runtime monitoring and load balancing: it finds hotspots on specific workers and migrates the vertices with the highest number of outgoing messages to a paired worker in an attempt to balance the load.
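For concreteness, here is a sketch of a one-pass streaming placement heuristic in the spirit of @cite_33, greedily favouring the partition that already holds most of a vertex's neighbours while penalising full partitions; the scoring function and data are illustrative assumptions:

```python
# One-pass streaming partitioning sketch: locality score damped by fullness.
def stream_partition(adjacency, k, capacity):
    placed, sizes = {}, [0] * k
    for v, neighbours in adjacency.items():      # single streaming pass
        def score(p):
            co = sum(1 for u in neighbours if placed.get(u) == p)
            return co * (1 - sizes[p] / capacity)
        # Tie-break towards the least loaded partition to keep balance.
        best = max(range(k), key=lambda p: (score(p), -sizes[p]))
        placed[v] = best
        sizes[best] += 1
    return placed

adjacency = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
             "d": ["c", "e"], "e": ["d", "f"], "f": ["e"]}
print(stream_partition(adjacency, k=2, capacity=4))
# -> {'a': 0, 'b': 0, 'c': 0, 'd': 0, 'e': 1, 'f': 1}: only d-e is cut
```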
|
{
"cite_N": [
"@cite_35",
"@cite_10",
"@cite_33"
],
"mid": [
"2166860697",
"2064635301",
"1971630691"
],
"abstract": [
"As the study of large graphs over hundreds of gigabytes becomes increasingly popular for various data-intensive applications in cloud computing, developing large graph processing systems has become a hot and fruitful research area. Many of those existing systems support a vertex-oriented execution model and allow users to develop custom logics on vertices. However, the inherently random access pattern on the vertex-oriented computation generates a significant amount of network traffic. While graph partitioning is known to be effective to reduce network traffic in graph processing, there is little attention given to how graph partitioning can be effectively integrated into large graph processing in the cloud environment. In this paper, we develop a novel graph partitioning framework to improve the network performance of graph partitioning itself, partitioned graph storage and vertex-oriented graph processing. All optimizations are specifically designed for the cloud network environment. In experiments, we develop a system prototype following Pregel (the latest vertex-oriented graph engine by Google), and extend it with our graph partitioning framework. We conduct the experiments with a real-world social network and synthetic graphs over 100GB each in a local cluster and on Amazon EC2. Our experimental results demonstrate the efficiency of our graph partitioning framework, and the effectiveness of network performance aware optimizations on the large graph processing engine.",
"Pregel [23] was recently introduced as a scalable graph mining system that can provide significant performance improvements over traditional MapReduce implementations. Existing implementations focus primarily on graph partitioning as a preprocessing step to balance computation across compute nodes. In this paper, we examine the runtime characteristics of a Pregel system. We show that graph partitioning alone is insufficient for minimizing end-to-end computation. Especially where data is very large or the runtime behavior of the algorithm is unknown, an adaptive approach is needed. To this end, we introduce Mizan, a Pregel system that achieves efficient load balancing to better adapt to changes in computing needs. Unlike known implementations of Pregel, Mizan does not assume any a priori knowledge of the structure of the graph or behavior of the algorithm. Instead, it monitors the runtime characteristics of the system. Mizan then performs efficient fine-grained vertex migration to balance computation and communication. We have fully implemented Mizan; using extensive evaluation we show that---especially for highly-dynamic workloads---Mizan provides up to 84 improvement over techniques leveraging static graph pre-partitioning.",
"Extracting knowledge by performing computations on graphs is becoming increasingly challenging as graphs grow in size. A standard approach distributes the graph over a cluster of nodes, but performing computations on a distributed graph is expensive if large amount of data have to be moved. Without partitioning the graph, communication quickly becomes a limiting factor in scaling the system up. Existing graph partitioning heuristics incur high computation and communication cost on large graphs, sometimes as high as the future computation itself. Observing that the graph has to be loaded into the cluster, we ask if the partitioning can be done at the same time with a lightweight streaming algorithm. We propose natural, simple heuristics and compare their performance to hashing and METIS, a fast, offline heuristic. We show on a large collection of graph datasets that our heuristics are a significant improvement, with the best obtaining an average gain of 76 . The heuristics are scalable in the size of the graphs and the number of partitions. Using our streaming partitioning methods, we are able to speed up PageRank computations on Spark, a distributed computation system, by 18 to 39 for large social networks."
]
}
|
1309.1049
|
1621071313
|
Many real-world systems, such as social networks, rely on mining efficiently large graphs, with hundreds of millions of vertices and edges. This volume of information requires partitioning the graph across multiple nodes in a distributed system. This has a deep effect on performance, as traversing edges cut between partitions incurs a significant performance penalty due to the cost of communication. Thus, several systems in the literature have attempted to improve computational performance by enhancing graph partitioning, but they do not support another characteristic of real-world graphs: graphs are inherently dynamic, their topology evolves continuously, and subsequently the optimum partitioning also changes over time. In this work, we present the first system that dynamically repartitions massive graphs to adapt to structural changes. The system optimises graph partitioning to prevent performance degradation without using data replication. The system adopts an iterative vertex migration algorithm that relies on local information only, making complex coordination unnecessary. We show how the improvement in graph partitioning reduces execution time by over 50%, while adapting the partitioning to a large number of changes to the graph in three real-world scenarios.
|
GPS @cite_22 applies the technique most similar to ours from the heuristic point of view, but its system implementation limits its application to static graphs. There are two main differences: 1) GPS allows vertices to move while an iteration is still running, whereas we move vertices between two consecutive steps; 2) to simplify locating a migrated vertex, GPS modifies its ID. This prevents adding new elements, since their IDs may conflict with that of a previously loaded and migrated vertex. We preserve the ID of the vertex by using a more complex vertex localisation mechanism, which enables near real-time changes to the topology and subsequent optimisations to increase vertex locality.
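For intuition, here is a toy sketch of the migrate-between-steps, ID-preserving scheme described above: a location table maps the immutable vertex ID to its current worker, so routing changes while the ID does not. All structures and the traffic threshold are hypothetical, not the actual system's API.

# Toy model of ID-preserving migration between consecutive steps (hypothetical
# structures). The location table maps the immutable vertex ID to its current
# worker, so messages still route correctly after a migration.
location = {}   # vertex id -> worker id; the ID itself is never rewritten
traffic = {}    # vertex id -> {worker id: messages exchanged last step}

def plan_migrations(threshold=0.6):
    """Run between two consecutive steps: move a vertex to the worker it
    talks to most, if that worker dominates the vertex's traffic."""
    moves = []
    for v, counts in traffic.items():
        target = max(counts, key=counts.get)
        total = sum(counts.values())
        if total and target != location[v] and counts[target] / total > threshold:
            moves.append((v, target))
    return moves

def apply_migrations(moves):
    for v, target in moves:
        location[v] = target   # only the location table changes, not the ID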
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"1969970763"
],
"abstract": [
"GPS (for Graph Processing System) is a complete open-source system we developed for scalable, fault-tolerant, and easy-to-program execution of algorithms on extremely large graphs. This paper serves the dual role of describing the GPS system, and presenting techniques and experimental results for graph partitioning in distributed graph-processing systems like GPS. GPS is similar to Google's proprietary Pregel system, with three new features: (1) an extended API to make global computations more easily expressed and more efficient; (2) a dynamic repartitioning scheme that reassigns vertices to different workers during the computation, based on messaging patterns; and (3) an optimization that distributes adjacency lists of high-degree vertices across all compute nodes to improve performance. In addition to presenting the implementation of GPS and its novel features, we also present experimental results on the performance effects of both static and dynamic graph partitioning schemes, and we describe the compilation of a high-level domain-specific programming language to GPS, enabling easy expression of complex algorithms."
]
}
|
1309.1049
|
1621071313
|
Many real-world systems, such as social networks, rely on mining efficiently large graphs, with hundreds of millions of vertices and edges. This volume of information requires partitioning the graph across multiple nodes in a distributed system. This has a deep effect on performance, as traversing edges cut between partitions incurs a significant performance penalty due to the cost of communication. Thus, several systems in the literature have attempted to improve computational performance by enhancing graph partitioning, but they do not support another characteristic of real-world graphs: graphs are inherently dynamic, their topology evolves continuously, and subsequently the optimum partitioning also changes over time. In this work, we present the first system that dynamically repartitions massive graphs to adapt to structural changes. The system optimises graph partitioning to prevent performance degradation without using data replication. The system adopts an iterative vertex migration algorithm that relies on local information only, making complex coordination unnecessary. We show how the improvement in graph partitioning reduces execution time by over 50%, while adapting the partitioning to a large number of changes to the graph in three real-world scenarios.
|
Initial partitioning strategies only optimise the starting graph, and several of these techniques require some degree of global information. This poses significant scalability problems @cite_5 , whereas our approach relies only on local information. Additionally, as shown in Figure , these approaches cannot cope with changes that alter the structure of the elements already loaded (e.g. vertex and edge deletion), nor can they keep the partitioning optimal as the graph changes.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2145124007"
],
"abstract": [
"Partitioning graphs at scale is a key challenge for any application that involves distributing a graph across disks, machines, or data centers. Graph partitioning is a very well studied problem with a rich literature, but existing algorithms typically can not scale to billions of edges, or can not provide guarantees about partition sizes. In this work we introduce an efficient algorithm, balanced label propagation, for precisely partitioning massive graphs while greedily maximizing edge locality, the number of edges that are assigned to the same shard of a partition. By combining the computational efficiency of label propagation --- where nodes are iteratively relabeled to the same 'label' as the plurality of their graph neighbors --- with the guarantees of constrained optimization --- guiding the propagation by a linear program constraining the partition sizes --- our algorithm makes it practically possible to partition graphs with billions of edges. Our algorithm is motivated by the challenge of performing graph predictions in a distributed system. Because this requires assigning each node in a graph to a physical machine with memory limitations, it is critically necessary to ensure the resulting partition shards do not overload any single machine. We evaluate our algorithm for its partitioning performance on the Facebook social graph, and also study its performance when partitioning Facebook's 'People You May Know' service (PYMK), the distributed system responsible for the feature extraction and ranking of the friends-of-friends of all active Facebook users. In a live deployment, we observed average query times and average network traffic levels that were 50.5 and 37.1 (respectively) when compared to the previous naive random sharding."
]
}
|
1309.1049
|
1621071313
|
Many real-world systems, such as social networks, rely on mining efficiently large graphs, with hundreds of millions of vertices and edges. This volume of information requires partitioning the graph across multiple nodes in a distributed system. This has a deep effect on performance, as traversing edges cut between partitions incurs a significant performance penalty due to the cost of communication. Thus, several systems in the literature have attempted to improve computational performance by enhancing graph partitioning, but they do not support another characteristic of real-world graphs: graphs are inherently dynamic, their topology evolves continuously, and subsequently the optimum partitioning also changes over time. In this work, we present the first system that dynamically repartitions massive graphs to adapt to structural changes. The system optimises graph partitioning to prevent performance degradation without using data replication. The system adopts an iterative vertex migration algorithm that relies on local information only, making complex coordination unnecessary. We show how the improvement in graph partitioning reduces execution time by over 50%, while adapting the partitioning to a large number of changes to the graph in three real-world scenarios.
|
In addition to adapting the initial partitioning of the graph, some systems attempt to keep overhead small when processing changing graph structures. In @cite_5 , partitioning was optimised for slowly changing graphs, with changes being applied the next time the graph was loaded. The authors employ a label propagation mechanism, enhanced with geographical information, to improve graph partitioning. The process involves linear programming, which is computationally very expensive (calculations of 100 CPU days are reported) and implies global aggregation of local (vertex-level) utility functions.
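A rough sketch of the label propagation idea follows; where the paper solves a linear program to constrain partition sizes, this sketch substitutes a simple greedy capacity check, so it is an approximation of the method rather than a faithful implementation.

# Label propagation with a crude balance constraint (greedy stand-in for the
# paper's linear-programming step): each vertex adopts the plurality label of
# its neighbours, but only if the target partition still has room.
from collections import Counter

def balanced_label_propagation(neighbours, labels, capacity, rounds=10):
    """neighbours: dict vertex -> iterable of neighbours.
    labels: dict vertex -> current partition label (mutated in place)."""
    for _ in range(rounds):
        sizes = Counter(labels.values())
        moved = False
        for v, nbrs in neighbours.items():
            votes = Counter(labels[u] for u in nbrs)
            if not votes:
                continue
            best, _ = votes.most_common(1)[0]
            if best != labels[v] and sizes[best] < capacity:
                sizes[labels[v]] -= 1
                sizes[best] += 1
                labels[v] = best
                moved = True
        if not moved:   # fixed point reached
            break
    return labels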
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2145124007"
],
"abstract": [
"Partitioning graphs at scale is a key challenge for any application that involves distributing a graph across disks, machines, or data centers. Graph partitioning is a very well studied problem with a rich literature, but existing algorithms typically can not scale to billions of edges, or can not provide guarantees about partition sizes. In this work we introduce an efficient algorithm, balanced label propagation, for precisely partitioning massive graphs while greedily maximizing edge locality, the number of edges that are assigned to the same shard of a partition. By combining the computational efficiency of label propagation --- where nodes are iteratively relabeled to the same 'label' as the plurality of their graph neighbors --- with the guarantees of constrained optimization --- guiding the propagation by a linear program constraining the partition sizes --- our algorithm makes it practically possible to partition graphs with billions of edges. Our algorithm is motivated by the challenge of performing graph predictions in a distributed system. Because this requires assigning each node in a graph to a physical machine with memory limitations, it is critically necessary to ensure the resulting partition shards do not overload any single machine. We evaluate our algorithm for its partitioning performance on the Facebook social graph, and also study its performance when partitioning Facebook's 'People You May Know' service (PYMK), the distributed system responsible for the feature extraction and ranking of the friends-of-friends of all active Facebook users. In a live deployment, we observed average query times and average network traffic levels that were 50.5 and 37.1 (respectively) when compared to the previous naive random sharding."
]
}
|
1309.1049
|
1621071313
|
Many real-world systems, such as social networks, rely on mining efficiently large graphs, with hundreds of millions of vertices and edges. This volume of information requires partitioning the graph across multiple nodes in a distributed system. This has a deep effect on performance, as traversing edges cut between partitions incurs a significant performance penalty due to the cost of communication. Thus, several systems in the literature have attempted to improve computational performance by enhancing graph partitioning, but they do not support another characteristic of real-world graphs: graphs are inherently dynamic, their topology evolves continuously, and subsequently the optimum partitioning also changes over time. In this work, we present the first system that dynamically repartitions massive graphs to adapt to structural changes. The system optimises graph partitioning to prevent performance degradation without using data replication. The system adopts an iterative vertex migration algorithm that relies on local information only, making complex coordination unnecessary. We show how the improvement in graph partitioning reduces execution time by over 50%, while adapting the partitioning to a large number of changes to the graph in three real-world scenarios.
|
Sedge @cite_25 is a dynamic replication mechanism (as opposed to a re-partitioning one). Sedge keeps a fixed set of non-overlapping partitions and then dynamically creates new ones, or replicates some of them on different machines, to cope with variations in workload. Replicated systems are more focused on providing low latency to many concurrent, short-lived queries. Our system instead maintains a few long-lasting (continuous) queries whose results are updated as a consequence of changes to the information or the topology of the graph.
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2063032661"
],
"abstract": [
"Searching and mining large graphs today is critical to a variety of application domains, ranging from community detection in social networks, to de novo genome sequence assembly. Scalable processing of large graphs requires careful partitioning and distribution of graphs across clusters. In this paper, we investigate the problem of managing large-scale graphs in clusters and study access characteristics of local graph queries such as breadth-first search, random walk, and SPARQL queries, which are popular in real applications. These queries exhibit strong access locality, and therefore require specific data partitioning strategies. In this work, we propose a Self Evolving Distributed Graph Management Environment (Sedge), to minimize inter-machine communication during graph query processing in multiple machines. In order to improve query response time and throughput, Sedge introduces a two-level partition management architecture with complimentary primary partitions and dynamic secondary partitions. These two kinds of partitions are able to adapt in real time to changes in query workload. (Sedge) also includes a set of workload analyzing algorithms whose time complexity is linear or sublinear to graph size. Empirical results show that it significantly improves distributed graph processing on today's commodity clusters."
]
}
|
1309.0659
|
1508956227
|
In this paper, we study how an agent's belief is affected by her neighbors in a social network. We first introduce a general framework, where every agent has an initial belief on a statement, and updates her belief according to her and her neighbors' current beliefs under some belief evolution functions, which, arguably, should satisfy some basic properties. Then, we focus on the majority rule belief evolution function, that is, an agent will (dis)believe the statement iff more than half of her neighbors (dis)believe it. We consider some fundamental issues about majority rule belief evolution, for instance, whether the belief evolution process will eventually converge. The answer is no in general. However, for random asynchronous belief evolution, this is indeed the case.
|
The majority rule evolution function is named after the well-known approach of the same name in voting systems @cite_10 . Generally speaking, belief evolution can be considered as voting in a social network for two opposite candidates. However, in belief evolution, majority rule voting is performed locally, individually, in a distributed fashion, and iteratively, whereas in a voting system it is performed globally, over the whole electorate, centrally, and only once.
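A minimal sketch of both update schedules follows, assuming a tie leaves the agent's belief unchanged (one possible convention; the framework's definition leaves room for others).

# Majority-rule belief evolution on an undirected social network.
# beliefs: dict agent -> bool; neighbours: dict agent -> list of agents.
import random

def majority(beliefs, agent, neighbours):
    yes = sum(beliefs[u] for u in neighbours[agent])
    n = len(neighbours[agent])
    if 2 * yes > n:
        return True        # strict majority believes
    if 2 * yes < n:
        return False       # strict majority disbelieves
    return beliefs[agent]  # tie: keep the current belief (one convention)

def synchronous_step(beliefs, neighbours):
    # all agents update simultaneously from the same snapshot
    return {a: majority(beliefs, a, neighbours) for a in beliefs}

def random_asynchronous(beliefs, neighbours, steps=10000):
    # one uniformly chosen agent updates per step
    agents = list(beliefs)
    for _ in range(steps):
        a = random.choice(agents)
        beliefs[a] = majority(beliefs, a, neighbours)
    return beliefs

Under the synchronous schedule, two mutually connected agents holding opposite beliefs swap them forever, illustrating the non-convergence mentioned in the abstract; the random asynchronous schedule avoids this oscillation.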
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"1590250213"
],
"abstract": [
"GLOSSARY OF MAJOR CONCEPTS 1. Introduction 2. Arrow's Impossibility Result 3. Majority Decision Under Restricted Domains 4. Individual Rights 5. Manipulability 6. Espacing Impossibilities: Social Choice Rules 7. Distributive Justice: Rawlsian and Utilitarian Rules 8. Cooperative Bargaining 9. Empirical Social Choice 10. A Few Steps Beyond"
]
}
|
1309.0659
|
1508956227
|
In this paper, we study how an agent's belief is affected by her neighbors in a social network. We first introduce a general framework, where every agent has an initial belief on a statement, and updates her belief according to her and her neighbors' current beliefs under some belief evolution functions, which, arguably, should satisfy some basic properties. Then, we focus on the majority rule belief evolution function, that is, an agent will (dis)believe the statement iff more than half of her neighbors (dis)believe it. We consider some fundamental issues about majority rule belief evolution, for instance, whether the belief evolution process will eventually converge. The answer is no in general. However, for random asynchronous belief evolution, this is indeed the case.
|
There are other related works, from several different disciplines. For instance, an alternative model of opinion formation takes into account the opinion of just a single friend (rather than all friends), chosen according to contact frequency @cite_1 . Another interesting approach, called replicator dynamics @cite_11 , takes the historical performance of beliefs into account: beliefs that performed better in the past are more likely to be replicated. However, for space reasons, we are not able to discuss all of them in detail.
|
{
"cite_N": [
"@cite_1",
"@cite_11"
],
"mid": [
"2120015072",
"1567092208"
],
"abstract": [
"We provide a model to investigate the tension between information aggregation and spread of misinformation. Individuals meet pairwise and exchange information, which is modeled as both individuals adopting the average of their pre-meeting beliefs. \"Forceful\" agents influence the beliefs of (some of) the other individuals they meet, but do not change their own opinions. We characterize how the presence of forceful agents interferes with information aggregation. Under the assumption that even forceful agents obtain some information from others, we first show that all beliefs converge to a stochastic consensus. Our main results quantify the extent of misinformation by providing bounds or exact results on the gap between the consensus value and the benchmark without forceful agents (where there is efficient information aggregation). The worst outcomes obtain when there are several forceful agents who update their beliefs only on the basis of information from individuals that have been influenced by them.",
"This text offers a systematic, rigorous, and unified presentation of evolutionary game theory, covering the core developments of the theory from its inception in biology in the 1970s through recent advances. Evolutionary game theory, which studies the behavior of large populations of strategically interacting agents, is used by economists to make predictions in settings where traditional assumptions about agents' rationality and knowledge may not be justified. Recently, computer scientists, transportation scientists, engineers, and control theorists have also turned to evolutionary game theory, seeking tools for modeling dynamics in multiagent systems. Population Games and Evolutionary Dynamics provides a point of entry into the field for researchers and students in all of these disciplines. The text first considers population games, which provide a simple, powerful model for studying strategic interactions among large numbers of anonymous agents. It then studies the dynamics of behavior in these games. By introducing a general model of myopic strategy revision by individual agents, the text provides foundations for two distinct approaches to aggregate behavior dynamics: the deterministic approach, based on differential equations, and the stochastic approach, based on Markov processes. Key results on local stability, global convergence, stochastic stability, and nonconvergence are developed in detail. Ten substantial appendixes present the mathematical tools needed to work in evolutionary game theory, offering a practical introduction to the methods of dynamic modeling. Accompanying the text are more than 200 color illustrations of the mathematics and theoretical results; many were created using the Dynamo software suite, which is freely available on the author's Web site. Readers are encouraged to use Dynamo to run quick numerical experiments and to create publishable figures for their own research."
]
}
|
1309.0073
|
1505920559
|
With the increased popularity of smartphones, various security threats and privacy leakages targeting them are discovered and investigated. In this work, we present , a framework to authenticate users silently and transparently by exploiting dynamics mined from the user touch behavior biometrics and the micro-movement of the device caused by user's screen-touch actions. We build a "touch-based biometrics" model of the owner by extracting some principal features, and then verify whether the current user is the owner or guest attacker. When using the smartphone, the unique operating dynamics of the user is detected and learnt by collecting the sensor data and touch events silently. When users are mobile, the micro-movement of mobile devices caused by touch is suppressed by that due to the large scale user-movement which will render the touch-based biometrics ineffective. To address this, we integrate a movement-based biometrics for each user with previous touch-based biometrics. We conduct extensive evaluations of our approaches on the Android smartphone, and show that the user identification accuracy is over 99%.
|
Researchers have also inferred keystrokes on traditional keyboards from acoustic signals @cite_24 @cite_9 , timing observations @cite_16 , and electromagnetic waves @cite_5 . (sp)iPhone @cite_27 takes advantage of motion sensors to detect vibrations and infer keystrokes on a nearby keyboard. Both @cite_23 and @cite_3 study the possibility of identifying the password sequence by examining the smudges left on the touch screen. Besides, @cite_25 and @cite_4 propose methods to infer a user's input by observing the touch actions with a camera. These works indicate that the interaction between user and device can be observed through sensors and may lead to privacy leakage.
|
{
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_3",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_5",
"@cite_16",
"@cite_25"
],
"mid": [
"",
"",
"",
"2131877534",
"2150639461",
"1626992774",
"1660981793",
"2074324486",
""
],
"abstract": [
"",
"",
"",
"We examine the problem of keyboard acoustic emanations. We present a novel attack taking as input a 10-minute sound recording of a user typing English text using a keyboard, and then recovering up to 96 of typed characters. There is no need for a labeled training recording. Moreover the recognizer bootstrapped this way can even recognize random text such as passwords: In our experiments, 90 of 5-character random passwords using only letters can be generated in fewer than 20 attempts by an adversary; 80 of 10-character passwords can be generated in fewer than 75 attempts. Our attack uses the statistical constraints of the underlying content, English language, to reconstruct text from sound recordings without any labeled training data. The attack uses a combination of standard machine learning and speech recognition techniques, including cepstrum features, Hidden Markov Models, linear classification, and feedback-based incremental learning.",
"Mobile phones are increasingly equipped with a range of highly responsive sensors. From cameras and GPS receivers to three-axis accelerometers, applications running on these devices are able to experience rich interactions with their environment. Unfortunately, some applications may be able to use such sensors to monitor their surroundings in unintended ways. In this paper, we demonstrate that an application with access to accelerometer readings on a modern mobile phone can use such information to recover text entered on a nearby keyboard. Note that unlike previous emanation recovery papers, the accelerometers on such devices sample at near the Nyquist rate, making previous techniques unworkable. Our application instead detects and decodes keystrokes by measuring the relative physical position and distance between each vibration. We then match abstracted words against candidate dictionaries and record word recovery rates as high as 80 . In so doing, we demonstrate the potential to recover significant information from the vicinity of a mobile device without gaining access to resources generally considered to be the most likely sources of leakage (e.g., microphone, camera).",
"Touch screens are an increasingly common feature on personal computing devices, especially smartphones, where size and user interface advantages accrue from consolidating multiple hardware components (keyboard, number pad, etc.) into a single software definable user interface. Oily residues, or smudges, on the touch screen surface, are one side effect of touches from which frequently used patterns such as a graphical password might be inferred. In this paper we examine the feasibility of such smudge attacks on touch screens for smartphones, and focus our analysis on the Android password pattern. We first investigate the conditions (e.g., lighting and camera orientation) under which smudges are easily extracted. In the vast majority of settings, partial or complete patterns are easily retrieved. We also emulate usage situations that interfere with pattern identification, and show that pattern smudges continue to be recognizable. Finally, we provide a preliminary analysis of applying the information learned in a smudge attack to guessing an Android password pattern.",
"Computer keyboards are often used to transmit confidential data such as passwords. Since they contain electronic components, keyboards eventually emit electromagnetic waves. These emanations could reveal sensitive information such as keystrokes. The technique generally used to detect compromising emanations is based on a wide-band receiver, tuned on a specific frequency. However, this method may not be optimal since a significant amount of information is lost during the signal acquisition. Our approach is to acquire the raw signal directly from the antenna and to process the entire captured electromagnetic spectrum. Thanks to this method, we detected four different kinds of compromising electromagnetic emanations generated by wired and wireless keyboards. These emissions lead to a full or a partial recovery of the keystrokes. We implemented these sidechannel attacks and our best practical attack fully recovered 95 of the keystrokes of a PS 2 keyboard at a distance up to 20 meters, even through walls. We tested 12 different keyboard models bought between 2001 and 2008 (PS 2, USB, wireless and laptop). They are all vulnerable to at least one of the four attacks. We conclude that most of modern computer keyboards generate compromising emanations (mainly because of the manufacturer cost pressures in the design). Hence, they are not safe to transmit confidential information.",
"Keypads are commonly used to enter personal identification numbers (PIN) which are intended to authenticate a user based on what they know. A number of those keypads such as ATM inputs and door keypads provide an audio feedback to the user for each button pressed. Such audio feedback are observable from a modest distance. We are looking at quantifying the information leaking from delays between acoustic feedback pulses. Preliminary experiments suggest that by using a Hidden Markov Model, it might be possible to substantially narrow the search space. A subsequent brute force search on the reduced search space could be possible with- out triggering alerts, lockouts or other mechanisms design to thwart plain brute force attempts.",
""
]
}
|
1309.0073
|
1505920559
|
With the increased popularity of smartphones, various security threats and privacy leakages targeting them are discovered and investigated. In this work, we present , a framework to authenticate users silently and transparently by exploiting dynamics mined from the user touch behavior biometrics and the micro-movement of the device caused by user's screen-touch actions. We build a "touch-based biometrics" model of the owner by extracting some principal features, and then verify whether the current user is the owner or guest attacker. When using the smartphone, the unique operating dynamics of the user is detected and learnt by collecting the sensor data and touch events silently. When users are mobile, the micro-movement of mobile devices caused by touch is suppressed by that due to the large scale user-movement which will render the touch-based biometrics ineffective. To address this, we integrate a movement-based biometrics for each user with previous touch-based biometrics. We conduct extensive evaluations of our approaches on the Android smartphone, and show that the user identification accuracy is over 99%.
|
Recently, some works have addressed user identification with behavioral biometrics in a continuous or implicit manner. In these works, identification services run in the background and identify the current user in real time. For example, @cite_8 continuously authenticates users based on 30 behavioral features, including touch features and motion sensor features; the reported EER is approximately 2%. @cite_20 combines motion, voice, location history and multi-touch data to identify smartphone users, with an average error rate of 3.6%. @cite_21 uses a special digital sensor glove to achieve highly accurate continuous identification. These works use special devices or motion sensors to enrich the identification features and improve the poor accuracy obtained with pure touch information. But they ignore the scenario in which the user operates the phone while walking, where the micro-movement caused by touch is suppressed by the large-scale movement, which makes the accuracy of the existing methods deteriorate.
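To make the shared pipeline of these systems concrete, here is a generic sketch: per-stroke features feed a binary owner-vs-guest classifier. The specific features and the choice of an SVM are illustrative, not taken from any one of the cited papers.

# Generic sketch of the continuous-authentication pipeline these systems
# share: per-stroke features -> classifier -> owner/guest decision.
import numpy as np
from sklearn.svm import SVC

def stroke_features(stroke):
    """stroke: array of (t, x, y, pressure) samples for one touch gesture."""
    t, x, y, p = stroke.T
    dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
    speed = np.hypot(dx, dy) / np.maximum(dt, 1e-9)
    return np.array([t[-1] - t[0],                       # duration
                     np.hypot(x[-1] - x[0], y[-1] - y[0]),  # end-to-end length
                     speed.mean(), speed.max(),
                     p.mean(), p.max()])

def train(owner_strokes, guest_strokes):
    X = np.array([stroke_features(s) for s in owner_strokes + guest_strokes])
    y = np.array([1] * len(owner_strokes) + [0] * len(guest_strokes))
    return SVC(kernel="rbf", probability=True).fit(X, y)

def is_owner(model, stroke, threshold=0.5):
    # runs silently in the background on every new stroke
    return model.predict_proba([stroke_features(stroke)])[0, 1] > threshold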
|
{
"cite_N": [
"@cite_21",
"@cite_20",
"@cite_8"
],
"mid": [
"2535614671",
"",
"2151854612"
],
"abstract": [
"Securing the sensitive data stored and accessed from mobile devices makes user authentication a problem of paramount importance. The tension between security and usability renders however the task of user authentication on mobile devices a challenging task. This paper introduces FAST (Fingergestures Authentication System using Touchscreen), a novel touchscreen based authentication approach on mobile devices. Besides extracting touch data from touchscreen equipped smartphones, FAST complements and validates this data using a digital sensor glove that we have built using off-the-shelf components. FAST leverages state-of-the-art classification algorithms to provide transparent and continuous mobile system protection. A notable feature is FAST 's continuous, user transparent post-login authentication. We use touch data collected from 40 users to show that FAST achieves a False Accept Rate (FAR) of 4.66 and False Reject Rate of 0.13 for the continuous post-login user authentication. The low FAR and FRR values indicate that FAST provides excellent post-login access security, without disturbing the honest mobile users.",
"",
"We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smart phone. We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test how this behavioral pattern exhibits consistency over time, we collected touch data from users interacting with a smart phone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touch screen. The classifier achieves a median equal error rate of 0 for intrasession authentication, 2 -3 for intersession authentication, and below 4 when the authentication test was carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as a part of a multimodal biometric authentication system."
]
}
|
1309.0717
|
2017209227
|
We develop a polynomial translation from finite control pi-calculus processes to safe low-level Petri nets. To our knowledge, this is the first such translation. It is natural in that there is a close correspondence between the control flows, enjoys a bisimulation result, and is suitable for practical model checking.
|
There are two main approaches to FCP verification. The first is to directly generate the state space of the model, as is done (on-the-fly) by the Mobility Workbench (MWB) @cite_4 . This approach is relatively straightforward but has a number of disadvantages. In particular, its scalability is poor due to the complexity of the semantics, which restricts the use of heuristics for pruning the state space, and due to the need for expensive operations (like equivalence checks @cite_7 ) every time a new state is generated. Furthermore, some efficient model checking techniques like symbolic representations of state spaces are difficult to apply.
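The following sketch (an illustration of the general approach, not MWB's actual implementation) shows where the cost arises: every freshly generated state must be compared, up to structural congruence, against all states seen so far, and each comparison is itself expensive.

# Illustration of explicit, on-the-fly state-space generation and why it is
# costly: each new state is tested, up to structural congruence, against all
# states already seen.
from collections import deque

def explore(initial, successors, congruent):
    """successors(s) -> iterable of next states;
    congruent(s, t) -> True iff s and t are structurally congruent
    (itself an expensive check, GI-hard in general, cf. @cite_7)."""
    seen = [initial]
    frontier = deque([initial])
    while frontier:
        s = frontier.popleft()
        for t in successors(s):
            # linear scan with an expensive per-pair test: this is the
            # bottleneck a symbolic or net-based encoding avoids
            if not any(congruent(t, u) for u in seen):
                seen.append(t)
                frontier.append(t)
    return seen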
|
{
"cite_N": [
"@cite_4",
"@cite_7"
],
"mid": [
"1955679363",
"2120995986"
],
"abstract": [
"In this paper we describe the first prototype version of the Mobility Workbench (MWB), an automated tool for manipulating and analyzing mobile concurrent systems (those with evolving connectivity s ...",
"We show that the problems of checking pi-Calculus structural congruence (piSC) and graph isomorphism (GI) are Karp reducible to each other. The reduction from GI to piSC is given explicitly, and the reduction in the opposite direction proceeds by transforming piSC into an instance of the term equality problem (i.e. the problem of deciding equivalence of two terms in the presence of associative and or commutative operations and commutative variable-binding quantifiers), which is known to be Karp reducible to GI. Our result is robust in the sense that it holds for several variants of structural congruence and some rather restrictive fragments of pi-Calculus.Furthermore, we address the question of solving piSC in practice, and describe a number of optimisations exploiting specific features of pi-Calculus terms, which allow one to significantly reduce the size of the resulting graphs that have to be checked for isomorphism."
]
}
|
1309.0717
|
2017209227
|
We develop a polynomial translation from finite control pi-calculus processes to safe low-level Petri nets. To our knowledge, this is the first such translation. It is natural in that there is a close correspondence between the control flows, enjoys a bisimulation result, and is suitable for practical model checking.
|
Although several translations of the pi-calculus to Petri nets have been proposed, none of them provides a polynomial translation of FCPs to safe PNs. The verification kit HAL @cite_8 translates a pi-calculus model into a History Dependent automaton --- a finite automaton where states are labelled by sets of names that represent restrictions @cite_14 @cite_23 . For model checking, these automata are further translated to finite automata @cite_8 . As in our approach, the idea is to replace restrictions with fresh names, but the translation stores full substitutions, which may yield an exponential blow-up of the finite automaton. Our translation avoids this blow-up by compactly representing substitutions as PN markings. This, however, requires careful substitution manipulation and reference counting.
|
{
"cite_N": [
"@cite_14",
"@cite_23",
"@cite_8"
],
"mid": [
"1839078145",
"",
"2070368511"
],
"abstract": [
"In this paper we associate to every π-calculus agent an irredundant unfolding, i.e., a labeled transition system equipped with the ordinary notion of strong bisimilarity, so that agents are mapped into strongly bisimilar unfoldings if and only if they are early strongly bisimilar. For a class of finitary agents (that strictly contains the finite control agents) without matching, the corresponding unfoldings are finite and can be built efficiently. The main consequence of the results presented in the paper is that the irredundant unfolding can be constructed also for a single agent, and then a minimal realization can be derived from it employing the ordinary partition refinement algorithm. Instead, according toprevious results only pairs of π-calculus agents could be unfolded and tested for bisimilarity, and no minimization of a single agent was possible. Another consequence is the improvement of the complexity bound for checking bisimilarity of finitary agents without matching.",
"",
"This article presents a semantic-based environment for reasoning about the behavior of mobile systems. The verification environment, called HAL, exploits a novel automata-like model that allows finite-state verification of systems specified in the π-calculus. The HAL system is able to interface with several efficient toolkits (e.g. model-checkers) to determine whether or not certain properties hold for a given specification. We report experimental results on some case studies."
]
}
|
1309.0717
|
2017209227
|
We develop a polynomial translation from finite control pi-calculus processes to safe low-level Petri nets. To our knowledge, this is the first such translation. It is natural in that there is a close correspondence between the control flows, enjoys a bisimulation result, and is suitable for practical model checking.
|
Amadio and Meyssonnier @cite_20 replace unused restricted names by generic free names. Their translation instantiates substitutions: @math is represented by @math . This creates an exponential blow-up: since the substitutions change over time, @math public names and @math variables may yield @math instantiated terms. Moreover, since the number of processes to be modified by replacement is not bounded in @cite_20 , Amadio and Meyssonnier use PNs with transfer. (Their translation handles a subset of the pi-calculus incomparable with FCPs.) As this paper shows, transfer nets are an unnecessarily powerful target formalism for FCPs --- reachability is undecidable in such nets @cite_11 .
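For intuition about the blow-up (using illustrative symbols that are not in the source): if a substitution maps each of m variables independently to one of n public names, up to n^m distinct instantiated terms may have to be materialised, which is exponential in the number of variables.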
|
{
"cite_N": [
"@cite_20",
"@cite_11"
],
"mid": [
"1539680626",
"1513999490"
],
"abstract": [
"We study the decidability of the control reachability problem for various fragments of the asynchronous π-calculus. We consider the combination of three main features: name generation, name mobility, and unbounded control. We show that the combination of name generation with either name mobility or unbounded control leads to an undecidable fragment. On the other hand, we prove that name generation with unique receiver and bounded input (a condition weaker than bounded control) is decidable by reduction to the coverability problem for Petri nets with transfer (and back).",
"We study Petri nets with Reset arcs (also Transfer and Doubling arcs) in combination with other extensions of the basic Petri net model. While Reachability is undecidable in all these extensions (indeed they are Turing-powerful), we exhibit unexpected frontiers for the decidability of Termination, Coverability, Boundedness and place-Boundedness. In particular, we show counter-intuitive separations between seemingly related problems. Our main theorem is the very surprising fact that boundedness is undecidable for Petri nets with Reset arcs."
]
}
|
1309.0717
|
2017209227
|
We develop a polynomial translation from finite control pi-calculus processes to safe low-level Petri nets. To our knowledge, this is the first such translation. It is natural in that there is a close correspondence between the control flows, enjoys a bisimulation result, and is suitable for practical model checking.
|
Peschanski, Klaudel and Devillers @cite_3 translate @math -graphs (a graphical variant of the pi-calculus) into high-level PNs. The technique works on a fragment that is equivalent to FCPs. However, the target formalism is unnecessarily powerful, and the paper provides no experimental evaluation.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2172422010"
],
"abstract": [
"We present a Petri net interpretation of the pi-graphs - a graphical variant of the pi-calculus. Characterizing labelled transition systems, the translation can be used to reason in Petri net terms about open reconfigurable systems. We demonstrate that the pi-graphs and their translated Petri nets agree at the semantic level. In consequence, existing results on pi-graphs naturally extend to the translated Petri nets, most notably a guarantee of finiteness by construction. © 2011 Springer-Verlag."
]
}
|
1309.0717
|
2017209227
|
We develop a polynomial translation from finite control pi-calculus processes to safe low-level Petri nets. To our knowledge, this is the first such translation. It is natural in that there is a close correspondence between the control flows, enjoys a bisimulation result, and is suitable for practical model checking.
|
Our earlier translation @cite_17 identifies groups of processes that share restricted names. In @cite_16 , we modify it to generate safe low-level PNs and use an unfolding-based model checker. The experiments indicate that this technique is more scalable than the ones above, and it has the advantage of generating low-level rather than high-level PNs. However, the resulting PN may still be exponentially large.
|
{
"cite_N": [
"@cite_16",
"@cite_17"
],
"mid": [
"2109613468",
"2091676842"
],
"abstract": [
"We propose a technique for verification of mobile systems. We translate finite control processes, a well-known subset of π-Calculus, into Petri nets, which are subsequently used formodel checking. This translation always yields bounded Petri nets with a small bound, and we develop a technique for computing a non-trivial bound by static analysis. Moreover, we introduce the notion of safe processes, a subset of finite control processes, for which our translation yields safe Petri nets, and show that every finite control process can be translated into a safe one of at most quadratic size. This gives a possibility to translate every finite control process into a safe Petri net, for which efficient unfolding-based verification is possible. Our experiments show that this approach has a significant advantage over other existing tools for verification of mobile systems in terms of memory consumption and runtime. We also demonstrate the applicability of our method on a realistic model of an automated manufacturing system.",
"Automata-theoretic representations have proven useful in the automatic and exact analysis of computing systems. We propose a new semantical mapping of π-Calculus processes into place transition Petri nets. Our translation exploits the connections created by restricted names and can yield finite nets even for processes with unbounded name and unbounded process creation. The property of structural stationarity characterises the processes mapped to finite nets. We provide exact conditions for structural stationarity using novel characteristic functions. As application of the theory, we identify a rich syntactic class of structurally stationary processes, called finite handler processes. Our Petri net translation facilitates the automatic verification of a case study modelled in this class."
]
}
|
1308.6413
|
1900194038
|
Service orientation fosters a high-level model for distributed applications development, which is based on the discovery, composition and reuse of existing software services. However, the heterogeneity among current service-oriented technologies renders the important task of service discovery tedious and ineffective. This dissertation proposes a new approach to address this challenge. Specifically, it contributes a framework supporting the unified discovery of heterogeneous services, with a focus on web, peer-to-peer, and grid services. The framework comprises a service query language and its enacting service discovery engine. Overall, the proposed solution is characterized by generality and flexibility, which are ensured by appropriate abstractions, extension points, and their supporting mechanisms. The viability, performance, and effectiveness of the proposed framework are demonstrated by experimental measurements.
|
Most currently existing languages for service discovery fail to address the heterogeneity in service descriptions and service discovery mechanisms. In @cite_13 @cite_9 , the authors propose the use of XQuery to express queries over service descriptions that are constructed according to specific schemas. Similarly, in @cite_17 @cite_24 , service queries can only be evaluated against WSDL documents. In an effort to address the need for multi-dimensional query formulation, the approach proposed in @cite_12 defines certain extensions to the UDDI specification; still, queries are only compatible with registries of that type. Also, numerous approaches that have adopted existing semantic service description languages to express service queries, as for example in @cite_22 , are naturally confined to limited sets of available services.
|
{
"cite_N": [
"@cite_22",
"@cite_9",
"@cite_24",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"1602713645",
"2127096783",
"1973711420",
"2102666972",
"2075106684",
"2007495022"
],
"abstract": [
"The increasing availability of web services demands for a discovery mechanism to find services that satisfy our requirement. UDDI provides a web wide registry of web services, but its lack of an explicit capability representation and its syntax based search provided produces results that are coarse in nature. We propose to base the discovery mechanism on OWL-S. OWL-S allows us to semantically describe web services in terms of capabilities offered and to perform logic inference to match the capabilities requested with the capabilities offered. We propose OWL-S UDDI matchmaker that combines the better of two technologies. We also implemented and analyzed its performance.",
"The requirements of service discovery are spread everywhere as service discovery is the prerequisite of resource sharing, data integration and process collaboration in the network environments. In this paper, a unified service discovery framework Service CatalogNet is presented to cover many application scenarios including grid environment. We introduce the service description model in Service CatalogNet which can describe non-functional features by dynamic contexts and is supported by the semantic query engine. We also propose the unified service discovery mechanism and give the evaluations of the algorithm in terms of number of messages and query time.",
"Web services have acquired enormous popularity among software developers. This popularity has motivated developers to publish a large number of Web service descriptions in UDDI registries. Although these registries provide search facilities, they are still rather difficult to use and often require service consumers to spend too much time manually browsing and selecting service descriptions. This paper presents a novel search method for Web services called WSQBE that aims at both easing query specification and assisting discoverers by returning a short and accurate list of candidate services. In contrast with previous approaches, WSQBE discovery process is based on an automatic search space reduction mechanism that makes this approach more efficient. Empirical evaluations of WSQBE search space reduction mechanism, retrieval performance, processing time and memory usage, using a registry with 391 service descriptions, are presented.",
"In this paper, we propose the Web Service Discovery Architecture (WSDA). At runtime, Grid applications can use this architecture to discover and adapt to remote services. WSDA promotes an interoperable web service discovery layer by defining appropriate services, interfaces, operations and protocol bindings, based on industry standards. It is unified because it subsumes an array of disparate concepts, interfaces and protocols under a single semi-transparent umbrella. It is modular because it defines a small set of orthogonal multi-purpose communication primitives (building blocks) for discovery. These primitives cover service identification, service description retrieval, data publication as well as minimal and powerful query support. The architecture is open and flexible because each primitive can be used, implemented, customized and extended in many ways. It is powerful because the individual primitives can be combined and plugged together by specific clients and services to yield a wide range of behaviors and emerging synergies.",
"Web services technology has generated a lot interest, but its adoption rate has been slow. This paper discusses issues related to this slow take up and argues that quality of services is one of the contributing factors. The paper proposes a new Web services discovery model in which the functional and non-functional requirements (i.e. quality of services) are taken into account for the service discovery. The proposed model should give Web services consumers some confidence about the quality of service of the discovered Web services.",
"The web-services stack of standards is designed to support the reuse and interoperation of software components on the web. A critical step in the process of developing applications based on web services is service discovery, i.e. the identification of existing web services that can potentially be used in the context of a new web application. Discovery through catalog-style browsing (such as supported currently by web-service registries) is clearly insufficient. To support programmatic service discovery, we have developed a suite of methods that assess the similarity between two WSDL (Web Service Description Language) specifications based on the structure of their data types and operations and the semantics of their natural language descriptions and identifiers. Given only a textual description of the desired service, a semantic information-retrieval method can be used to identify and order the most relevant WSDL specifications based on the similarity of the element descriptions of the available specifications with the query. If a (potentially partial) specification of the desired service behavior is also available, this set of likely candidates can be further refined by a semantic structure-matching step, assessing the structural similarity of the desired vs the retrieved services and the semantic similarity of their identifiers. In this paper, we describe and experimentally evaluate our suite of service-similarity assessment methods."
]
}
|
1308.6413
|
1900194038
|
Service orientation fosters a high-level model for distributed applications development, which is based on the discovery, composition and reuse of existing software services. However, the heterogeneity among current service-oriented technologies renders the important task of service discovery tedious and ineffective. This dissertation proposes a new approach to address this challenge. Specifically, it contributes a framework supporting the unified discovery of heterogeneous services, with a focus on web, peer-to-peer, and grid services. The framework comprises a service query language and its enacting service discovery engine. Overall, the proposed solution is characterized by generality and flexibility, which are ensured by appropriate abstractions, extension points, and their supporting mechanisms. The viability, performance, and effectiveness of the proposed framework are demonstrated by experimental measurements.
|
Many service discovery frameworks have also embraced semantic web technologies to improve the precision and recall of the matchmaking process, as reported in @cite_30 @cite_7 @cite_0 . However, such approaches mainly focus on supporting the evaluation of functional search criteria, are constrained to specific types of service brokers, and generally lack flexibility. Other efforts have tackled the challenge of multi-dimensional query evaluation @cite_5 @cite_4 , or the heterogeneity in service discovery mechanisms @cite_26 @cite_31 . Even though the proposed system architectures are characterized by flexibility, they exclusively support the discovery of web services, and thus their solutions are not applicable to other types of services.
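To illustrate the kind of extension mechanism such a framework needs (and which the web-service-only systems above lack), here is a hypothetical sketch of a unified engine that dispatches one query to pluggable per-technology resolvers; every name and interface below is invented for illustration, not the dissertation's actual API.

# Hypothetical sketch of the extension-point idea: one unified query, many
# pluggable resolvers (e.g. web, p2p, grid registries).
from abc import ABC, abstractmethod

class Resolver(ABC):
    @abstractmethod
    def search(self, query: dict) -> list:
        """Translate the unified query into this registry's native search."""

class UnifiedDiscoveryEngine:
    def __init__(self):
        self._resolvers = {}               # service type -> Resolver

    def register(self, service_type: str, resolver: Resolver):
        self._resolvers[service_type] = resolver   # extension point

    def discover(self, query: dict) -> list:
        # restrict to the requested service types, or fan out to all plugins
        kinds = query.get("service_types") or list(self._resolvers)
        results = []
        for kind in kinds:
            if kind in self._resolvers:
                results.extend(self._resolvers[kind].search(query))
        return results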
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_4",
"@cite_7",
"@cite_0",
"@cite_5",
"@cite_31"
],
"mid": [
"1480041733",
"1810232998",
"1543328822",
"2092330630",
"2139729187",
"1986375222",
"2140748359"
],
"abstract": [
"The ubiquitous computing vision is to make knowledge and services easily available in our everyday environments. A wide range of devices, applications and services can be interconnected to provide intelligent and automatic systems that make our lives more enjoyable and our workplaces more efficient. Interaction typically is to be between peers rather than clients and servers. In this context, the JXTA peer-to-peer infrastructure, designed for interoperability, platform independence and ubiquity, is a suitable foundation to build future computer systems on. Peers need ways to effortlessly discover, consume and provide services, and to take advantage of new services as they become available in a dynamically changing network. However, JXTA does not currently handle this servicediscovery problem. In this paper, we examine several service-discovery architectures, to see whether they can be adapted to JXTA. We conclude that none of them adequately support the flexibility and expressiveness that ubiquitous computing requires. We therefore argue that Web Ontology Language (OWL) and OWL Services (OWL-S) ontologies should be used to express detailed semantic information about services, devices and other service-discovery concepts. This kind of approach allows peers to reason about service offerings and achieve intelligent service discovery by using an inference engine. We present an experimental implementation of this ontological approach to service-discovery, called Oden (Ontology-based Discovery-",
"This paper presents an innovative approach for the publication and discovery of Web services. The proposal is based on two previous works: DIRE (DIstributed REgistry), for the user-centered distributed replication of service-related information, and URBE (UDDI Registry By Example), for the semantic-aware match making between requests and available services. The integrated view also exploits USQL (Unified Service Query Language) to provide users with a higher level and homogeneous means to interact with the different registries. The proposal improves background technology in different ways: we integrate USQL as high-level language to state service requests, widen user notifications based on URBE semantic matching, and apply URBE match making to all the facets with which services can be described in DIRE. All these new concepts are demonstrated on a simple scenario.",
"We extend the service abstraction in the Open Grid Services Architecture ogsa for Quality of Service (QoS) properties. The realization of QoS often requires mechanisms such as advance or on-demand reservation of resources, varying in type and implementation, and independently controlled and monitored. propose the GARA FostKessl99 architecture. The GARA library provides a restricted representation scheme for encoding resource properties and the associated monitoring of Service Level Agreements (SLAs). Our focus is on the application layer, whereby a given service may indicate the QoS properties it can offer, or where a service may search for other services based on particular QoS properties.",
"The potential of a large-scale growth of private and semi-private registries is creating the need for an infrastructure that can support discovery and publication over a group of autonomous registries. Recent versions of UDDI have made changes to accommodate interactions between distributed registries. In this paper, we discuss an ontology-based Web service discovery infrastructure (METEOR-S Web Service Discovery Infrastructure), to provide access to registries that are divided based on business domains and grouped into federations. In addition, we discuss how Web service discovery is carried out within a federation. We provide a novel discovery algorithm, which addresses semantic heterogeneity with respect to multiple ontologies from the same domain. We also show through preliminary results of our empirical evaluation that even when services are annotated with different ontologies, our algorithm is able to find good matches and eliminate false matches by considering the context and the coverage information of the annotated concepts.",
"The increasing availability of web services necessitates efficient discovery and execution framework. The use of xml at various levels of web services standards poses challenges to the above process. OWL-S is a service ontology and language, whose semantics are based on OWL. The semantics provided by OWL support greater automation of service selection, invocation, translation of message content between heterogeneous services, and service composition. The development and consumption of an OWL-S based web service is time consuming and error prone. OWL-S IDE assists developers in the semantic web service development, deployment and consumption processes. In order to achieve this the OWL-S IDE uses and extends existing web service tools. In this paper we will look in detail at the support for discovery for semantic web services. We also present the matching schemes, the implementation and the results of performance evaluation.",
"Service discovery has been recognized as an important aspect in the development of service-centric systems, i.e., software systems which deploy Web services. To develop such systems, it is necessary to identify services that can be combined in order to fulfill the functionality and achieve quality criteria of the system being developed. In this paper, we present a framework supporting architecture-driven service discovery (ASD)—that is the discovery of services that can provide functionalities and satisfy properties and constraints of systems as specified during the design phase of the development lifecycle based on detailed system design models. Our framework assumes an iterative design process and allows for the (re-)formulation of design models of service-centric systems based on the discovered services. The framework is composed of a query extractor, which derives queries from behavioral and structural UML design models of service-centric systems, and a query execution engine that executes these queries against service registries based on graph matching techniques. The article describes a prototype tool that we have developed to demonstrate and evaluate our framework and the results of a set of preliminary experiments that we have conducted to evaluate it.",
"The challenge of publishing and discovering Web services has recently received lots of attention. Various solutions to this problem have been proposed which, apart from their offered advantages, suffer the following disadvantages: (i) most of them are syntactic-based, leading to poor precision and recall, (ii) they are not scalable to large numbers of services, and (iii) they are incompatible, thus yielding in cumbersome service publication and discovery. This article presents the principles, the functionality, and the design of PYRAMID-S which addresses these disadvantages by providing a scalable framework for unified publication and discovery of semantically enhanced services over heterogeneous registries. PYRAMID-S uses a hybrid peer-to-peer topology to organize Web service registries based on domains. In such a topology, each Registry retains its autonomy, meaning that it can use the publication and discovery mechanisms as well as the ontology of its choice. The viability of this approach is demonstrated through the implementation and experimental analysis of a prototype."
]
}
|
1308.6273
|
2119385818
|
In sparse recovery we are given a matrix @math (the dictionary) and a vector of the form @math where @math is sparse, and the goal is to recover @math . This is a central notion in signal processing, statistics and machine learning. But in applications such as sparse coding, edge detection, compression and super resolution, the dictionary @math is unknown and has to be learned from random examples of the form @math where @math is drawn from an appropriate distribution --- this is the dictionary learning problem. In most settings, @math is overcomplete: it has more columns than rows. This paper presents a polynomial-time algorithm for learning overcomplete dictionaries; the only previously known algorithm with provable guarantees is the recent work of Spielman, Wang and Wright who gave an algorithm for the full-rank case, which is rarely the case in applications. Our algorithm applies to incoherent dictionaries which have been a central object of study since they were introduced in seminal work of Donoho and Huo. In particular, a dictionary is @math -incoherent if each pair of columns has inner product at most @math . The algorithm makes natural stochastic assumptions about the unknown sparse vector @math , which can contain @math non-zero entries (for any @math ). This is close to the best @math allowable by the best sparse recovery algorithms even if one knows the dictionary @math exactly. Moreover, both the running time and sample complexity depend on @math , where @math is the target accuracy, and so our algorithms converge very quickly to the true dictionary. Our algorithm can also tolerate substantial amounts of noise provided it is incoherent with respect to the dictionary (e.g., Gaussian). In the noisy setting, our running time and sample complexity depend polynomially on @math , and this is necessary.
|
Dictionary learning is solved in practice by variants of alternating minimization. @cite_25 gave the first approach; subsequent popular approaches include the method of optimal directions (MOD) of @cite_4 , and K-SVD of @cite_5 . The general idea is to maintain a guess for @math and @math and at every step either update @math (using basis pursuit) or update @math by, say, solving a least squares problem. Provable guarantees for such algorithms have proved difficult to obtain because the initial guesses may be very far from the true dictionary, causing basis pursuit to behave erratically. Also, the algorithms could converge to a dictionary that is not incoherent, and thus unusable for sparse recovery. (In practice, these heuristics do often work.)
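To make the alternating template concrete, here is a minimal numpy/scikit-learn sketch in the MOD style: a sparse-coding step (OMP stands in for basis pursuit here) alternates with a least-squares dictionary update. The problem sizes, the synthetic data model and the choice of OMP are illustrative assumptions of this demo, not the setup analyzed in the cited works.

```python
# Minimal MOD-style alternating minimization sketch (assumed sizes/model).
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
n, m, k, n_samples = 20, 50, 3, 500            # signal dim, atoms, sparsity

A_true = rng.standard_normal((n, m))
A_true /= np.linalg.norm(A_true, axis=0)       # unit-norm atoms
X_true = np.zeros((m, n_samples))
for j in range(n_samples):                     # random k-sparse coefficients
    X_true[rng.choice(m, k, replace=False), j] = rng.standard_normal(k)
Y = A_true @ X_true

A = rng.standard_normal((n, m))                # random initial guess
A /= np.linalg.norm(A, axis=0)
for _ in range(30):
    # sparse-coding step: k-sparse fit of every sample w.r.t. the current A
    X = orthogonal_mp(A, Y, n_nonzero_coefs=k)
    # dictionary-update step (MOD): least-squares fit A = Y X^+
    A = Y @ np.linalg.pinv(X)
    A /= np.linalg.norm(A, axis=0) + 1e-12     # re-normalize the columns

X = orthogonal_mp(A, Y, n_nonzero_coefs=k)
print("relative residual:", np.linalg.norm(Y - A @ X) / np.linalg.norm(Y))
```

As the paragraph notes, nothing prevents such a loop from stalling far from the true dictionary; the sketch only illustrates the mechanics of the two alternating steps.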
|
{
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_25"
],
"mid": [
"2160547390",
"2115429828",
"2140499889"
],
"abstract": [
"In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method-the K-SVD algorithm-generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data",
"A frame design technique for use with vector selection algorithms, for example matching pursuits (MP), is presented. The design algorithm is iterative and requires a training set of signal vectors. The algorithm, called method of optimal directions (MOD), is an improvement of the algorithm presented by Engan, Aase and Husoy see (Proc. ICASSP '98, Seattle, USA, p.1817-20, 1998). The MOD is applied to speech and electrocardiogram (ECG) signals, and the designed frames are tested on signals outside the training sets. Experiments demonstrate that the approximation capabilities, in terms of mean squared error (MSE), of the optimized frames are significantly better than those obtained using frames designed by the algorithm of Engan et. al. Experiments show typical reduction in MSE by 20-50 .",
"In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete basis (or dictionary). We present an algorithm for learning an overcomplete basis by viewing it as probabilistic model of the observed data. We show that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. This can be viewed as a generalization of the technique of independent component analysis and provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures."
]
}
|
1308.6273
|
2119385818
|
In sparse recovery we are given a matrix @math (the dictionary) and a vector of the form @math where @math is sparse, and the goal is to recover @math . This is a central notion in signal processing, statistics and machine learning. But in applications such as sparse coding, edge detection, compression and super resolution, the dictionary @math is unknown and has to be learned from random examples of the form @math where @math is drawn from an appropriate distribution --- this is the dictionary learning problem. In most settings, @math is overcomplete: it has more columns than rows. This paper presents a polynomial-time algorithm for learning overcomplete dictionaries; the only previously known algorithm with provable guarantees is the recent work of Spielman, Wang and Wright who gave an algorithm for the full-rank case, which is rarely the case in applications. Our algorithm applies to incoherent dictionaries which have been a central object of study since they were introduced in seminal work of Donoho and Huo. In particular, a dictionary is @math -incoherent if each pair of columns has inner product at most @math . The algorithm makes natural stochastic assumptions about the unknown sparse vector @math , which can contain @math non-zero entries (for any @math ). This is close to the best @math allowable by the best sparse recovery algorithms even if one knows the dictionary @math exactly. Moreover, both the running time and sample complexity depend on @math , where @math is the target accuracy, and so our algorithms converge very quickly to the true dictionary. Our algorithm can also tolerate substantial amounts of noise provided it is incoherent with respect to the dictionary (e.g., Gaussian). In the noisy setting, our running time and sample complexity depend polynomially on @math , and this is necessary.
|
After this work, @cite_20 gave a quasi-polynomial time algorithm for dictionary learning using the sum-of-squares SDP hierarchy. Their algorithm can output an approximate dictionary, under weaker assumptions, even when the sparsity is nearly linear in the dimension.
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"1974088667"
],
"abstract": [
"The question of polynomial learn ability of probability distributions, particularly Gaussian mixture distributions, has recently received significant attention in theoretical computer science and machine learning. However, despite major progress, the general question of polynomial learn ability of Gaussian mixture distributions still remained open. The current work resolves the question of polynomial learn ability for Gaussian mixtures in high dimension with an arbitrary fixed number of components. Specifically, we show that parameters of a Gaussian mixture distribution with fixed number of components can be learned using a sample whose size is polynomial in dimension and all other parameters. The result on learning Gaussian mixtures relies on an analysis of distributions belonging to what we call “polynomial families” in low dimension. These families are characterized by their moments being polynomial in parameters and include almost all common probability distributions as well as their mixtures and products. Using tools from real algebraic geometry, we show that parameters of any distribution belonging to such a family can be learned in polynomial time and using a polynomial number of sample points. The result on learning polynomial families is quite general and is of independent interest. To estimate parameters of a Gaussian mixture distribution in high dimensions, we provide a deterministic algorithm for dimensionality reduction. This allows us to reduce learning a high-dimensional mixture to a polynomial number of parameter estimations in low dimension. Combining this reduction with the results on polynomial families yields our result on learning arbitrary Gaussian mixtures in high dimensions."
]
}
|
1308.6273
|
2119385818
|
In sparse recovery we are given a matrix @math (the dictionary) and a vector of the form @math where @math is sparse, and the goal is to recover @math . This is a central notion in signal processing, statistics and machine learning. But in applications such as sparse coding, edge detection, compression and super resolution, the dictionary @math is unknown and has to be learned from random examples of the form @math where @math is drawn from an appropriate distribution --- this is the dictionary learning problem. In most settings, @math is overcomplete: it has more columns than rows. This paper presents a polynomial-time algorithm for learning overcomplete dictionaries; the only previously known algorithm with provable guarantees is the recent work of Spielman, Wang and Wright who gave an algorithm for the full-rank case, which is rarely the case in applications. Our algorithm applies to incoherent dictionaries which have been a central object of study since they were introduced in seminal work of Donoho and Huo. In particular, a dictionary is @math -incoherent if each pair of columns has inner product at most @math . The algorithm makes natural stochastic assumptions about the unknown sparse vector @math , which can contain @math non-zero entries (for any @math ). This is close to the best @math allowable by the best sparse recovery algorithms even if one knows the dictionary @math exactly. Moreover, both the running time and sample complexity depend on @math , where @math is the target accuracy, and so our algorithms converge very quickly to the true dictionary. Our algorithm can also tolerate substantial amounts of noise provided it is incoherent with respect to the dictionary (e.g., Gaussian). In the noisy setting, our running time and sample complexity depend polynomially on @math , and this is necessary.
|
When the entries of @math are independent, algorithms for independent component analysis (ICA) can recover @math . @cite_12 gave a provable algorithm that recovers @math up to arbitrary accuracy, provided the entries of @math are non-Gaussian (when @math is Gaussian, @math is only determined up to rotations anyway). Subsequent works considered the overcomplete case and gave provable algorithms even when @math is @math with @math .
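As a hedged illustration of this route (the sizes and the Laplacian coordinate distribution are assumptions made for the demo, not taken from the cited analyses), a square mixing matrix can be recovered, up to column permutation and scaling, with an off-the-shelf ICA solver:

```python
# Sketch: recovering a square mixing matrix with FastICA (assumed setup).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n, n_samples = 10, 20000
A_true = rng.standard_normal((n, n))           # square, full-rank dictionary
X = rng.laplace(size=(n, n_samples))           # independent, non-Gaussian x
Y = A_true @ X                                 # observed samples y = A x

ica = FastICA(n_components=n, random_state=0)
ica.fit(Y.T)                                   # one sample per row
A_est = ica.mixing_                            # estimate of A, recovered only
                                               # up to permutation and scaling
```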
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2096295738"
],
"abstract": [
"We present a polynomial time algorithm to learn (in Valiant's PAC model) an arbitrarily oriented cube in n-space, given uniformly distributed sample points from it. In fact, we solve the more general problem of learning, in polynomial time, a linear (affine) transformation of a product distribution."
]
}
|
1308.6003
|
1666036116
|
The Gripon-Berrou neural network (GBNN) is a recently invented recurrent neural network embracing an LDPC-like sparse encoding setup which makes it extremely resilient to noise and errors. A natural use of GBNN is as an associative memory. There are two activation rules for the neuron dynamics, namely sum-of-sum and sum-of-max. The latter outperforms the former in terms of retrieval rate by a huge margin. In prior discussions and experiments, it is believed that although sum-of-sum may lead the network to oscillate, sum-of-max always converges to an ensemble of neuron cliques corresponding to previously stored patterns. However, this is not entirely correct. In fact, sum-of-max often converges to bogus fixed points where the ensemble only comprises a small subset of the converged state. By taking advantage of this overlooked fact, we can greatly improve the retrieval rate. We discuss this particular issue and propose a number of heuristics to push sum-of-max beyond these bogus fixed points. To tackle the problem directly and completely, a novel post-processing algorithm is also developed and customized to the structure of GBNN. Experimental results show that the new algorithm achieves a huge performance boost in terms of both retrieval rate and run-time, compared to the standard sum-of-max and all the other heuristics.
|
There are three important concepts to describe the quality of an associative memory: diversity (the number of paired patterns that the network can store), capacity (the maximum amount of stored information in bits) and efficiency (the ratio between the capacity and the amount of information that the network can store when the diversity reaches its maximum). In @cite_9 , Gripon and Berrou have shown that, given the same amount of storage, GBNN outperforms the conventional Hopfield network in all of them, while decreasing the retrieval error rate. The initial retrieval rule used in @cite_9 was sum-of-sum. Later, in @cite_28 , the same authors also interpret GBNN using the formalism of error correcting codes, and propose a second retrieval rule, sum-of-max, which further decreases the error rate. We will discuss the mechanics of both rules below. Jiang et al. @cite_23 modify GBNN to learn long sequences by incorporating directed links. Aliabadi et al. @cite_15 extend GBNN to learn sparse messages.
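A minimal numpy sketch of the two retrieval rules may help fix ideas. The layout below (c clusters of l neurons, a binary weight tensor W, binary activity v, and a per-cluster winner-take-all step) is a generic formalization assumed for illustration; details such as the memory term often added to a neuron's own score are omitted.

```python
# Sketch of GBNN sum-of-sum vs. sum-of-max scoring (assumed formalization).
import numpy as np

def sum_of_sum(W, v):
    # score of neuron (i, a): total signal from all active neighbors
    return np.einsum("iajb,jb->ia", W, v)

def sum_of_max(W, v):
    # score of neuron (i, a): at most one contribution per cluster j
    return (W * v[None, None, :, :]).max(axis=3).sum(axis=2)

def winner_take_all(scores):
    # keep, in each cluster, the neurons achieving the cluster maximum
    return (scores == scores.max(axis=1, keepdims=True)).astype(float)

c, l = 4, 8                                    # illustrative sizes
rng = np.random.default_rng(2)
W = (rng.random((c, l, c, l)) < 0.1).astype(float)   # toy binary weights
v = winner_take_all(rng.random((c, l)))        # some initial guess
for _ in range(5):                             # iterate one of the rules
    v = winner_take_all(sum_of_max(W, v))
```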
|
{
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_23",
"@cite_15"
],
"mid": [
"2086523057",
"2121160181",
"2611060325",
"2086205707"
],
"abstract": [
"A new family of sparse neural networks achieving nearly optimal performance has been recently introduced. In these networks, messages are stored as cliques in clustered graphs. In this paper, we interpret these networks using the formalism of error correcting codes. To achieve this, we introduce two original codes, the thrifty code and the clique code, that are both sub-families of binary constant weight codes. We also provide the networks with an enhanced retrieving rule that enables a property of answer correctness and that improves performance.",
"Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of messages that are much smaller than the number of available neurons. The second one is provided by a particular coding rule, acting as a local constraint in the neural activity. The third one is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple since it is based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in presence of strong erasures. The performance of the network is assessed as a classifier and as an associative memory.",
"An original architecture of oriented sparse neural networks that enables the introduction of sequentiality in associative memories is proposed in this paper. This architecture can be regarded as a generalization of a recently proposed non oriented binary network based on cliques. Using a limited neuron resource, the network is able to learn very long sequences and to retrieve them only from the knowledge of some consecutive symbols.",
"An extension to a recently introduced binary neural network is proposed to allow the storage of sparse messages, in large numbers and with high memory efficiency. This new network is justified both in biological and informational terms. The storage and retrieval rules are detailed and illustrated by various simulation results."
]
}
|
1308.6003
|
1666036116
|
The Gripon-Berrou neural network (GBNN) is a recently invented recurrent neural network embracing an LDPC-like sparse encoding setup which makes it extremely resilient to noise and errors. A natural use of GBNN is as an associative memory. There are two activation rules for the neuron dynamics, namely sum-of-sum and sum-of-max. The latter outperforms the former in terms of retrieval rate by a huge margin. In prior discussions and experiments, it is believed that although sum-of-sum may lead the network to oscillate, sum-of-max always converges to an ensemble of neuron cliques corresponding to previously stored patterns. However, this is not entirely correct. In fact, sum-of-max often converges to bogus fixed points where the ensemble only comprises a small subset of the converged state. By taking advantage of this overlooked fact, we can greatly improve the retrieval rate. We discuss this particular issue and propose a number of heuristics to push sum-of-max beyond these bogus fixed points. To tackle the problem directly and completely, a novel post-processing algorithm is also developed and customized to the structure of GBNN. Experimental results show that the new algorithm achieves a huge performance boost in terms of both retrieval rate and run-time, compared to the standard sum-of-max and all the other heuristics.
|
Another line of research focuses on efficient implementations of GBNNs. Jarollahi et al. demonstrate a proof-of-concept implementation of sum-of-sum on a field-programmable gate array (FPGA) in @cite_14 , though the network size is constrained to 400 neurons due to hardware limitations. The same authors implement sum-of-max in @cite_4 , which runs 1.9 @math faster than @cite_14 , since bitwise operations are used in place of the resource-demanding modules required by sum-of-sum. In @cite_0 , the same group of authors also develop a content-addressable memory using GBNNs which saves roughly 90% of the dynamic energy of a conventional design. The authors of @cite_10 implement an analog version of the network which consumes @math less energy and is @math more efficient in terms of combined circuit surface and speed, compared with an equivalent digital circuit. However, the network size is further constrained to @math neurons in total. After analyzing the convergence and computation properties of both sum-of-sum and sum-of-max, Yao et al. @cite_29 propose a hybrid scheme and successfully implement GBNNs on a GPU. An acceleration of 900 @math is achieved without any loss of accuracy.
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_29",
"@cite_0",
"@cite_10"
],
"mid": [
"1977602574",
"",
"1553143087",
"2055611966",
"2079849934"
],
"abstract": [
"Associative memories are alternatives to indexed memories that when implemented in hardware can benefit many applications such as data mining. The classical neural network based methodology is impractical to implement since in order to increase the size of the memory, the number of information bits stored per memory bit (efficiency) approaches zero. In addition, the length of a message to be stored and retrieved needs to be the same size as the number of nodes in the network causing the total number of messages the network is capable of storing (diversity) to be limited. Recently, a novel algorithm based on sparse clustered neural networks has been proposed that achieves nearly optimal efficiency and large diversity. In this paper, a proof-of-concept hardware implementation of these networks is presented. The limitations and possible future research areas are discussed.",
"",
"Associative memories store content in such a way that the content can be later retrieved by presenting the memory with a small portion of the content, rather than presenting the memory with an address as in more traditional memories. Associative memories are used as building blocks for algorithms within database engines, anomaly detection systems, compression algorithms, and face recognition systems. A classical example of an associative memory is the Hopfield neural network. Recently, Gripon and Berrou have introduced an alternative construction which builds on ideas from the theory of error correcting codes and which greatly outperforms the Hopfield network in capacity, diversity, and efficiency. In this paper we implement a variation of the Gripon-Berrou associative memory on a general purpose graphical processing unit (GPU). The work of Gripon and Berrou proposes two retrieval rules, sum-of-sum and sum-of-max. The sum-of-sum rule uses only matrix-vector multiplication and is easily implemented on the GPU. The sum-of-max rule is much less straightforward to implement because it involves non-linear operations. However, the sum-of-max rule gives significantly better retrieval error rates. We propose a hybrid rule tailored for implementation on a GPU which achieves a 880-fold speedup without sacrificing any accuracy.",
"A low-power Content-Addressable Memory (CAM) is introduced employing a new mechanism for associativity between the input tags and the corresponding address of the output data. The proposed architecture is based on a recently developed clustered-sparse network using binary-weighted connections that on-average will eliminate most of the parallel comparisons performed during a search. Therefore, the dynamic energy consumption of the proposed design is significantly lower compared to that of a conventional low-power CAM design. Given an input tag, the proposed architecture computes a few possibilities for the location of the matched tag and performs the comparisons on them to locate a single valid match. A 0.13μm CMOS technology was used for simulation purposes. The energy consumption and the search delay of the proposed design are 9.5 , and 30.4 of that of the conventional NAND architecture respectively with a 3.4 higher number of transistors.",
"Encoded neural networks mix the principles of associative memories and error-correcting decoders. Their storage capacity has been shown to be much larger than Hopfield Neural Networks'. This paper introduces an analog implementation of this new type of network. The proposed circuit has been designed for the 1V supply ST CMOS 65nm process. It consumes 1165 times less energy than a digital equivalent circuit while being 2.7 times more efficient in terms of combined speed and surface."
]
}
|
1308.6075
|
2157516452
|
Scaling phenomena have been intensively studied during the past decade in the context of complex networks. As part of these works, recently novel methods have appeared to measure the dimension of abstract and spatially embedded networks. In this paper we propose a new dimension measurement method for networks, which does not require global knowledge on the embedding of the nodes, instead it exploits link-wise information (link lengths, link delays or other physical quantities). Our method can be regarded as a generalization of the spectral dimension, that grasps the network’s large-scale structure through local observations made by a random walker while traversing the links. We apply the presented method to synthetic and real-world networks, including road maps, the Internet infrastructure and the Gowalla geosocial network. We analyze the theoretically and empirically designated case when the length distribution of the links has the form P(ρ)∼1/ρ. We show that while previous dimension concepts are not applicable in this case, the new dimension measure still exhibits scaling with two distinct scaling regimes. Our observations suggest that the link length distribution is not sufficient in itself to entirely control the dimensionality of complex networks, and we show that the proposed measure provides information that complements other known measures.
|
In the last several years, a number of methods have appeared in the literature that have been successful in identifying scale-invariant properties in small-world complex networks. Probably the earliest such concept is the spectral dimension @cite_11 , which originates from random walks on the network (see sec_method ), and was applied both to theoretical models of networks @cite_1 and to empirical datasets spectral2 . Additionally, the application of the box-counting dimension @cite_43 to networks has also been proposed @cite_0 , and further generalized to reveal fractal properties of complex networks @cite_16 @cite_41 . The scaling exponents arising from these methods are usually interpreted as a special type of network dimension.
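As a hedged illustration of the random-walk origin of the spectral dimension, the sketch below estimates d_s from the decay of the average return probability p0(t) ~ t^(-d_s/2) on a small 2D grid, where the answer should come out close to 2; the graph, sizes and fitting range are assumptions of this demo, not of any cited method.

```python
# Sketch: spectral dimension from random-walk return probability (toy grid).
import numpy as np
import networkx as nx

G = nx.grid_2d_graph(50, 50)                   # sanity-check graph, d_s = 2
A = nx.to_numpy_array(G)
deg = A.sum(axis=1)
S = A / np.sqrt(np.outer(deg, deg))            # symmetric, similar to D^-1 A
lam = np.linalg.eigvalsh(S)
N = len(lam)

ts = np.arange(10, 120, 2)                     # even times: grids are bipartite
# average return probability tr(P^t)/N, minus the stationary contribution
# of the +1 and -1 eigenvalues of the (bipartite) graph
p0 = np.array([((lam ** t).sum() - 2.0) / N for t in ts])

slope = np.polyfit(np.log(ts), np.log(p0), 1)[0]
print("estimated d_s ~", -2 * slope)           # expect a value close to 2
```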
|
{
"cite_N": [
"@cite_41",
"@cite_1",
"@cite_0",
"@cite_43",
"@cite_16",
"@cite_11"
],
"mid": [
"2140948827",
"1504623951",
"1982573076",
"",
"2160058983",
""
],
"abstract": [
"The fractal and the small-world properties of complex networks are systematically studied both in the box-covering (BC) and the cluster-growing (CG) measurements. We elucidate that complex networks possessing the fractal (small-world) nature in the BC measurement are always fractal (small world) even in the CG measurement and vice versa, while the fractal dimensions d B by the BC measurement and d C by the CG measurement are generally different. This implies that two structural properties of networks, fractality and small worldness, cannot coexist in the same length scale. These properties can, however, crossover from one to the other by varying the length scale. We show that the crossover behavior in a network near the percolation transition appears both in the BC and CG measurements and is scaled by a unique characteristic length ξ.",
"The spectral dimension has been widely used to understand transport properties on regular and fractal lattices. Nevertheless, it has been little studied for complex networks such as scale-free and small world networks. Here we study the spectral dimension and the return-to-origin probability of random walks on hierarchical scale-free networks, which can be either fractals or non-fractals depending on the weight of shortcuts. Applying the renormalization group (RG) approach to the Gaussian model, we obtain the spectral dimension exactly. While the spectral dimension varies between @math and @math for the fractal case, it remains at @math , independent of the variation of network structure for the non-fractal case. The crossover behavior between the two cases is studied through the RG flow analysis. The analytic results are confirmed by simulation results and their implications for the architecture of complex systems are discussed.",
"Covering a network with the minimum possible number of boxes can reveal interesting features for the network structure, especially in terms of self-similar or fractal characteristics. Considerable attention has been recently devoted to this problem, with the finding that many real networks are self-similar fractals. Here we present, compare and study in detail a number of algorithms that we have used in previous papers towards this goal. We show that this problem can be mapped to the well-known graph colouring problem and then we simply can apply well-established algorithms. This seems to be the most efficient method, but we also present two other algorithms based on burning which provide a number of other benefits. We argue that the algorithms presented provide a solution close to optimal and that another algorithm that can significantly improve this result in an efficient way does not exist. We offer to anyone that finds such a method to cover his her expenses for a one-week trip to our lab in New York (details in http: jamlab.org).",
"",
"Complex networks from such different fields as biology, technology or sociology share similar organization principles. The possibility of a unique growth mechanism promises to uncover universal origins of collective behaviour. In particular, the emergence of self-similarity in complex networks raises the fundamental question of the growth process according to which these structures evolve. Here we investigate the concept of renormalization as a mechanism for the growth of fractal and non-fractal modular networks. We show that the key principle that gives rise to the fractal architecture of networks is a strong effective ‘repulsion’ (or, disassortativity) between the most connected nodes (that is, the hubs) on all length scales, rendering them very dispersed. More importantly, we show that a robust network comprising functional modules, such as a cellular network, necessitates a fractal topology, suggestive of an evolutionary drive for their existence.",
""
]
}
|
1308.6075
|
2157516452
|
Scaling phenomena have been intensively studied during the past decade in the context of complex networks. As part of these works, recently novel methods have appeared to measure the dimension of abstract and spatially embedded networks. In this paper we propose a new dimension measurement method for networks, which does not require global knowledge on the embedding of the nodes, instead it exploits link-wise information (link lengths, link delays or other physical quantities). Our method can be regarded as a generalization of the spectral dimension, that grasps the network’s large-scale structure through local observations made by a random walker while traversing the links. We apply the presented method to synthetic and real-world networks, including road maps, the Internet infrastructure and the Gowalla geosocial network. We analyze the theoretically and empirically designated case when the length distribution of the links has the form P(ρ)∼1/ρ. We show that while previous dimension concepts are not applicable in this case, the new dimension measure still exhibits scaling with two distinct scaling regimes. Our observations suggest that the link length distribution is not sufficient in itself to entirely control the dimensionality of complex networks, and we show that the proposed measure provides information that complements other known measures.
|
Beyond these methods, which handle networks as abstract graphs, there has been increasing interest in including the spatial properties of networks as well (see e.g. @cite_20 @cite_31 @cite_40 @cite_45 @cite_33 @cite_15 @cite_35 ). In particular, in the context of dimension measurements, the presence of spatial information enables the application of well-known approaches @cite_6 @cite_25 to determine the fractal dimension of the point set of network nodes @cite_31 . A shortcoming of these approaches may be that, while they take into account the geometric layout of the network, they entirely neglect its connectivity information. Recently, in @cite_26 , Daqing et al. have proposed more suitable methods to overcome this limitation. The authors combine metric and topological knowledge to yield more comprehensive measures of dimensionality.
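For concreteness, here is a sketch of the kind of point-set measurement referred to above: a Grassberger-Procaccia-style correlation integral @cite_6 computed from node coordinates alone, which by construction ignores the links, precisely the shortcoming noted in the paragraph. The uniform point cloud and the fitting range are illustrative assumptions.

```python
# Sketch: correlation dimension of a node point set (connectivity ignored).
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
pos = rng.random((2000, 2))                    # node coordinates in the plane
d = pdist(pos)                                 # all pairwise distances

rs = np.logspace(-2, -0.5, 12)                 # probe radii
C = np.array([(d < r).mean() for r in rs])     # correlation integral C(r)

D = np.polyfit(np.log(rs), np.log(C), 1)[0]    # C(r) ~ r^D
print("correlation dimension ~", D)            # close to 2 for this cloud
```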
|
{
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_33",
"@cite_15",
"@cite_6",
"@cite_40",
"@cite_45",
"@cite_31",
"@cite_25",
"@cite_20"
],
"mid": [
"1515978208",
"1975428390",
"",
"",
"2056651346",
"2115785474",
"",
"2139708889",
"",
""
],
"abstract": [
"The Internet revolution has made long-distance communication dramatically faster, easier, and cheaper than ever before. This, it has been argued, has decreased the importance of geographic proximity in social interactions, transforming our world into a global village with a borderless society. We argue for the opposite: while technology has undoubtedly increased the overall level of communication, this increase has been most pronounced for local social ties. We show that the volume of electronic communications is inversely proportional to geographic distance, following a Power Law. We directly study the importance of physical proximity in social interactions by analyzing the spatial dissemination of new baby names. Counter-intuitively, and in line with the above argument, the importance of geographic proximity has dramatically increased with the internet revolution.",
"Many properties of physical systems are known to depend on their dimension. But for complex networks—which serve to model a wide range of physical, technological and social systems—the concept of dimension has received relatively little attention so far. This study shows how the dimension of a broad class of networks can be ascertained, and demonstrates that it determines the basic properties of the networks.",
"",
"",
"A new measure of strange attractors is introduced which offers a practical algorithm to determine their character from the time series of a single observable. The relation of this new measure to fractal dimension and information-theoretic entropy is discussed.",
"In this paper, we analyze statistical properties of a communication network constructed from the records of a mobile phone company. The network consists of 2.5 million customers that have placed 810 million communications (phone calls and text messages) over a period of 6 months and for whom we have geographical home localization information. It is shown that the degree distribution in this network has a power-law degree distribution k−5 and that the probability that two customers are connected by a link follows a gravity model, i.e. decreases as d−2, where d is the distance between the customers. We also consider the geographical extension of communication triangles and we show that communication triangles are not only composed of geographically adjacent nodes but that they may extend over large distances. This last property is not captured by the existing models of geographical networks and in a last section we propose a new model that reproduces the observed property. Our model, which is based on the migration and on the local adaptation of agents, is then studied analytically and the resulting predictions are confirmed by computer simulations.",
"",
"Network generators that capture the Internet's large-scale topology are crucial for the development of efficient routing protocols and modeling Internet traffic. Our ability to design realistic generators is limited by the incomplete understanding of the fundamental driving forces that affect the Internet's evolution. By combining several independent databases capturing the time evolution, topology, and physical layout of the Internet, we identify the universal mechanisms that shape the Internet's router and autonomous system level topology. We find that the physical layout of nodes form a fractal set, determined by population density patterns around the globe. The placement of links is driven by competition between preferential attachment and linear distance dependence, a marked departure from the currently used exponential laws. The universal parameters that we extract significantly restrict the class of potentially correct Internet models and indicate that the networks created by all available topology generators are fundamentally different from the current Internet.",
"",
""
]
}
|
1308.6075
|
2157516452
|
Scaling phenomena have been intensively studied during the past decade in the context of complex networks. As part of these works, recently novel methods have appeared to measure the dimension of abstract and spatially embedded networks. In this paper we propose a new dimension measurement method for networks, which does not require global knowledge on the embedding of the nodes, instead it exploits link-wise information (link lengths, link delays or other physical quantities). Our method can be regarded as a generalization of the spectral dimension, that grasps the network’s large-scale structure through local observations made by a random walker while traversing the links. We apply the presented method to synthetic and real-world networks, including road maps, the Internet infrastructure and the Gowalla geosocial network. We analyze the theoretically and empirically designated case when the length distribution of the links has the form P(ρ)∼1/ρ. We show that while previous dimension concepts are not applicable in this case, the new dimension measure still exhibits scaling with two distinct scaling regimes. Our observations suggest that the link length distribution is not sufficient in itself to entirely control the dimensionality of complex networks, and we show that the proposed measure provides information that complements other known measures.
|
We note that the spectral dimension concept is closely related to the spectral density function of the transition matrix. In the continuous limit, @math arises as the Laplace transform of the spectral density @cite_11 @cite_42 . Indeed, the spectra of various matrices associated with a network have been intensively studied, also in the context of diffusion @cite_42 @cite_17 @cite_34 @cite_36 , and the spectral dimension was found to be a valuable tool for describing topological properties of real-world networks @cite_30 . A more exotic case is that of quantum gravity, where the spectral dimension of the networks defined by the possible triangulations of space-time can be interpreted as the perceived dimension of the Universe @cite_8 .
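For the reader's convenience, the standard (Tauberian) form of this connection can be restated as follows; this is a textbook relation, not a formula quoted from the cited works:

```latex
p_0(t) = \int_0^\infty e^{-\lambda t}\,\rho(\lambda)\,\mathrm{d}\lambda,
\qquad
\rho(\lambda) \sim \lambda^{d_s/2 - 1} \;\; (\lambda \to 0)
\;\Longrightarrow\;
p_0(t) \sim t^{-d_s/2} \;\; (t \to \infty).
```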
|
{
"cite_N": [
"@cite_30",
"@cite_11",
"@cite_8",
"@cite_36",
"@cite_42",
"@cite_34",
"@cite_17"
],
"mid": [
"1992151851",
"",
"2019238454",
"1978345999",
"2057905655",
"2048347824",
"2075984423"
],
"abstract": [
"Topological properties of \"scale-free\" networks are investigated by determining their spectral dimensions d(S), which reflect a diffusion process in the corresponding graphs. Data bases for citation networks and metabolic networks together with simulation results from the growing network model [A.-L. Barabasi and R. Albert, Science 286, 509 (1999)] are probed. For completeness and comparisons lattice, random and small-world models are also investigated. We find that d(S) is around 3 for citation and metabolic networks, which is significantly different from the growing network model, for which d(S) is approximately 7.5. This signals a substantial difference in network topology despite the observed similarities in vertex-order distributions. In addition, the diffusion analysis indicates that the citation networks are treelike in structure, whereas the metabolic networks contain many loops. (Less)",
"",
"We measure the spectral dimension of universes emerging from nonperturbative quantum gravity, defined through state sums of causal triangulated geometries. While four dimensional on large scales, the quantum universe appears two dimensional at short distances. We conclude that quantum gravity may be self-renormalizing'' at the Planck scale, by virtue of a mechanism of dynamical dimensional reduction.",
"The spectral densities of the weighted Laplacian, random walk, and weighted adjacency matrices associated with a random complex network are studied using the replica method. The link weights are parametrized by a weight exponent β. Explicit results are obtained for scale-free networks in the limit of large mean degree after the thermodynamic limit, for arbitrary degree exponent and β.",
"",
"The complete knowledge of Laplacian eigenvalues and eigenvectors of complex networks plays an outstanding role in understanding various dynamical processes running on them; however, determining analytically Laplacian eigenvalues and eigenvectors is a theoretical challenge. In this paper, we study the Laplacian spectra and their corresponding eigenvectors of a class of deterministically growing treelike networks. The two interesting quantities are determined through the recurrence relations derived from the structure of the networks. Beginning from the rigorous relations one can obtain the complete eigenvalues and eigenvectors for the networks of arbitrary size. The analytical method opens the way to analytically compute the eigenvalues and eigenvectors of some other deterministic networks, making it possible to accurately calculate their spectral characteristics.",
"results on the spectra of adjacency matrices corresponding to models of real-world graphs. We find that when the number of links grows as the number of nodes, the spectral density of uncorrelated random matrices does not converge to the semicircle law. Furthermore, the spectra of real-world graphs have specific features, depending on the details of the corresponding models. In particular, scale-free graphs develop a trianglelike spectral density with a power-law tail, while small-world graphs have a complex spectral density consisting of several sharp peaks. These and further results indicate that the spectra of correlated graphs represent a practical tool for graph classification and can provide useful insight into the relevant structural properties of real networks."
]
}
|
1308.6075
|
2157516452
|
Scaling phenomena have been intensively studied during the past decade in the context of complex networks. As part of these works, recently novel methods have appeared to measure the dimension of abstract and spatially embedded networks. In this paper we propose a new dimension measurement method for networks, which does not require global knowledge on the embedding of the nodes, instead it exploits link-wise information (link lengths, link delays or other physical quantities). Our method can be regarded as a generalization of the spectral dimension, that grasps the network’s large-scale structure through local observations made by a random walker while traversing the links. We apply the presented method to synthetic and real-world networks, including road maps, the Internet infrastructure and the Gowalla geosocial network. We analyze the theoretically and empirically designated case when the length distribution of the links has the form P(ρ)∼1/ρ. We show that while previous dimension concepts are not applicable in this case, the new dimension measure still exhibits scaling with two distinct scaling regimes. Our observations suggest that the link length distribution is not sufficient in itself to entirely control the dimensionality of complex networks, and we show that the proposed measure provides information that complements other known measures.
|
A possible generalization of @math to spatially embedded networks is given in @cite_26 . A graph @math is said to be embedded into a metric space @math if each node in @math corresponds to a point in @math , and @math is a metric on @math , i.e. for each pair of nodes @math and @math we have a distance @math . In a general setting @math is the two- or three-dimensional Euclidean distance, but for many large-scale real-world networks it is the spherical distance that plays the role of @math .
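For planet-scale networks such as road maps or the Internet infrastructure, the role of the metric is typically played by the great-circle distance; a minimal sketch (coordinates in degrees and Earth radius in kilometres are conventions assumed here) is:

```python
# Sketch: spherical (great-circle) distance via the haversine formula.
import numpy as np

def spherical_distance(lat1, lon1, lat2, lon2, radius_km=6371.0):
    phi1, phi2 = np.radians(lat1), np.radians(lat2)
    dphi = np.radians(lat2 - lat1)
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlmb / 2) ** 2
    return 2 * radius_km * np.arcsin(np.sqrt(a))

# e.g. the distance between two node locations (Seattle -> New York):
print(spherical_distance(47.6, -122.3, 40.7, -74.0))
```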
|
{
"cite_N": [
"@cite_26"
],
"mid": [
"1975428390"
],
"abstract": [
"Many properties of physical systems are known to depend on their dimension. But for complex networks—which serve to model a wide range of physical, technological and social systems—the concept of dimension has received relatively little attention so far. This study shows how the dimension of a broad class of networks can be ascertained, and demonstrates that it determines the basic properties of the networks."
]
}
|
1308.6075
|
2157516452
|
Scaling phenomena have been intensively studied during the past decade in the context of complex networks. As part of these works, recently novel methods have appeared to measure the dimension of abstract and spatially embedded networks. In this paper we propose a new dimension measurement method for networks, which does not require global knowledge on the embedding of the nodes, instead it exploits link-wise information (link lengths, link delays or other physical quantities). Our method can be regarded as a generalization of the spectral dimension, that grasps the network’s large-scale structure through local observations made by a random walker while traversing the links. We apply the presented method to synthetic and real-world networks, including road maps, the Internet infrastructure and the Gowalla geosocial network. We analyze the theoretically and empirically designated case when the length distribution of the links has the form P(ρ)∼1/ρ. We show that while previous dimension concepts are not applicable in this case, the new dimension measure still exhibits scaling with two distinct scaling regimes. Our observations suggest that the link length distribution is not sufficient in itself to entirely control the dimensionality of complex networks, and we show that the proposed measure provides information that complements other known measures.
|
In the case of an embedded graph, the random walk process can be interpreted as a diffusion in the embedding space @math . Consequently, we can measure the exponent of the diffusion on the embedded graph via the scaling relation @math , where @math is the root mean square (r.m.s.) displacement of the random walker at time @math . The diffusion exponent is @math for regular lattices in any dimension, while for real-world systems it often exhibits anomalous behavior with @math @cite_44 .
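The measurement itself is straightforward to sketch. The demo below fits the r.m.s. displacement of independent random walkers on a 2D lattice, where the diffusion exponent should come out close to 1/2; the lattice, walker count and fitting range are assumptions of this sanity check, not of the paper's method.

```python
# Sketch: diffusion exponent from r.m.s. displacement scaling (toy lattice).
import numpy as np

rng = np.random.default_rng(4)
n_walkers, T = 5000, 400
steps = rng.integers(0, 4, size=(T, n_walkers))              # step directions
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])[steps]  # (T, n, 2)
pos = np.cumsum(moves, axis=0)                 # positions after each step

ts = np.arange(10, T, 10)
rms = np.sqrt((pos[ts - 1] ** 2).sum(axis=2).mean(axis=1))   # r.m.s. displacement

beta = np.polyfit(np.log(ts), np.log(rms), 1)[0]
print("diffusion exponent ~", beta)            # expect a value close to 0.5
```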
|
{
"cite_N": [
"@cite_44"
],
"mid": [
"2100376401"
],
"abstract": [
"Diffusion in disordered systems does not follow the classical laws which describe transport in ordered crystalline media, and this leads to many anomalous physical properties. Since the application of percolation theory, the main advances in the understanding of these processes have come from fractal theory. Scaling theories and numerical simulations are important tools to describe diffusion processes (random walks: the 'ant in the labyrinth') on percolation systems and fractals. Different types of disordered systems exhibiting anomalous diffusion are presented (the incipient infinite percolation cluster, diffusion-limited aggregation clusters, lattice animals, and random combs), and scaling theories as well as numerical simulations of greater sophistication are described. Also, diffusion in the presence of singular distributions of transition rates is discussed and related to anomalous diffusion on disordered structures."
]
}
|
1308.6075
|
2157516452
|
Scaling phenomena have been intensively studied during the past decade in the context of complex networks. As part of these works, recently novel methods have appeared to measure the dimension of abstract and spatially embedded networks. In this paper we propose a new dimension measurement method for networks, which does not require global knowledge on the embedding of the nodes, instead it exploits link-wise information (link lengths, link delays or other physical quantities). Our method can be regarded as a generalization of the spectral dimension, that grasps the network’s large-scale structure through local observations made by a random walker while traversing the links. We apply the presented method to synthetic and real-world networks, including road maps, the Internet infrastructure and the Gowalla geosocial network. We analyze the theoretically and empirically designated case when the length distribution of the links has the form P(ρ)∼1/ρ. We show that while previous dimension concepts are not applicable in this case, the new dimension measure still exhibits scaling with two distinct scaling regimes. Our observations suggest that the link length distribution is not sufficient in itself to entirely control the dimensionality of complex networks, and we show that the proposed measure provides information that complements other known measures.
|
The spectral dimension concept employed by Daqing et al. @cite_26 can be extracted from the scaling relation @math , where @math . Here, the exponent gives an alternative measure of the dimension of the network: @math . In the case where the three scaling laws (Eqs. spectraldim , diff and daqing ) are all valid in the same range, the three exponents are related: @math . For regular @math -dimensional lattices this relationship is satisfied, as @math and @math . Nevertheless, for more complex networks the scaling regimes may not coincide, or some of the scaling relationships might not hold at all.
|
{
"cite_N": [
"@cite_26"
],
"mid": [
"1975428390"
],
"abstract": [
"Many properties of physical systems are known to depend on their dimension. But for complex networks—which serve to model a wide range of physical, technological and social systems—the concept of dimension has received relatively little attention so far. This study shows how the dimension of a broad class of networks can be ascertained, and demonstrates that it determines the basic properties of the networks."
]
}
|
1308.5873
|
1996067920
|
Abstract Molecular dynamics simulations have a prominent role in biophysics and drug discovery due to the atomistic information they provide on the structure, energetics and dynamics of biomolecules. Specialized software packages are required to analyze simulated trajectories, either interactively or via scripts, to derive quantities of interest and provide insight for further experiments. This paper presents the Density Profile Tool, a package that enhances the Visual Molecular Dynamics environment with the ability to interactively compute and visualize 1-D projections of various density functions of molecular models. We describe how the plugin is used to perform computations both via a graphical interface and programmatically. Results are presented for realistic examples, all-atom bilayer models, showing how mass and electron densities readily provide measurements such as membrane thickness, location of structural elements, and how they compare to X-ray diffraction experiments. Program summary Program title: Density Profile Tool Catalogue identifier: AEQM_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQM_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: yes No. of lines in distributed program, including test data, etc.: 1742 No. of bytes in distributed program, including test data, etc.: 12764 Distribution format: tar.gz Programming language: TCL/TK. Computer: Any, with or without graphical display. Operating system: Linux/Unix, OSX, Windows. RAM: VMD should be able to hold the trajectory in memory. Classification: 3, 23. External routines: VMD (version 1.9 or higher) ( http://www.ks.uiuc.edu/Research/vmd/ ). Nature of problem: Compute and visualize one-dimensional density profiles of molecular dynamics trajectories in the VMD environment, either interactively or programmatically. Solution method: Density profiles are computed by binning the simulation space into slabs of finite thickness. A graphical user interface allows the choice of the atomic property (number, mass, charge, electrons) and the details of the binning. Restrictions: The current version only supports orthorhombic cells. Unusual features: The Density Profile Tool is not a standalone program but a plug-in that enhances VMD’s analysis features. Running time: A contemporary PC completes the analysis of 500 frames of the example system discussed in the paper (35,000 atoms) in under 1 min.
|
Other analysis packages contain command-line tools to perform density profile computations similar to the one described here. The GROMACS distribution, for example, provides g_density, a stand-alone executable meant to be used from the command line or in shell scripts @cite_0 . This approach has the advantage of not being tied to a specific graphical or scripting environment; however, it can be limiting in three respects: first, a GUI may be desirable for quick one-off computations; second, the binaries mostly require GROMACS-specific file formats and topologies; finally, performing computations in shell scripts implies a programming model in which text files are used to store intermediate results -- a model which is more cumbersome than manipulating variables, the route afforded by conventional programming languages like TCL or Python.
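The core computation that such tools share is easy to sketch outside of either environment: bin atom coordinates along one axis into slabs and accumulate a weight (mass, charge, or electrons) per slab. The synthetic coordinates, masses and box geometry below are stand-ins for what VMD or GROMACS would supply; this is not the plugin's actual TCL code.

```python
# Sketch: 1-D mass-density profile by slab binning (synthetic inputs).
import numpy as np

rng = np.random.default_rng(5)
n_atoms, box_z, n_slabs = 35000, 80.0, 100     # sizes are illustrative
z = rng.random(n_atoms) * box_z                # atom z-coordinates (Angstrom)
mass = rng.choice([1.008, 12.011, 15.999], size=n_atoms)   # stand-in masses

edges = np.linspace(0.0, box_z, n_slabs + 1)   # slab boundaries along z
hist, _ = np.histogram(z, bins=edges, weights=mass)

slab_volume = 60.0 * 60.0 * (box_z / n_slabs)  # cross-section times thickness
density = hist / slab_volume                   # mass density per slab
centers = 0.5 * (edges[:-1] + edges[1:])       # slab midpoints for plotting
```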
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"1966078827"
],
"abstract": [
"Molecular simulation is an extremely useful, but computationally very expensive tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now both achieves extremely high performance on single processors from algorithmic optimizations and hand-coded routines and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations also in parallel. To improve the scaling properties of the common particle mesh Ewald electrostatics algorithms, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct and reciprocal space interactions. Not only does this combination of a..."
]
}
|
1308.5725
|
2131348552
|
Consider the Erdős–Rényi random graph on n vertices where each edge is present independently with probability c/n, with c>0 fixed. For large n, a typical random graph locally behaves like a Galton–Watson tree with Poisson offspring distribution with mean c. Here, we study large deviations from this typical behavior within the framework of the local weak convergence of finite graph sequences. The associated rate function is expressed in terms of an entropy functional on unimodular measures and takes finite values only at measures supported on trees. We also establish large deviations for other commonly studied random graph ensembles such as the uniform random graph with given number of edges growing linearly with the number of vertices, or the uniform random graph with given degree sequence. To prove our results, we introduce a new configuration model which allows one to sample uniform random graphs with a given neighborhood distribution, provided the latter is supported on trees. We also introduce a new class of unimodular random trees, which generalizes the usual Galton–Watson tree with given degree distribution to the case of neighborhoods of arbitrary finite depth. These generalized Galton–Watson trees turn out to be useful in the analysis of unimodular random trees and may be considered to be of interest in their own right.
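A quick numerical sanity check of the "locally Galton–Watson" statement (our toy experiment, not from the paper): the degree of a fixed vertex in this model is Binomial(n-1, c/n), which is close to the Poisson(c) offspring law of the limiting tree.

```python
# Compare the degree law of a fixed vertex in G(n, c/n) with Poisson(c).
# Purely illustrative; n, c and the trial count are arbitrary choices.
import numpy as np
from math import exp, factorial

n, c, trials = 10_000, 2.0, 200_000
rng = np.random.default_rng(1)
degrees = rng.binomial(n - 1, c / n, size=trials)

for k in range(6):
    empirical = np.mean(degrees == k)
    poisson = exp(-c) * c**k / factorial(k)
    print(f"k={k}: empirical {empirical:.4f}  vs  Poisson {poisson:.4f}")
```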
|
Large deviations in random graphs is a rapidly growing topic. For dense graphs, e.g. @math with fixed @math , a thorough treatment has been given recently by Chatterjee and Varadhan @cite_12 , in the framework of the cut topology introduced by Lovász and coauthors; see Borgs, Chayes, Lovász, Sós and Vesztergombi @cite_25 - @cite_14 . In the sparse regime, only a few partial results are known. O'Connell @cite_7 , Biskup, Chayes and Smith @cite_11 and Puhalskii @cite_4 have proven large deviation asymptotics for the connectivity and for the size of the connected components. Large deviations for degree sequences of Erdős–Rényi graphs have been studied in Doku-Amponsah and Mörters @cite_23 and Boucheron, Gamboa and Léonard [Theorem 7.1, BGL2002]. Closer to our approach, large deviations in the local weak topology were obtained for critical multi-type Galton–Watson trees by Dembo, Mörters and Sheffield @cite_6 . Finally, large deviations for other models of statistical physics on Erdős–Rényi graphs have been considered in Rivoire @cite_17 and Engel, Monasson, and Hartmann @cite_28 .
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_28",
"@cite_17",
"@cite_6",
"@cite_23",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"2137365113",
"1969411452",
"2063553966",
"2318709160",
"2132774091",
"2061282787",
"2162928332",
"2111754130",
"2041354341",
"2950173676"
],
"abstract": [
"We consider sequences of graphs (Gn) and define various notions of convergence related to these sequences including \"left-convergence,\" defined in terms of the densities of homomorphisms from small graphs into Gn, and \"right-convergence,\" defined in terms of the densities of homomorphisms from Gn into small graphs. We show that right-convergence is equivalent to left-convergence, both for simple graphs Gn, and for graphs Gn with nontrivial nodeweights and edgeweights. Other equivalent conditions for convergence are given in terms of fundamental notions from combinatorics, such as maximum cuts and Szemerédi partitions, and fundamental notions from statistical physics, like energies and free energies. We thereby relate local and global properties of graph sequences. Quantitative forms of these results express the relationships among different measures of similarity of large graphs.",
"We study the asymptotics of large, moderate and normal deviations for the connected components of the sparse random graph by the method of stochastic processes. We obtain the logarithmic asymptotics of large deviations of the joint distribution of the number of connected components, of the sizes of the giant components and of the numbers of the excess edges of the giant components. For the supercritical case, we obtain the asymptotics of normal deviations and the logarithmic asymptotics of large and moderate deviations of the joint distribution of the number of components, of the size of the largest component and of the number of the excess edges of the largest component. For the critical case, we obtain the logarithmic asymptotics of moderate deviations of the joint distribution of the sizes of connected components and of the numbers of the excess edges. Some related asymptotics are also established. The proofs of the large and moderate deviation asymptotics employ methods of idempotent probability theory. As a byproduct of the results, we provide some additional insight into the nature of phase transitions in sparse random graphs.",
"We obtain a large deviation principle (LDP) for the relative size of the largest connected component in a random graph with small edge probability. The rate function, which is not convex in general, is determined explicitly using a new technique. The proof yields an asymptotic formula for the probability that the random graph is connected.",
"We show that large deviation properties of Erdos-Renyi random graphs can be derived from the free energy of the q-state Potts model of statistical mechanics. More precisely the Legendre transform of the Potts free energy with respect to ln q is related to the component generating function of the graph ensemble. This generalizes the well-known mapping between typical properties of random graphs and the q→ 1 limit of the Potts free energy. For exponentially rare graphs we explicitly calculate the number of components, the size of the giant component, the degree distributions inside and outside the giant component, and the distribution of small component sizes. We also perform numerical simulations which are in very good agreement with our analytical work. Finally we demonstrate how the same results can be derived by studying the evolution of random graphs under the insertion of new vertices and edges, without recourse to the thermodynamics of the Potts model.",
"The one-step replica symmetry breaking cavity method is proposed as a new tool to investigate large deviations in random graph ensembles. The procedure hinges on a general connection between negative complexities and probabilities of rare samples in spin glass like models. This relation between large deviations and replica theory is made explicit on different models, where it is confronted with direct combinatorial calculations.",
"Given a finite typed rooted tree T with n vertices, the empirical subtree measure is the uniform measure on the n typed subtrees of T formed by taking all descendants of a single vertex. We prove a large deviation principle in n, with explicit rate function, for the empirical subtree measures of multitype Galton–Watson trees conditioned to have exactly n vertices. In the process, we extend the notions of shift-invariance and specific relative entropy—as typically understood for Markov fields on deterministic graphs such as Zd—to Markov fields on random trees. We also develop single-generation empirical measure large deviation principles for a more general class of random trees including trees sampled uniformly from the set of all trees with n vertices.",
"For any finite colored graph we define the empirical neighborhood measure, which counts the number of vertices of a given color connected to a given number of vertices of each color, and the empirical pair measure, which counts the number of edges connecting each pair of colors. For a class of models of sparse colored random graphs, we prove large deviation principles for these empirical measures in the weak topology. The rate functions governing our large deviation principles can be expressed explicitly in terms of relative entropies. We derive a large deviation principle for the degree distribution of Erdos-Renyi graphs near criticality.",
"We consider sequences of graphs (Gn) and define various notions of convergence related to these sequences: “left convergence” defined in terms of the densities of homomorphisms from small graphs into Gn; “right convergence” defined in terms of the densities of homomorphisms from Gn into small graphs; and convergence in a suitably defined metric. In Part I of this series, we show that left convergence is equivalent to convergence in metric, both for simple graphs Gn, and for graphs Gn with nodeweights and edgeweights. One of the main steps here is the introduction of a cut-distance comparing graphs, not necessarily of the same size. We also show how these notions of convergence provide natural",
"What does an Erdos-Renyi graph look like when a rare event happens? This paper answers this question when p is fixed and n tends to infinity by establishing a large deviation principle under an appropriate topology. The formulation and proof of the main result uses the recent development of the theory of graph limits by Lovasz and coauthors and Szemeredi's regularity lemma from graph theory. As a basic application of the general principle, we work out large deviations for the number of triangles in G(n,p). Surprisingly, even this simple example yields an interesting double phase transition.",
"We present a large-deviations thermodynamic approach to the classic problem of percolation on the complete graph. Specifically, we determine the large-deviation rate function for the probability that the giant component occupies a fixed fraction of the graph while all other components are \"small.\" One consequence is an immediate derivation of the \"cavity\" formula for the fraction of vertices in the giant component. As a by-product of our analysis we compute the large-deviation rate functions for the probability of the event that the random graph is connected, the event that it contains no cycles and the event that it contains only \"small\" components."
]
}
|
1308.5725
|
2131348552
|
Consider the Erdős–Rényi random graph on n vertices where each edge is present independently with probability c/n, with c>0 fixed. For large n, a typical random graph locally behaves like a Galton–Watson tree with Poisson offspring distribution with mean c. Here, we study large deviations from this typical behavior within the framework of the local weak convergence of finite graph sequences. The associated rate function is expressed in terms of an entropy functional on unimodular measures and takes finite values only at measures supported on trees. We also establish large deviations for other commonly studied random graph ensembles such as the uniform random graph with given number of edges growing linearly with the number of vertices, or the uniform random graph with given degree sequence. To prove our results, we introduce a new configuration model which allows one to sample uniform random graphs with a given neighborhood distribution, provided the latter is supported on trees. We also introduce a new class of unimodular random trees, which generalizes the usual Galton–Watson tree with given degree distribution to the case of neighborhoods of arbitrary finite depth. These generalized Galton–Watson trees turn out to be useful in the analysis of unimodular random trees and may be considered to be of interest in their own right.
|
As far as we know, this is the first time that large deviations of the neighborhood distribution are addressed in a systematic way. While our approach does not cover results on connectivity and the size of connected components such as @cite_7 , it does yield a simplification of some of the existing arguments concerning the large deviations for degree sequences. We point out that our Corollary gives a corrected version of [Corollary 2.2] of MR2759726 . Under a stronger sparsity assumption, large deviations of neighborhood distributions for random networks have been used in @cite_24 to study the large deviations of the spectral measure of certain random matrices.
|
{
"cite_N": [
"@cite_24",
"@cite_7"
],
"mid": [
"2053990689",
"2063553966"
],
"abstract": [
"We consider @math Hermitian matrices with i.i.d. entries @math whose tail probabilities @math behave like @math for some @math and @math . We establish a large deviation principle for the empirical spectral measure of @math with speed @math with a good rate function @math that is finite only if @math is of the form @math for some probability measure @math on @math , where @math denotes the free convolution and @math is Wigner's semicircle law. We obtain explicit expressions for @math in terms of the @math th moment of @math . The proof is based on the analysis of large deviations for the empirical distribution of very sparse random rooted networks.",
"We obtain a large deviation principle (LDP) for the relative size of the largest connected component in a random graph with small edge probability. The rate function, which is not convex in general, is determined explicitly using a new technique. The proof yields an asymptotic formula for the probability that the random graph is connected."
]
}
|
1308.5272
|
2951836131
|
Although production is an integral part of the Arrow-Debreu market model, most of the work in theoretical computer science has so far concentrated on markets without production, i.e., the exchange economy. This paper takes a significant step towards understanding computational aspects of markets with production. We first define the notion of separable, piecewise-linear concave (SPLC) production by analogy with SPLC utility functions. We then obtain a linear complementarity problem (LCP) formulation that captures exactly the set of equilibria for Arrow-Debreu markets with SPLC utilities and SPLC production, and we give a complementary pivot algorithm for finding an equilibrium. This settles a question asked by Eaves in 1975 of extending his complementary pivot algorithm to markets with production. Since this is a path-following algorithm, we obtain a proof of membership of this problem in PPAD, using a result of Todd (1976). We also obtain an elementary proof of existence of equilibrium (i.e., without using a fixed point theorem), rationality, and oddness of the number of equilibria. We further give a proof of PPAD-hardness for this problem and also for its restriction to markets with linear utilities and SPLC production. Experiments show that our algorithm runs fast on randomly chosen examples, and unlike previous approaches, it does not suffer from issues of numerical instability. Additionally, it is strongly polynomial when the number of goods or the number of agents and firms is constant. This extends the result of Devanur and Kannan (2008) to markets with production. Finally, we show that an LCP-based approach cannot be extended to PLC (non-separable) production, by constructing an example which has only irrational equilibria.
|
Jain and Varadarajan @cite_42 studied Arrow-Debreu markets with production, and gave a polynomial time algorithm for production and utility functions coming from a subclass of CES (constant elasticity of substitution) functions, i.e., constant returns to scale (CRS) production. They also gave a reduction from the exchange market with CES utilities to a linear utilities market in which firms have CES production. Our reduction from the exchange market to a linear utilities market with arbitrary production is inspired by their reduction but is more general.
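To make the LCP terminology above concrete, here is the standard form such formulations instantiate; this is a generic sketch in our notation, not the paper's actual matrix, which is assembled from the market data (utilities, endowments, production).

```latex
% Generic LCP(M, q); the paper's formulation instantiates M and q with
% market data. Notation here is ours, chosen for illustration only.
\[
\text{find } y \in \mathbb{R}^{m},\; y \ge 0, \quad\text{such that}\quad
My + q \ge 0 \quad\text{and}\quad y^{\top}(My + q) = 0 .
\]
% A complementary pivot (Lemke-type) algorithm follows a piecewise-linear
% path of almost-complementary solutions until a complementary one is found.
```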
|
{
"cite_N": [
"@cite_42"
],
"mid": [
"2060179078"
],
"abstract": [
"We consider the computation of equilibria in two economic models that generalize the exchange model by including production. In the constant returns model, each producer has a convex, constant-returns-to-scale, technology. In particular, this means that if the technology can output a certain quantity of a good using as input certain quantities of other goods, then scaling all these quantities by a common, non-negative, number also results in a technologically feasible plan. The technology also accomodates the no-free-lunch property, which says that it is not possible to produce something from nothing. At a given price, the producer picks a technologically feasible plan that maximizes her profit. Associated with each consumer is an initial endowment of goods and a utility function that describes her preferences between various bundles of goods. At a given price, the consumer sells her initial endowment, thus obtaining a certain income, and demands the bundle of goods maximizing her utility among all bundles that she can afford at the given price with her income."
]
}
|
1308.4287
|
2951145705
|
Let G(n,d) be the random d-regular graph on n vertices. For any integer k exceeding a certain constant k_0 we identify a number d_{k-col} such that G(n,d) is k-colorable w.h.p. if d < d_{k-col} and non-k-colorable w.h.p. if d > d_{k-col}.
|
With respect to random regular graphs @math , Frieze and Łuczak @cite_20 proved a result akin to Łuczak's @cite_14 for @math . In fact, Cooper, Frieze, Reed and Riordan @cite_5 extended this result to the regime @math for any fixed @math , and Krivelevich, Sudakov, Vu and Wormald @cite_33 further still to @math . For @math fixed as @math , the bounds from @cite_20 were improved by the aforementioned contributions @cite_18 @cite_17 .
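For concreteness, @cite_18 pins the chromatic number of G(n,d) down to one of three values around the smallest integer k with d < 2k log k; the snippet below just computes that k (our reading of the statement, with log the natural logarithm).

```python
# Smallest integer k with d < 2*k*log(k), as in the statement of @cite_18.
from math import log

def smallest_k(d):
    k = 2                      # start at 2 so that log(k) > 0
    while d >= 2 * k * log(k):
        k += 1
    return k

for d in (3, 10, 100, 1000):
    print(d, smallest_k(d))    # chromatic number is k, k+1 or k+2 w.h.p.
```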
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_33",
"@cite_5",
"@cite_20",
"@cite_17"
],
"mid": [
"2165387581",
"2012724629",
"",
"2139757566",
"2018483383",
""
],
"abstract": [
"Given any integer d ≥ 3, let k be the smallest integer such that d < 2k log k. We prove that with high probability the chromatic number of a random d-regular graph is k, k+1, or k+2.",
"Let χ(G(n, p)) denote the chromatic number of the random graphG(n, p). We prove that there exists a constantd0 such that fornp(n)>d0,p(n)→0, the probability that @math tends to 1 asn→∞.",
"",
"Let r = r(n) → ∞ with 3 l r l n1−η for an arbitrarily small constant η > 0, and let Gr denote a graph chosen uniformly at random from the set of r-regular graphs with vertex set l1, 2, …, nr. We prove that, with probability tending to 1 as n → ∞, Gr has the following properties: the independence number of Gr is asymptotically 2n log r r and the chromatic number of Gr is asymptotically r 2nlogr.",
"Abstract Let G r denote a random r -regular graph with vertex set 1, 2, …, n and α ( G r ) and χ ( G r ) denote respectively its independence and chromatic numbers. We show that with probability going to 1 as n → ∞ respectively |δ(G r ) − 2n r ( log r − log log r + 1 − log 2)|⩽ γn r and |χ(G r ) − r 2 log r − 8r log log r ( log ) 2 | ⩽ 8r log log r ( log r ) 2 provided r = o ( n θ ), θ 1 3 , 0 r ≥ r e , where r e depends on e only.",
""
]
}
|
1308.4088
|
2952478211
|
Computing the roots of a univariate polynomial is a fundamental and long-studied problem of computational algebra with applications in mathematics, engineering, computer science, and the natural sciences. For isolating as well as for approximating all complex roots, the best algorithm known is based on an almost optimal method for approximate polynomial factorization, introduced by Pan in 2002. Pan's factorization algorithm goes back to the splitting circle method from Schoenhage in 1982. The main drawbacks of Pan's method are that it is quite involved and that all roots have to be computed at the same time. For the important special case, where only the real roots have to be computed, much simpler methods are used in practice; however, they considerably lag behind Pan's method with respect to complexity. In this paper, we resolve this discrepancy by introducing a hybrid of the Descartes method and Newton iteration, denoted ANEWDSC, which is simpler than Pan's method, but achieves a run-time comparable to it. Our algorithm computes isolating intervals for the real roots of any real square-free polynomial, given by an oracle that provides arbitrarily good approximations of the polynomial's coefficients. ANEWDSC can also be used to only isolate the roots in a given interval and to refine the isolating intervals to an arbitrarily small size; it achieves near-optimal complexity for the latter task.
|
Isolating the roots of a polynomial is a fundamental and well-studied problem. One is either interested in isolating all roots, or all real roots, or all roots in a certain subset of the complex plane. A related problem is the approximate factorization of a polynomial, that is, to find approximations @math to the roots @math such that @math is small. Given the number of distinct complex roots of a polynomial @math , one can derive isolating disks for all roots from a sufficiently good approximate factorization of @math ; see @cite_26 . In particular, this approach applies to polynomials that are known to be square-free.
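One standard way to make "approximate factorization" precise, filling in the placeholders above in our own (hedged) notation: given p of degree n and a target accuracy b, find complex numbers that satisfy

```latex
% Approximate factorization (one common formulation; notation ours):
\[
\Big\lVert\, p \;-\; \mathrm{lc}(p)\,\prod_{i=1}^{n}\,(x-\tilde z_i) \Big\rVert_{1}
\;\le\; 2^{-b}\,\lVert p \rVert_{1}.
\]
```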
|
{
"cite_N": [
"@cite_26"
],
"mid": [
"2134534233"
],
"abstract": [
"We present an algorithm for isolating all roots of an arbitrary complex polynomial p that also works in the presence of multiple roots provided that (1) the number of distinct roots is given as part of the input and (2) the algorithm can ask for arbitrarily good approximations of the coefficients of p. The algorithm outputs pairwise disjoint disks each containing one of the distinct roots of p and the multiplicity of the root contained in the disk. The algorithm uses approximate factorization as a subroutine. For the case where Pan's algorithm (Pan, 2002) is used for the factorization, we derive complexity bounds for the problems of isolating and refining all roots, which are stated in terms of the geometric locations of the roots only. Specializing the latter bounds to a polynomial of degree d and with integer coefficients of bitsize less than @t, we show that [email protected]?(d^3+d^[email protected][email protected]) bit operations are sufficient to compute isolating disks of size less than 2^-^@k for all roots of p, where @k is an arbitrary positive integer. In addition, we apply our root isolation algorithm to a recent algorithm for computing the topology of a real planar algebraic curve specified as the zero set of a bivariate integer polynomial and for isolating the real solutions of a bivariate polynomial system. For polynomials of degree n and bitsize @t, we improve the currently best running time from [email protected]?(n^[email protected]+n^[email protected]^2) (deterministic) to [email protected]?(n^6+n^[email protected]) (randomized) for topology computation and from [email protected]?(n^8+n^[email protected]) (deterministic) to [email protected]?(n^6+n^[email protected]) (randomized) for solving bivariate systems."
]
}
|
1308.4088
|
2952478211
|
Computing the roots of a univariate polynomial is a fundamental and long-studied problem of computational algebra with applications in mathematics, engineering, computer science, and the natural sciences. For isolating as well as for approximating all complex roots, the best algorithm known is based on an almost optimal method for approximate polynomial factorization, introduced by Pan in 2002. Pan's factorization algorithm goes back to the splitting circle method from Schoenhage in 1982. The main drawbacks of Pan's method are that it is quite involved and that all roots have to be computed at the same time. For the important special case, where only the real roots have to be computed, much simpler methods are used in practice; however, they considerably lag behind Pan's method with respect to complexity. In this paper, we resolve this discrepancy by introducing a hybrid of the Descartes method and Newton iteration, denoted ANEWDSC, which is simpler than Pan's method, but achieves a run-time comparable to it. Our algorithm computes isolating intervals for the real roots of any real square-free polynomial, given by an oracle that provides arbitrarily good approximations of the polynomial's coefficients. ANEWDSC can also be used to only isolate the roots in a given interval and to refine the isolating intervals to an arbitrarily small size; it achieves near-optimal complexity for the latter task.
|
Many algorithms for approximate factorization and root isolation are known; see @cite_12 @cite_37 @cite_33 @cite_15 for surveys. The algorithms can be roughly split into two groups: there are iterative methods for simultaneously approximating all roots (or a single root if a sufficiently good approximation is already known), and there are subdivision methods that start with a region containing all the roots of interest, subdivide this region according to certain rules, and use inclusion- and exclusion-predicates to certify that a region contains exactly one root or no root. Prominent examples of the former group are the Aberth-Ehrlich method (used in MPSolve @cite_0 ) and the Weierstrass-Durand-Kerner method. These algorithms work well in practice and are widely used. However, a complexity analysis and a global convergence proof are missing. Prominent examples of the latter group are the Descartes method @cite_28 @cite_40 @cite_1 @cite_36 , the Bolzano method @cite_19 @cite_31 , the Sturm method @cite_8 , the continued fraction method @cite_44 @cite_21 @cite_23 , and the splitting circle method @cite_27 @cite_29 .
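To make the iterative group concrete, here is a bare-bones sketch of the Weierstrass-Durand-Kerner iteration for a monic polynomial; as the text notes, no global convergence proof is known, so this is illustrative only, and the function name and starting points are our choices.

```python
# Weierstrass-Durand-Kerner: update each root estimate z_i by the correction
# p(z_i) / prod_{j != i} (z_i - z_j); coefficients are highest-degree first.
import numpy as np

def durand_kerner(coeffs, iterations=200):
    n = len(coeffs) - 1
    z = (0.4 + 0.9j) ** np.arange(n)        # standard generic starting points
    for _ in range(iterations):
        for i in range(n):
            denom = np.prod(z[i] - np.delete(z, i))
            z[i] -= np.polyval(coeffs, z[i]) / denom
    return z

print(np.sort_complex(durand_kerner([1.0, 0.0, -2.0])))   # ~ +-sqrt(2)
```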
|
{
"cite_N": [
"@cite_37",
"@cite_31",
"@cite_33",
"@cite_8",
"@cite_28",
"@cite_36",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_0",
"@cite_19",
"@cite_40",
"@cite_44",
"@cite_27",
"@cite_23",
"@cite_15",
"@cite_12"
],
"mid": [
"627545049",
"",
"",
"74955198",
"",
"",
"2144619406",
"2048184489",
"",
"",
"2084184830",
"1511166374",
"2099287615",
"2138897906",
"2168222442",
"1995250895",
"1531175519"
],
"abstract": [
"Numerical Methods for Roots of Polynomials - Part II along with Part I (9780444527295) covers most of the traditional methods for polynomial root-finding such as interpolation and methods due to Graeffe, Laguerre, and Jenkins and Traub. It includes many other methods and topics as well and has a chapter devoted to certain modern virtually optimal methods. Additionally, there are pointers to robust and efficient programs. This book is invaluable to anyone doing research in polynomial roots, or teaching a graduate course on that topic. It is the first comprehensive treatment of Root-Finding in several decades with a description of high-grade software and where it can be downloaded. It offers a long chapter on matrix methods and includes Parallel methods and errors where appropriate. It proves invaluable for research or graduate course.",
"",
"",
"This paper presents two results on the complexity of root isolation via Sturm sequences. Both results exploit amortization arguments.",
"",
"",
"To approximate all roots (zeros) of a univariate polynomial, we develop two effective algorithms and combine them in a single recursive process. One algorithm computes a basic well isolated zero-free annulus on the complex plane, whereas another algorithm numerically splits the input polynomial of the nth degree into two factors balanced in the degrees and with the zero sets separated by the basic annulus, Recursive combination of the two algorithms leads to computation of the complete numerical factorization of a polynomial into the product of linear factors and further to the approximation of the roots. The new root-finder incorporates the earlier techniques of Schonhage, Neff Reif, and Kirrinnis and our old and new techniques and yields nearly optimal (up to polylogarithmic factors) arithmetic and Boolean cost estimates for the computational complexity of both complete factorization and root-finding. The improvement over our previous record Boolean complexity estimates is by roughly the factor of n for complete factorization and also for the approximation of well-conditioned (well isolated) roots, whereas the same algorithm is also optimal (under both arithmetic and Boolean models of computing) for the worst case input polynomial, whose roots can be ill-conditioned, forming clusters. (The worst case complexity bounds for root-finding are supported by our previous algorithms as well.) All algorithms allow processor efficient acceleration to achieve solution in polylogarithmic parallel time.",
"In this paper, we provide polynomial bounds on the worst case bit-complexity of two formulations of the continued fraction algorithm. In particular, for a square-free integer polynomial of degree n with coefficients of bit-length L, we show that the bit-complexity of Akritas' formulation is O@?(n^8L^3), and the bit-complexity of a formulation by Akritas and Strzebonski is O@?(n^7L^2); here O@? indicates that we are omitting logarithmic factors. The analyses use a bound by Hong to compute the floor of the smallest positive root of a polynomial, which is a crucial step in the continued fraction algorithm. We also propose a modification of the latter formulation that achieves a bit-complexity of O@?(n^5L^2).",
"",
"",
"Let f be a univariate polynomial with real coefficients, f@?R[X]. Subdivision algorithms based on algebraic techniques (e.g., Sturm or Descartes methods) are widely used for isolating the real roots of f in a given interval. In this paper, we consider a simple subdivision algorithm whose primitives are purely numerical (e.g., function evaluation). The complexity of this algorithm is adaptive because the algorithm makes decisions based on local data. The complexity analysis of adaptive algorithms (and this algorithm in particular) is a new challenge for computer science. In this paper, we compute the size of the subdivision tree for the SqFreeEVAL algorithm. The SqFreeEVAL algorithm is an evaluation-based numerical algorithm which is well-known in several communities. The algorithm itself is simple, but prior attempts to compute its complexity have proven to be quite technical and have yielded sub-optimal results. Our main result is a simple O(d(L+lnd)) bound on the size of the subdivision tree for the SqFreeEVAL algorithm on the benchmark problem of isolating all real roots of an integer polynomial f of degree d and whose coefficients can be written with at most L bits. Our proof uses two amortization-based techniques: first, we use the algebraic amortization technique of the standard Mahler-Davenport root bounds to interpret the integral in terms of d and L. Second, we use a continuous amortization technique based on an integral to bound the size of the subdivision tree. This paper is the first to use the novel analysis technique of continuous amortization to derive state of the art complexity bounds.",
"Collins und Akritas (1976) have described the Descartes method for isolating the real roots of an integer polynomial in one variable. This method recursively subdivides an initial interval until Descartes' Rule of Signs indicates that all roots have been isolated. The partial converse of Descartes' Rule by Obreshkoff (1952) in conjunction with the bound of Mahler (1964) and Davenport (1985) leads us to an asymptotically almost tight bound for the resulting subdivision tree. It implies directly the best known complexity bounds for the equivalent forms of the Descartes method in the power basis (Collins Akritas, 1976), the Bernstein basis (Lane Riesenfeld, 1981) and the scaled Bernstein basis (Johnson, 1991), which are presented here in a unified fashion. Without losing correctness of the output, we modify the Descartes method such that it can handle bitstream coefficients, which can be approximated arbitrarily well but cannot be determined exactly. We analyze the computing time and precision requirements. The method described elsewhere by the author together with Kerber Wolpert (2007) and Kerber (2008) to determine the arrangement of plane algebraic curves rests in an essential way on variants of the bitstream Descartes algorithm; we analyze a central part of it. Collins und Akritas (1976) haben das Descartes-Verfahren zur Einschliesung der reellen Nullstellen eines ganzzahligen Polynoms in einer Veranderlichen angegeben. Das Verfahren unterteilt rekursiv ein Ausgangsintervall, bis die Descartes'sche Vorzeichenregel anzeigt, dass alle Nullstellen getrennt worden sind. Die partielle Umkehrung der Descartes'schen Regel nach Obreschkoff (1952) in Verbindung mit der Schranke von Mahler (1964) und Davenport (1985) fuhrt uns auf eine asymptotisch fast scharfe Schranke fur den sich ergebenden Unterteilungsbaum. Daraus folgen direkt die besten bekannten Komplexitatsschranken fur die aquivalenten Formen des Descartes-Verfahrens in der Monom-Basis (Collins Akritas, 1976), der Bernstein-Basis (Lane Riesenfeld, 1981) und der skalierten Bernstein-Basis (Johnson, 1991), die hier vereinheitlicht dargestellt werden. Ohne dass die Korrektheit der Ausgabe verloren geht, modifizieren wir das Descartes-Verfahren so, dass es mit \"Bitstream\"-Koeffizienten umgehen kann, die beliebig genau angenahert, aber nicht exakt bestimmt werden konnen. Wir analysieren die erforderliche Rechenzeit und Prazision. Das vom Verfasser mit Kerber Wolpert (2007) und Kerber (2008) an anderer Stelle beschriebene Verfahren zur Bestimmung des Arrangements (der Schnittfigur) ebener algebraischer Kurven fust wesentlich auf Varianten des Bitstream-Descartes-Verfahrens; wir analysieren einen zentralen Teil davon.",
"Recent progress in polynomial elimination has rendered the computation of the real roots of ill-conditioned polynomials of high degree (over 1000) with huge coefficients (several thousand digits) a critica l operation in computer algebra. To rise to the occasion, the only method-candidate that has been considered by various authors for modification and improvement has been th e Collins-Akritas bisection method (1), which is a based on a variation of Vincent's theor em (2). The most recent example is the paper by Rouillier and Zimmermann (3), where the authors present",
"",
"We present algorithmic, complexity and implementation results concerning real root isolation of integer univariate polynomials using the continued fraction expansion of real algebraic numbers. One motivation is to explain the method's good performance in practice. We derive an expected complexity bound of [email protected]?\"B(d^6+d^[email protected]^2), where d is the polynomial degree and @t bounds the coefficient bit size, using a standard bound on the expected bit size of the integers in the continued fraction expansion, thus matching the current worst-case complexity bound for real root isolation by exact methods (Sturm, Descartes and Bernstein subdivision). Moreover, using a homothetic transformation we improve the expected complexity bound to [email protected]?\"B(d^[email protected]). We compute the multiplicities within the same complexity and extend the algorithm to non-square-free polynomials. Finally, we present an open-source C++ implementation in the algebraic library synaps, and illustrate its completeness and efficiency as compared to some other available software. For this we use polynomials with coefficient bit size up to 8000 bits and degree up to 1000.",
"The classical problem of solving an nth degree polynomial equation has substantially influenced the development of mathematics throughout the centuries and still has several important applications to the theory and practice of present-day computing. We briefly recall the history of the algorithmic approach to this problem and then review some successful solution algorithms. We end by outlining some algorithms of 1995 that solve this problem at a surprisingly low computational cost.",
""
]
}
|
1308.4088
|
2952478211
|
Computing the roots of a univariate polynomial is a fundamental and long-studied problem of computational algebra with applications in mathematics, engineering, computer science, and the natural sciences. For isolating as well as for approximating all complex roots, the best algorithm known is based on an almost optimal method for approximate polynomial factorization, introduced by Pan in 2002. Pan's factorization algorithm goes back to the splitting circle method from Schoenhage in 1982. The main drawbacks of Pan's method are that it is quite involved and that all roots have to be computed at the same time. For the important special case, where only the real roots have to be computed, much simpler methods are used in practice; however, they considerably lag behind Pan's method with respect to complexity. In this paper, we resolve this discrepancy by introducing a hybrid of the Descartes method and Newton iteration, denoted ANEWDSC, which is simpler than Pan's method, but achieves a run-time comparable to it. Our algorithm computes isolating intervals for the real roots of any real square-free polynomial, given by an oracle that provides arbitrarily good approximations of the polynomial's coefficients. ANEWDSC can also be used to only isolate the roots in a given interval and to refine the isolating intervals to an arbitrarily small size; it achieves near-optimal complexity for the latter task.
|
The Descartes, Sturm, and continued fraction methods isolate only the real roots. They are popular for their simplicity, ease of implementation, and practical efficiency. The papers @cite_38 @cite_4 @cite_36 report on implementations and experimental comparisons. The price for this simplicity is a considerably larger worst-case complexity. We concentrate on the Descartes method.
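For reference, a compact and purely illustrative Python sketch of the Descartes method's core: count sign variations of the coefficients after the Möbius transformation that maps (0, ∞) onto the interval, and recurse while the count exceeds one. It assumes a square-free polynomial whose roots avoid the subdivision points; a real implementation checks and handles those cases, and all names here are ours.

```python
import numpy as np

def sign_variations(c):
    signs = [s for s in np.sign(c) if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def descartes_count(c, a, b):
    # Sign variations of (1+x)^n * p((a + b*x) / (1+x)), built by a
    # Horner-style recurrence; this upper-bounds the roots of p in (a, b)
    # and is exact when it equals 0 or 1 (Descartes / Obreshkoff).
    q, pw = np.array([c[0]], dtype=float), np.array([1.0])
    for ci in c[1:]:
        pw = np.polymul(pw, [1.0, 1.0])               # (1+x)^i
        q = np.polyadd(np.polymul(q, [b, a]), ci * pw)
    return sign_variations(q)

def isolate(c, a, b, out):
    v = descartes_count(c, a, b)
    if v == 1:
        out.append((a, b))                            # isolating interval
    elif v > 1:
        m = (a + b) / 2
        isolate(c, a, m, out)
        isolate(c, m, b, out)

out = []
isolate([1.0, 0.0, -2.0], -2.0, 2.0, out)             # p(x) = x^2 - 2
print(out)                                            # one interval per root
```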
|
{
"cite_N": [
"@cite_36",
"@cite_38",
"@cite_4"
],
"mid": [
"",
"2134013328",
"2185262283"
],
"abstract": [
"",
"Real solving of univariate polynomials is a fundamental problem with several important applications. This paper is focused on the comparison of black-box implementations of state-of-the-art algorithms for isolating real roots of univariate polynomials over the integers. We have tested 9 different implementations based on symbolic-numeric methods, Sturm sequences, Continued Fractions and Descartes' rule of sign. The methods under consideration were developed at the GALAAD group at INRIA,the VEGAS group at LORIA and the MPI Saarbrucken. We compared their sensitivity with respect to various aspects such as degree, bitsize or root separation of the input polynomials. Our datasets consist of 5,000 polynomials from many different settings, which have maximum coefficient bitsize up to bits 8,000, and the total running time of the experiments was about 50 hours. Thereby, all implementations of the theoretically exact methods always provided correct results throughout this extensive study. For each scenario we identify the currently most adequate method, and we point to weaknesses in each approach, which should lead to further improvements. Our results indicate that there is no \"best method\" overall, but one can say that for most instances the solvers based on Continued Fractions are among the best methods. To the best of our knowledge, this is the largest number of tests for univariate real solving up to date.",
"This thesis deals with the application of subdivision based algorithms to the problem of isolating the roots of a complex polynomial. We provide a comprehensive comparison of the performance of three interval arithmetic based predicates (the interval Newton, Krawczyk and Hansen-Sengupta operators) with predicates based on complex analysis (the CEVAL algorithm and Yakoubsohn’s approach). In addition, we include a treatment of the mathematical theory behind these operators."
]
}
|
1308.4088
|
2952478211
|
Computing the roots of a univariate polynomial is a fundamental and long-studied problem of computational algebra with applications in mathematics, engineering, computer science, and the natural sciences. For isolating as well as for approximating all complex roots, the best algorithm known is based on an almost optimal method for approximate polynomial factorization, introduced by Pan in 2002. Pan's factorization algorithm goes back to the splitting circle method from Schoenhage in 1982. The main drawbacks of Pan's method are that it is quite involved and that all roots have to be computed at the same time. For the important special case, where only the real roots have to be computed, much simpler methods are used in practice; however, they considerably lag behind Pan's method with respect to complexity. In this paper, we resolve this discrepancy by introducing a hybrid of the Descartes method and Newton iteration, denoted ANEWDSC, which is simpler than Pan's method, but achieves a run-time comparable to it. Our algorithm computes isolating intervals for the real roots of any real square-free polynomial, given by an oracle that provides arbitrarily good approximations of the polynomial's coefficients. ANEWDSC can also be used to only isolate the roots in a given interval and to refine the isolating intervals to an arbitrarily small size; it achieves near-optimal complexity for the latter task.
|
The standard Descartes method has a complexity of @math for isolating the real roots of an integer polynomial of degree @math with coefficients bounded by @math in absolute value, see @cite_17 . The size of the recursion tree is @math , and @math arithmetic operations on numbers of bitsize @math need to be performed at each node. For @math , these bounds are tight, that is, there are examples where the recursion tree has size @math and the numbers to be handled grow to integers of length @math bits.
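Reading the placeholders with their commonly cited values (tree size O(n(τ + log n)) as in @cite_17, O(n^2) arithmetic operations per node on numbers of bitsize Õ(nτ)), the stated complexity assembles as below; this is our reconstruction of the accounting, not a quote from the paper.

```latex
% Standard accounting for the Descartes method (our reconstruction,
% with assumed values for the placeholders in the surrounding text):
\[
\underbrace{O\big(n(\tau+\log n)\big)}_{\text{tree size}}
\;\cdot\;
\underbrace{O(n^{2})}_{\text{ops per node}}
\;\cdot\;
\underbrace{\tilde O(n\tau)}_{\text{operand bitsize}}
\;=\;
\tilde O\big(n^{4}\tau^{2}\big)\ \text{bit operations.}
\]
```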
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"2158622794"
],
"abstract": [
"We give a unified (\"basis free\") framework for the Descartes method for real root isolation of square-free real polynomials. This framework encompasses the usual Descartes' rule of sign method for polynomials in the power basis as well as its analog in the Bernstein basis. We then give a new bound on the size of the recursion tree in the Descartes method for polynomials with real coefficients. Applied to polynomials A(X) = Eni=0 aiXi with integer coefficients |ai| < 2L, this yields a bound of O(n(L + logn)) on the size of recursion trees. We show that this bound is tight for L = Ω(logn), and we use it to derive the best known bit complexity bound for the integer case."
]
}
|
1308.4088
|
2952478211
|
Computing the roots of a univariate polynomial is a fundamental and long-studied problem of computational algebra with applications in mathematics, engineering, computer science, and the natural sciences. For isolating as well as for approximating all complex roots, the best algorithm known is based on an almost optimal method for approximate polynomial factorization, introduced by Pan in 2002. Pan's factorization algorithm goes back to the splitting circle method from Schoenhage in 1982. The main drawbacks of Pan's method are that it is quite involved and that all roots have to be computed at the same time. For the important special case, where only the real roots have to be computed, much simpler methods are used in practice; however, they considerably lag behind Pan's method with respect to complexity. In this paper, we resolve this discrepancy by introducing a hybrid of the Descartes method and Newton iteration, denoted ANEWDSC, which is simpler than Pan's method, but achieves a run-time comparable to it. Our algorithm computes isolating intervals for the real roots of any real square-free polynomial, given by an oracle that provides arbitrarily good approximations of the polynomial's coefficients. ANEWDSC can also be used to only isolate the roots in a given interval and to refine the isolating intervals to an arbitrarily small size; it achieves near-optimal complexity for the latter task.
|
The bit complexity of our new algorithm is @math for integer polynomials. As in @cite_20 , the size of the recursion tree is @math due to the combination of bisection and Newton steps. The number of arithmetic operations per node is @math , and arithmetic is performed on numbers of amortized bitsize @math (instead of @math as in @cite_20 ) due to the use of approximate multipoint evaluation and approximate Taylor shift.
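With plausible standard values filled in for the placeholders (tree size O(n log(nτ)), Õ(n) operations per node, amortized operand bitsize Õ(n + τ)), the factors multiply out as sketched below; again this is our reconstruction under those assumptions, not the paper's own statement.

```latex
% Accounting for the hybrid method (our reconstruction, assumed values):
\[
\underbrace{O\big(n\log(n\tau)\big)}_{\text{tree size}}
\;\cdot\;
\underbrace{\tilde O(n)}_{\text{ops per node}}
\;\cdot\;
\underbrace{\tilde O(n+\tau)}_{\text{amortized bitsize}}
\;=\;
\tilde O\big(n^{3}+n^{2}\tau\big)\ \text{bit operations.}
\]
```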
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"1997509004"
],
"abstract": [
"We introduce a novel algorithm denoted NewDsc to isolate the real roots of a univariate square-free polynomial f with integer coefficients. The algorithm iteratively subdivides an initial interval which is known to contain all real roots of f and performs exact (rational) operations on the coefficients of f in each step. For the subdivision strategy, we combine Descartes' Rule of Signs and Newton iteration. More precisely, instead of using a fixed subdivision strategy such as bisection in each iteration, a Newton step based on the number of sign variations for an actual interval is considered, and, only if the Newton step fails, we fall back to bisection. Following this approach, quadratic convergence towards the real roots is achieved in most iterations. In terms of complexity, our method induces a recursion tree of almost optimal size O(n·log(nτ)), where n denotes the degree of the polynomial and τ the bitsize of its coefficients. The latter bound constitutes an improvement by a factor of τ upon all existing subdivision methods for the task of isolating the real roots. We further provide a detailed complexity analysis which shows that NewDsc needs only O(n3τ) bit operations to isolate all real roots of f. In comparison to existing asymptotically fast numerical algorithms (e.g. the algorithms by V. Pan and A. Schonhage), NewDsc is much easier to access and, due to its similarities to the classical Descartes method, it seems to be well suited for an efficient implementation."
]
}
|
1308.4088
|
2952478211
|
Computing the roots of a univariate polynomial is a fundamental and long-studied problem of computational algebra with applications in mathematics, engineering, computer science, and the natural sciences. For isolating as well as for approximating all complex roots, the best algorithm known is based on an almost optimal method for approximate polynomial factorization, introduced by Pan in 2002. Pan's factorization algorithm goes back to the splitting circle method from Schoenhage in 1982. The main drawbacks of Pan's method are that it is quite involved and that all roots have to be computed at the same time. For the important special case, where only the real roots have to be computed, much simpler methods are used in practice; however, they considerably lag behind Pan's method with respect to complexity. In this paper, we resolve this discrepancy by introducing a hybrid of the Descartes method and Newton iteration, denoted ANEWDSC, which is simpler than Pan's method, but achieves a run-time comparable to it. Our algorithm computes isolating intervals for the real roots of any real square-free polynomial, given by an oracle that provides arbitrarily good approximations of the polynomial's coefficients. ANEWDSC can also be used to only isolate the roots in a given interval and to refine the isolating intervals to an arbitrarily small size; it achieves near-optimal complexity for the latter task.
|
Root refinement is the process of computing better approximations once the roots are isolated. In @cite_3 @cite_35 @cite_13 @cite_26 @cite_16 , algorithms have been proposed which scale like @math for large @math . The former two algorithms are based on the splitting circle approach and compute approximations of all complex roots. The latter two solutions are dedicated to approximating only the real roots. They combine a fast convergence method (i.e., the secant method and Newton iteration, respectively) with approximate arithmetic and efficient multipoint evaluation; however, there are details missing in @cite_16 when using multipoint evaluation. In order to achieve complexity bounds comparable to the one stated in Theorem , the methods from @cite_3 @cite_16 need as input isolating intervals whose size is comparable to the separation of the corresponding root, that is, the roots must be "well isolated". This is typically achieved by using a fast method, such as Pan's method, for complex root isolation first. Our algorithm does not need such a preprocessing step.
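The generic idea behind such refinement procedures can be sketched in a few lines: try a Newton step, accept it only if it stays inside the current isolating interval, and fall back to bisection otherwise. This is a textbook scheme in our own notation, not the authors' exact Newton-Test, and it omits the approximate arithmetic and multipoint evaluation that make the cited methods fast.

```python
# Refine an isolating interval (lo, hi) with p(lo)*p(hi) < 0 for a simple
# real root of a square-free polynomial; p and dp are plain callables.
def refine(p, dp, lo, hi, eps):
    while hi - lo > eps:
        mid = (lo + hi) / 2
        x = mid - p(mid) / dp(mid) if dp(mid) != 0 else mid   # Newton trial
        if not (lo < x < hi):
            x = mid                                           # fall back: bisect
        fx = p(x)
        if fx == 0:
            return (x, x)
        if p(lo) * fx < 0:
            hi = x
        else:
            lo = x
    return (lo, hi)

print(refine(lambda t: t * t - 2, lambda t: 2 * t, 1.0, 2.0, 1e-12))
```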
|
{
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_3",
"@cite_16",
"@cite_13"
],
"mid": [
"2963548828",
"2134534233",
"",
"1664198596",
"2097541598"
],
"abstract": [
"Abstract In this paper, we give improved bounds for the computational complexity of computing with planar algebraic curves. More specifically, for arbitrary coprime polynomials f , g ∈ Z [ x , y ] and an arbitrary polynomial h ∈ Z [ x , y ] , each of total degree less than n and with integer coefficients of absolute value less than 2 τ , we show that each of the following problems can be solved in a deterministic way with a number of bit operations bounded by O ( n 6 + n 5 τ ) , where we ignore polylogarithmic factors in n and τ : • The computation of isolating regions in C 2 for all complex solutions of the system f = g = 0 , • the computation of a separating form for the solutions of f = g = 0 , • the computation of the sign of h at all real valued solutions of f = g = 0 , and • the computation of the topology of the planar algebraic curve C defined as the real valued vanishing set of the polynomial f . Our bound improves upon the best currently known bounds for the first three problems by a factor of n 2 or more and closes the gap to the state-of-the-art randomized complexity for the last problem.",
"We present an algorithm for isolating all roots of an arbitrary complex polynomial p that also works in the presence of multiple roots provided that (1) the number of distinct roots is given as part of the input and (2) the algorithm can ask for arbitrarily good approximations of the coefficients of p. The algorithm outputs pairwise disjoint disks each containing one of the distinct roots of p and the multiplicity of the root contained in the disk. The algorithm uses approximate factorization as a subroutine. For the case where Pan's algorithm (Pan, 2002) is used for the factorization, we derive complexity bounds for the problems of isolating and refining all roots, which are stated in terms of the geometric locations of the roots only. Specializing the latter bounds to a polynomial of degree d and with integer coefficients of bitsize less than @t, we show that [email protected]?(d^3+d^[email protected][email protected]) bit operations are sufficient to compute isolating disks of size less than 2^-^@k for all roots of p, where @k is an arbitrary positive integer. In addition, we apply our root isolation algorithm to a recent algorithm for computing the topology of a real planar algebraic curve specified as the zero set of a bivariate integer polynomial and for isolating the real solutions of a bivariate polynomial system. For polynomials of degree n and bitsize @t, we improve the currently best running time from [email protected]?(n^[email protected]+n^[email protected]^2) (deterministic) to [email protected]?(n^6+n^[email protected]) (randomized) for topology computation and from [email protected]?(n^8+n^[email protected]) (deterministic) to [email protected]?(n^6+n^[email protected]) (randomized) for solving bivariate systems.",
"",
"We introduce a new approach to isolate the real roots of a square-free polynomial @math with real coefficients. It is assumed that each coefficient of @math can be approximated to any specified error bound. The presented method is exact, complete and deterministic. Due to its similarities to the Descartes method, we also consider it practical and easy to implement. Compared to previous approaches, our new method achieves a significantly better bit complexity. It is further shown that the hardness of isolating the real roots of @math is exclusively determined by the geometry of the roots and not by the complexity or the size of the coefficients. For the special case where @math has integer coefficients of maximal bitsize @math , our bound on the bit complexity writes as @math which improves the best bounds known for existing practical algorithms by a factor of @math . The crucial idea underlying the new approach is to run an approximate version of the Descartes method, where, in each subdivision step, we only consider approximations of the intermediate results to a certain precision. We give an upper bound on the maximal precision that is needed for isolating the roots of @math . For integer polynomials, this bound is by a factor @math lower than that of the precision needed when using exact arithmetic explaining the improved bound on the bit complexity.",
"Abstract The subject of this paper is fast numerical algorithms for factoring univariate polynomials with complex coefficients and for computing partial fraction decompositions (PFDs) of rational functions in C ( z ). Numerically stable and computationally feasible versions of PFD are specified first for the special case of rational functions with all singularities in the unit disk (the “bounded case”) and then for rational functions with arbitrarily distributed singularities. Two major algorithms for computing PFDs are presented: The first one is an extension of the “splitting circle method” by A. Schonhage (“The Fundamental Theorem of Algebra in Terms of Computational Complexity,” Technical Report, Univ. Tubingen, 1982) for factoring polynomials in C [ z ] to an algorithm for PFD. The second algorithm is a Newton iteration for simultaneously improving the accuracy of all factors in an approximate factorization of a polynomial resp. all partial fractions of an approximate PFD of a rational function. Algorithmically useful starting value conditions for the Newton algorithm are provided. Three subalgorithms are of independent interest. They compute the product of a sequence of polynomials, the sum of a sequence of rational functions, and the modular representation of a polynomial. All algorithms are described in great detail, and numerous technical auxiliaries are provided which are also useful for the design and analysis of other algorithms in computational complex analysis. Combining the splitting circle method with simultaneous Newton iteration yields favourable time bounds (measured in bit operations) for PFD, polynomial factoring, and root calculation. In particular, the time bounds for computing high accuracy PFDs, high accuracy factorizations, and high accuracy approximations for zeros of squarefree polynomials are linear in the output size (and hence optimal) up to logarithmic factors."
]
}
|
1308.4088
|
2952478211
|
Computing the roots of a univariate polynomial is a fundamental and long-studied problem of computational algebra with applications in mathematics, engineering, computer science, and the natural sciences. For isolating as well as for approximating all complex roots, the best algorithm known is based on an almost optimal method for approximate polynomial factorization, introduced by Pan in 2002. Pan's factorization algorithm goes back to the splitting circle method from Schoenhage in 1982. The main drawbacks of Pan's method are that it is quite involved and that all roots have to be computed at the same time. For the important special case, where only the real roots have to be computed, much simpler methods are used in practice; however, they considerably lag behind Pan's method with respect to complexity. In this paper, we resolve this discrepancy by introducing a hybrid of the Descartes method and Newton iteration, denoted ANEWDSC, which is simpler than Pan's method, but achieves a run-time comparable to it. Our algorithm computes isolating intervals for the real roots of any real square-free polynomial, given by an oracle that provides arbitrarily good approximations of the polynomial's coefficients. ANEWDSC can also be used to only isolate the roots in a given interval and to refine the isolating intervals to an arbitrarily small size; it achieves near-optimal complexity for the latter task.
|
Very recent work @cite_25 on isolating the real roots of a sparse integer polynomial @math makes crucial use of a slight modification of the subroutine Newton-Test proposed in this paper. There, it is used to refine an isolating interval @math for a root of @math in a number of arithmetic operations that is nearly linear in the number of roots that are close to @math and polynomial in @math , where @math and @math denotes the number of non-vanishing coefficients of @math . This eventually yields the first real root isolation algorithm that needs only a number of arithmetic operations over the rationals that is polynomial in the input size of the sparse representation of @math . Furthermore, for very sparse polynomials (i.e. @math with @math a constant), the algorithm from @cite_25 uses only @math bit operations to isolate all real roots of @math and is thus near-optimal.
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2950326154"
],
"abstract": [
"Let @math be an arbitrary polynomial of degree @math with @math non-zero integer coefficients of absolute value less than @math . In this paper, we answer the open question whether the real roots of @math can be computed with a number of arithmetic operations over the rational numbers that is polynomial in the input size of the sparse representation of @math . More precisely, we give a deterministic, complete, and certified algorithm that determines isolating intervals for all real roots of @math with @math many exact arithmetic operations over the rational numbers. When using approximate but certified arithmetic, the bit complexity of our algorithm is bounded by @math , where @math means that we ignore logarithmic. Hence, for sufficiently sparse polynomials (i.e. @math for a positive constant @math ), the bit complexity is @math . We also prove that the latter bound is optimal up to logarithmic factors."
]
}
|
1308.3923
|
95053101
|
Many native ASP solvers exploit unfounded sets to compute consequences of a logic program via some form of well-founded negation, but disregard its contrapositive, well-founded justification (WFJ), due to computational cost. However, we demonstrate that this can hinder propagation of many relevant conditions such as reachability. In order to perform WFJ with low computational cost, we devise a method that approximates its consequences by computing dominators in a flowgraph, a problem for which linear-time algorithms exist. Furthermore, our method allows for additional unfounded set inference, called well-founded domination (WFD). We show that the effect of WFJ and WFD can be simulated for important classes of logic programs that include reachability. This paper is a corrected and extended version of a paper published at the 12th International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR 2013). It has been adapted to exclude Theorem 10 and its consequences, but provides all missing proofs.
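Since the key device in this abstract is the reduction to dominators in a flowgraph, the following minimal sketch (ours; the paper relies on linear-time dominator algorithms, whereas this quadratic fixpoint version is only the easiest to read) shows the classic iterative computation dom(n) = {n} united with the intersection of dom(p) over all predecessors p of n.

```python
def dominators(succ, entry):
    """dom[n] = set of nodes lying on every path from entry to n (including n).
    Computed as the fixpoint of dom(n) = {n} | intersection of dom(p) over preds p."""
    nodes = set(succ)
    preds = {n: set() for n in nodes}
    for n, ss in succ.items():
        for s in ss:
            preds[s].add(n)
    dom = {n: set(nodes) for n in nodes}   # start from the top element
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = {n} | (set.intersection(*(dom[p] for p in preds[n]))
                         if preds[n] else set())
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# Diamond with a shared join: 'a' and 'b' dominate 'e', but neither 'c' nor 'd' does.
g = {'a': ['b'], 'b': ['c', 'd'], 'c': ['e'], 'd': ['e'], 'e': []}
print(dominators(g, 'a')['e'])  # {'a', 'b', 'e'}
```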
|
A straightforward way of computing answer sets of logic programs is a reduction to CNF-SAT. This may require the introduction of additional atoms. As shown by Lifschitz and Razborov @cite_13 , it is unlikely that, in general, a polynomial-size translation from ASP to CNF-SAT can avoid introducing additional atoms. Evidence is provided by the encoding of Lin and Zhao @cite_16 , which has exponential space complexity. Another result, shown by Niemelä @cite_10 , is that ASP cannot be translated into CNF-SAT in a faithful and modular way. Reductions based on level-mappings devised in @cite_18 are non-modular but can be computed systematically, using only sub-quadratic space. An advantage of native ASP solvers like @cite_17 , @cite_9 , and @cite_0 over SAT-based systems @cite_4 @cite_18 @cite_16 , however, is that they can potentially propagate more consequences, e.g., by using our techniques, and do so faster.
|
{
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_4",
"@cite_9",
"@cite_0",
"@cite_16",
"@cite_10",
"@cite_17"
],
"mid": [
"2079430756",
"89328955",
"2100699693",
"1976055110",
"2011124182",
"2004414305",
"2152131859",
""
],
"abstract": [
"A theorem by Lin and Zhao shows how to turn any nondisjunctive logic program, understood in accordance with the answer set semantics, into an equivalent set of propositional formulas. The set of formulas generated by this process can be significantly larger than the original program. In this article we show (assuming P n NC1 s poly, a conjecture from the theory of computational complexity that is widely believed to be true) that this is inevitable: any equivalent translation from logic programs to propositional formulas involves a significant increase in size.",
"Propositional satisfiability (SAT) solvers provide a promising computational platform for logic programs under the stable model semantics. Computing stable models of a logic program using a SAT solver presumes translating the program into a set of clauses in the DIMACS format which is accepted by most SAT solvers as input. In this paper, we present succinct translations from programs with choice rules, cardinality rules, and weight rules--also known as SMODELS programs--to sets of clauses. These translations enable us to harness SAT solvers as black boxes to the task of computing stable models for logic programs generated by any SMODELS compatible grounder such as LPARSE or GRINGO. In the experimental part of this paper, we evaluate the potential of SAT solver technology in finding stable models using NP-complete benchmark problems employed in the Second Answer Set Programming Competition.",
"Answer set programming (ASP) emerged in the late 1990s as a new logic programming paradigm that has been successfully applied in various application domains. Also motivated by the availability of efficient solvers for propositional satisfiability (SAT), various reductions from logic programs to SAT were introduced. All these reductions, however, are limited to a subclass of logic programs or introduce new variables or may produce exponentially bigger propositional formulas. In this paper, we present a SAT-based procedure, called ASPSAT, that (1) deals with any (nondisjunctive) logic program, (2) works on a propositional formula without additional variables (except for those possibly introduced by the clause form transformation), and (3) is guaranteed to work in polynomial space. From a theoretical perspective, we prove soundness and completeness of ASPSAT. From a practical perspective, we have (1) implemented ASPSAT in Cmodels, (2) extended the basic procedures in order to incorporate the most popular SAT reasoning strategies, and (3) conducted an extensive comparative analysis involving other state-of-the-art answer set solvers. The experimental analysis shows that our solver is competitive with the other solvers we considered and that the reasoning strategies that work best on small but hard' problems are ineffective on big but easy' problems and vice versa.",
"Disjunctive Logic Programming (DLP) is an advanced formalism for knowledge representation and reasoning, which is very expressive in a precise mathematical sense: it allows one to express every property of finite structures that is decidable in the complexity class ΣP2 (NPNP). Thus, under widely believed assumptions, DLP is strictly more expressive than normal (disjunction-free) logic programming, whose expressiveness is limited to properties decidable in NP. Importantly, apart from enlarging the class of applications which can be encoded in the language, disjunction often allows for representing problems of lower complexity in a simpler and more natural fashion.This article presents the DLV system, which is widely considered the state-of-the-art implementation of disjunctive logic programming, and addresses several aspects. As for problem solving, we provide a formal definition of its kernel language, function-free disjunctive logic programs (also known as disjunctive datalog), extended by weak constraints, which are a powerful tool to express optimization problems. We then illustrate the usage of DLV as a tool for knowledge representation and reasoning, describing a new declarative programming methodology which allows one to encode complex problems (up to ΔP3-complete problems) in a declarative fashion. On the foundational side, we provide a detailed analysis of the computational complexity of the language of DLV, and by deriving new complexity results we chart a complete picture of the complexity of this language and important fragments thereof.Furthermore, we illustrate the general architecture of the DLV system, which has been influenced by these results. As for applications, we overview application front-ends which have been developed on top of DLV to solve specific knowledge representation tasks, and we briefly describe the main international projects investigating the potential of the system for industrial exploitation. Finally, we report about thorough experimentation and benchmarking, which has been carried out to assess the efficiency of the system. The experimental results confirm the solidity of DLV and highlight its potential for emerging application areas like knowledge management and information integration.",
"A novel logic program like language, weight constraint rules, is developed for answer set programming purposes. It generalizes normal logic programs by allowing weight constraints in place of literals to represent, e.g., cardinality and resource constraints and by providing optimization capabilities. A declarative semantics is developed which extends the stable model semantics of normal programs. The computational complexity of the language is shown to be similar to that of normal programs under the stable model semantics. A simple embedding of general weight constraint rules to a small subclass of the language called basic constraint rules is devised. An implementation of the language, the SMODELS system, is developed based on this embedding. It uses a two level architecture consisting of a front-end and a kernel language implementation. The front-end allows restricted use of variables and functions and compiles general weight constraint rules to basic constraint rules. A major part of the work is the development of an efficient search procedure for computing stable models for this kernel language. The procedure is compared with and empirically tested against satisfiability checkers and an implementation of the stable model semantics. It offers a competitive implementation of the stable model semantics for normal programs and attractive performance for problems where the new types of rules provide a compact representation.",
"We propose a new translation from normal logic programs with constraints under the answer set semantics to propositional logic. Given a normal logic program, we show that by adding, for each loop in the program, a corresponding loop formula to the program's completion, we obtain a one-to-one correspondence between the answer sets of the program and the models of the resulting propositional theory. In the worst case, there may be an exponential number of loops in a logic program. To address this problem, we propose an approach that adds loop formulas a few at a time, selectively. Based on these results, we implement a system called ASSAT(X), depending on the SAT solver X used, for computing one answer set of a normal logic program with constraints. We test the system on a variety of benchmarks including the graph coloring, the blocks world planning, and Hamiltonian Circuit domains. Our experimental results show that in these domains, for the task of generating one answer set of a normal logic program, our system has a clear edge over the state-of-art answer set programming systems Smodels and DLV.",
"Logic programming with the stable model semantics is put forward as a novel constraint programming paradigm. This paradigm is interesting because it bring advantages of logic programming based knowledge representation techniques to constraint programming and because implementation methods for the stable model semantics for ground (variabledfree) programs have advanced significantly in recent years. For a program with variables these methods need a grounding procedure for generating a variabledfree program. As a practical approach to handling the grounding problem a subclass of logic programs, domain restricted programs, is proposed. This subclass enables efficient grounding procedures and serves as a basis for integrating builtdin predicates and functions often needed in applications. It is shown that the novel paradigm embeds classical logical satisfiability and standard (finite domain) constraint satisfaction problems but seems to provide a more expressive framework from a knowledge representation point of view. The first steps towards a programming methodology for the new paradigm are taken by presenting solutions to standard constraint satisfaction problems, combinatorial graph problems and planning problems. An efficient implementation of the paradigm based on domain restricted programs has been developed. This is an extension of a previous implementation of the stable model semantics, the Smodels system, and is publicly available. It contains, e.g., builtdin integer arithmetic integrated to stable model computation. The implementation is described briefly and some test results illustrating the current level of performance are reported.",
""
]
}
|
1308.3568
|
2950158465
|
This paper is motivated by the comparison of genetic networks based on microarray samples. The aim is to test whether the differences observed between two inferred Gaussian graphical models come from real differences or arise from estimation uncertainties. Adopting a neighborhood approach, we consider a two-sample linear regression model with random design and propose a procedure to test whether these two regressions are the same. Relying on multiple testing and variable selection strategies, we develop a testing procedure that applies to high-dimensional settings where the number of covariates @math is larger than the numbers of observations @math and @math of the two samples. Both type I and type II errors are explicitly controlled from a non-asymptotic perspective and the test is proved to be minimax adaptive to the sparsity. The performance of the test is evaluated on simulated data. Moreover, we illustrate how this procedure can be used to compare genetic networks on the Hess breast cancer microarray dataset.
|
The literature on high-dimensional two-sample tests remains sparse. In the context of high-dimensional two-sample comparison of means, @cite_3 @cite_42 @cite_34 @cite_2 have introduced global tests to compare the means of two high-dimensional Gaussian vectors with unknown variance. Recently, @cite_24 @cite_8 developed two-sample tests for covariance matrices of two high-dimensional vectors.
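As one concrete instance of the global mean tests mentioned above, the random-projection approach reduces the p >> n problem to a classical one: project both samples to k dimensions with a random matrix and apply the two-sample Hotelling T^2 test in the projected space. A minimal sketch (ours, simplified; it assumes Gaussian data and n1 + n2 - 2 > k; all names are ours):

```python
import numpy as np
from scipy import stats

def projected_hotelling(X, Y, k, rng=np.random.default_rng(0)):
    """Two-sample mean test via random projection + Hotelling T^2; returns a p-value."""
    n1, p = X.shape
    n2, _ = Y.shape
    P = rng.standard_normal((p, k))        # random projection to k << n dimensions
    Xk, Yk = X @ P, Y @ P
    d = Xk.mean(axis=0) - Yk.mean(axis=0)
    # Pooled sample covariance of the projected data
    S = ((n1 - 1) * np.cov(Xk.T) + (n2 - 1) * np.cov(Yk.T)) / (n1 + n2 - 2)
    T2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
    # Under H0: ((n1+n2-k-1) / (k (n1+n2-2))) * T^2 follows F(k, n1+n2-k-1)
    f = T2 * (n1 + n2 - k - 1) / (k * (n1 + n2 - 2))
    return stats.f.sf(f, k, n1 + n2 - k - 1)

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 500))         # p = 500 far exceeds n1 = n2 = 40
Y = rng.standard_normal((40, 500)) + 0.3   # mean shifted in every coordinate
print(projected_hotelling(X, Y, k=10))     # small p-value expected under this shift
```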
|
{
"cite_N": [
"@cite_8",
"@cite_42",
"@cite_3",
"@cite_24",
"@cite_2",
"@cite_34"
],
"mid": [
"2091825839",
"2050498681",
"",
"1992362262",
"2950371686",
"2081874429"
],
"abstract": [
"We propose two tests for the equality of covariance matrices between two high-dimensional populations. One test is on the whole variance-covariance matrices, and the other is on offdiagonal sub-matrices which define the covariance between two non-overlapping segments of the high-dimensional random vectors. The tests are applicable (i) when the data dimension is much larger than the sample sizes, namely the “large p, small n” situations and (ii) without assuming parametric distributions for the two populations. These two aspects surpass the capability of the conventional likelihood ratio test. The proposed tests can be used to test on covariances associated with gene ontology terms.",
"In this paper, we consider a test for the mean vector of independent and identically distributed multivariate normal random vectors where the dimension p is larger than or equal to the number of observations N. This test is invariant under scalar transformations of each component of the random vector. Theories and simulation results show that the proposed test is superior to other two tests available in the literature. Interest in such significance test for high-dimensional data is motivated by DNA microarrays. However, the methodology is valid for any application which involves high-dimensional data.",
"",
"In the high-dimensional setting, this article considers three interrelated problems: (a) testing the equality of two covariance matrices and ; (b) recovering the support of ; and (c) testing the equality of and row by row. We propose a new test for testing the hypothesis H 0: and investigate its theoretical and numerical properties. The limiting null distribution of the test statistic is derived and the power of the test is studied. The test is shown to enjoy certain optimality and to be especially powerful against sparse alternatives. The simulation results show that the test significantly outperforms the existing methods both in terms of size and power. Analysis of a prostate cancer dataset is carried out to demonstrate the application of the testing procedures. When the null hypothesis of equal covariance matrices is rejected, it is often of significant interest to further investigate how they differ from each other. Motivated by applications in genomics, we also consider recovering the support of and ...",
"We consider the hypothesis testing problem of detecting a shift between the means of two multivariate normal distributions in the high-dimensional setting, allowing for the data dimension p to exceed the sample size n. Specifically, we propose a new test statistic for the two-sample test of means that integrates a random projection with the classical Hotelling T^2 statistic. Working under a high-dimensional framework with (p,n) tending to infinity, we first derive an asymptotic power function for our test, and then provide sufficient conditions for it to achieve greater power than other state-of-the-art tests. Using ROC curves generated from synthetic data, we demonstrate superior performance against competing tests in the parameter regimes anticipated by our theoretical results. Lastly, we illustrate an advantage of our procedure's false positive rate with comparisons on high-dimensional gene expression data involving the discrimination of different types of cancer.",
"We proposed a two sample test for means of high dimensional data when the data dimension is much larger than the sample size. The classical Hotelling's @math test does not work for this large p, small n\" situation. The proposed test does not require explicit conditions on the relationship between the data dimension and sample size. This offers much flexibility in analyzing high dimensional data. An application of the proposed test is in testing significance for sets of genes, which we demonstrate in an empirical study on a Leukemia data set."
]
}
|
1308.3568
|
2950158465
|
This paper is motivated by the comparison of genetic networks based on microarray samples. The aim is to test whether the differences observed between two inferred Gaussian graphical models come from real differences or arise from estimation uncertainties. Adopting a neighborhood approach, we consider a two-sample linear regression model with random design and propose a procedure to test whether these two regressions are the same. Relying on multiple testing and variable selection strategies, we develop a testing procedure that applies to high-dimensional settings where the number of covariates @math is larger than the numbers of observations @math and @math of the two samples. Both type I and type II errors are explicitly controlled from a non-asymptotic perspective and the test is proved to be minimax adaptive to the sparsity. The performance of the test is evaluated on simulated data. Moreover, we illustrate how this procedure can be used to compare genetic networks on the Hess breast cancer microarray dataset.
|
In contrast, we build our testing strategy upon the global approach developed by @cite_48 and @cite_47 . A more detailed comparison of @cite_26 @cite_27 with our contribution is deferred to the simulations (Section ) and the discussion (Section ).
|
{
"cite_N": [
"@cite_48",
"@cite_47",
"@cite_26",
"@cite_27"
],
"mid": [
"2116029813",
"2075010327",
"1516142570",
""
],
"abstract": [
"We propose a new test, based on model selection methods, for testing that the expectation of a Gaussian vector with n independent components belongs to a linear subspace of R n against a nonparametric alternative. The testing procedure is available when the variance of the observations is unknown and does not depend on any prior information on the alternative. The properties of the test are nonasymptotic and we prove that the test is rate optimal [up to a possible log(n) factor] over various classes of alternatives simultaneously. We also provide a simulation study in order to evaluate the procedure when the purpose is to test goodness-of-fit in a regression model. 1. Introduction. We consider the regression model",
"Let @math be a zero mean Gaussian vector and @math be a subset of @math . Suppose we are given @math i.i.d. replications of the vector @math . We propose a new test for testing that @math is independent of @math conditionally to @math against the general alternative that it is not. This procedure does not depend on any prior information on the covariance of @math or the variance of @math and applies in a high-dimensional setting. It straightforwardly extends to test the neighbourhood of a Gaussian graphical model. The procedure is based on a model of Gaussian regression with random Gaussian covariates. We give non asymptotic properties of the test and we prove that it is rate optimal (up to a possible @math factor) over various classes of alternatives under some additional assumptions. Besides, it allows us to derive non asymptotic minimax rates of testing in this setting. Finally, we carry out a simulation study in order to evaluate the performance of our procedure.",
"We propose novel methodology for testing equality of model parameters between two high-dimensional populations. The technique is very general and applicable to a wide range of models. The method is based on sample splitting: the data is split into two parts; on the first part we reduce the dimensionality of the model to a manageable size; on the second part we perform significance testing (p-value calculation) based on a restricted likelihood ratio statistic. Assuming that both populations arise from the same distribution, we show that the restricted likelihood ratio statistic is asymptotically distributed as a weighted sum of chi-squares with weights which can be efficiently estimated from the data. In high-dimensional problems, a single data split can result in a \"p-value lottery\". To ameliorate this effect, we iterate the splitting process and aggregate the resulting p-values. This multi-split approach provides improved p-values. We illustrate the use of our general approach in two-sample comparisons of high-dimensional regression models (\"differential regression\") and graphical models (\"differential network\"). In both cases we show results on simulated data as well as real data from recent, high-throughput cancer studies.",
""
]
}
|
1308.2921
|
2953385865
|
Participatory sensing is emerging as an innovative computing paradigm that targets the ubiquity of always-connected mobile phones and their sensing capabilities. In this context, a multitude of pioneering applications increasingly carry out pervasive collection and dissemination of information and environmental data, such as, traffic conditions, pollution, temperature, etc. Participants collect and report measurements from their mobile devices and entrust them to the cloud to be made available to applications and users. Naturally, due to the personal information associated to the reports (e.g., location, movements, etc.), a number of privacy concerns need to be taken into account prior to a large-scale deployment of these applications. Motivated by the need for privacy protection in Participatory Sensing, this work presents PEPSI: a Privacy-Enhanced Participatory Sensing Infrastructure. We explore realistic architectural assumptions and a minimal set of formal requirements aiming at protecting privacy of both data producers and consumers. We propose two instantiations that attain privacy guarantees with provable security at very low additional computational cost and almost no extra communication overhead.
|
Participatory sensing has attracted great interest from the research community in recent years. Security and privacy challenges have been widely discussed in @cite_39 , @cite_1 , @cite_21 , @cite_12 , but none of these works proposes actual solutions. To the best of our knowledge, AnonySense @cite_17 (later extended in @cite_24 and @cite_40 ) is the only result to address privacy-related problems; hence, it is the work most closely related to ours. AnonySense leverages Mix Network techniques @cite_31 and provides @math -anonymity @cite_16 , while @cite_40 shows how to modify the original AnonySense to achieve @math -diversity @cite_26 .
|
{
"cite_N": [
"@cite_26",
"@cite_21",
"@cite_1",
"@cite_39",
"@cite_24",
"@cite_40",
"@cite_31",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"2134167315",
"2060610945",
"2067917276",
"2034401621",
"2151940804",
"2109784850",
"2103647628",
"2159024459",
"2014480560",
"2146697949"
],
"abstract": [
"Publishing data about individuals without revealing sensitive information about them is an important problem. In recent years, a new definition of privacy called k-anonymity has gained popularity. In a k-anonymized dataset, each record is indistinguishable from at least k − 1 other records with respect to certain identifying attributes. In this article, we show using two simple attacks that a k-anonymized dataset has some subtle but severe privacy problems. First, an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes. This is a known problem. Second, attackers often have background knowledge, and we show that k-anonymity does not guarantee privacy against attackers using background knowledge. We give a detailed analysis of these two attacks, and we propose a novel and powerful privacy criterion called e-diversity that can defend against such attacks. In addition to building a formal foundation for e-diversity, we show in an experimental evaluation that e-diversity is practical and can be implemented efficiently.",
"",
"We study the security challenges that arise in opportunistic people-centric sensing, a new sensing paradigm leveraging humans as part of the sensing infrastructure. Most prior sensor-network research has focused on collecting and processing environmental data using a static topology and an application-aware infrastructure, whereas opportunistic sensing involves collecting, storing, processing and fusing large volumes of data related to everyday human activities. This highly dynamic and mobile setting, where humans are the central focus, presents new challenges for information security, because data originates from sensors carried by people— not tiny sensors thrown in the forest or attached to animals. In this paper we aim to instigate discussion of this critical issue, because opportunistic people-centric sensing will never succeed without adequate provisions for security and privacy. To that end, we outline several important challenges and suggest general solutions that hold promise in this new sensing paradigm.",
"Participatory sensing technologies could improve our lives and our communities, but at what cost to our privacy?",
"We describe AnonySense, a privacy-aware system for realizing pervasive applications based on collaborative, opportunistic sensing by personal mobile devices. AnonySense allows applications to submit sensing tasks to be distributed across participating mobile devices, later receiving verified, yet anonymized, sensor data reports back from the field, thus providing the first secure implementation of this participatory sensing model. We describe our security goals, threat model, and the architecture and protocols of AnonySense. We also describe how AnonySense can support extended security features that can be useful for different applications. We evaluate the security and feasibility of AnonySense through security analysis and prototype implementation. We show the feasibility of our approach through two plausible applications: a Wi-Fi rogue access point detector and a lost-object finder.",
"The ubiquity of mobile devices has brought forth the concept of participatory sensing, whereby ordinary citizens can now contribute and share information from the urban environment. However, such applications introduce a key research challenge: preserving the privacy of the individuals contributing data. In this paper, we study two different privacy concepts, k-anonymity and l-diversity, and demonstrate how their privacy models can be applied to protect users' spatial and temporal privacy in the context of participatory sensing. The first part of the paper focuses on schemes implementing k-anonymity. We propose the use of microaggregation, a technique used for facilitating disclosure control in databases, as an alternate to tessellation, which is the current state-of-the-art for location privacy in participatory sensing applications. We conduct a comparative study of the two techniques and demonstrate that each has its advantage in certain mutually exclusive situations. We then propose the Hybrid Variable size Maximum Distance to Average Vector (Hybrid-VMDAV) algorithm, which combines the positive aspects of microaggregation and tessellation. The second part of the paper addresses the limitations of the k-anonymity privacy model. We employ the principle of l-diversity and propose an l-diverse version of VMDAV (LD-VMDAV) as an improvement. In particular, LD-VMDAV is robust in situations where an adversary may have gained partial knowledge about certain attributes of the victim. We evaluate the performances of our proposed techniques using real-world traces. Our results show that Hybrid-VMDAV improves the percentage of positive identifications made by an application server by up to 100 and decreases the amount of information loss by about 40 . We empirically show that LD-VMDAV always outperforms its k-anonymity counterpart. In particular, it improves the ability of the applications to accurately interpret the anonymized location and time included in user reports. Our studies also confirm that perturbing the true locations of the users with random Gaussian noise can provide an extra layer of protection, while causing little impact on the application performance.",
"A technique based on public key cryptography is presented that allows an electronic mail system to hide who a participant communicates with as well as the content of the communication - in spite of an unsecured underlying telecommunication system. The technique does not require a universally trusted authority. One correspondent can remain anonymous to a second, while allowing the second to respond via an untraceable return address. The technique can also be used to form rosters of untraceable digital pseudonyms from selected applications. Applicants retain the exclusive ability to form digital signatures corresponding to their pseudonyms. Elections in which any interested party can verify that the ballots have been properly counted are possible if anonymously mailed ballots are signed with pseudonyms from a roster of registered voters. Another use allows an individual to correspond with a record-keeping organization under a unique pseudonym, which appears in a roster of acceptable clients.",
"Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k- anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.",
"Abstract: The presence of multimodal sensors on current mobile phones enables a broad range of novel mobile applications. Environmental and user-centric sensor data of unprecedented quantity and quality can be captured and reported by a possible user base of billions of mobile phone subscribers worldwide. The strong focus on the collection of detailed sensor data may however compromise user privacy in various regards, e.g., by tracking a user's current location. In this survey, we identify the sensing modalities used in current participatory sensing applications, and assess the threats to user privacy when personal information is sensed and disclosed. We outline how privacy aspects are addressed in existing sensing applications, and determine the adequacy of the solutions under real-world conditions. Finally, we present countermeasures from related research fields, and discuss their applicability in participatory sensing scenarios. Based on our findings, we identify open issues and outline possible solutions to guarantee user privacy in participatory sensing.",
"Personal mobile devices are increasingly equipped with the capability to sense the physical world (through cameras, microphones, and accelerometers, for example) and the, network world (with Wi-Fi and Bluetooth interfaces). Such devices offer many new opportunities for cooperative sensing applications. For example, users' mobile phones may contribute data to community-oriented information services, from city-wide pollution monitoring to enterprise-wide detection of unauthorized Wi-Fi access points. This people-centric mobile-sensing model introduces a new security challenge in the design of mobile systems: protecting the privacy of participants while allowing their devices to reliably contribute high-quality data to these large-scale applications. We describe AnonySense, a privacy-aware architecture for realizing pervasive applications based on collaborative, opportunistic sensing by personal mobile devices. AnonySense allows applications to submit sensing tasks that will be distributed across anonymous participating mobile devices, later receiving verified, yet anonymized, sensor data reports back from the field, thus providing the first secure implementation of this participatory sensing model. We describe our trust model, and the security properties that drove the design of the AnonySense system. We evaluate our prototype implementation through experiments that indicate the feasibility of this approach, and through two applications: a Wi-Fi rogue access point detector and a lost-object finder."
]
}
|
1308.2921
|
2953385865
|
Participatory sensing is emerging as an innovative computing paradigm that targets the ubiquity of always-connected mobile phones and their sensing capabilities. In this context, a multitude of pioneering applications increasingly carry out pervasive collection and dissemination of information and environmental data, such as, traffic conditions, pollution, temperature, etc. Participants collect and report measurements from their mobile devices and entrust them to the cloud to be made available to applications and users. Naturally, due to the personal information associated to the reports (e.g., location, movements, etc.), a number of privacy concerns need to be taken into account prior to a large-scale deployment of these applications. Motivated by the need for privacy protection in Participatory Sensing, this work presents PEPSI: a Privacy-Enhanced Participatory Sensing Infrastructure. We explore realistic architectural assumptions and a minimal set of formal requirements aiming at protecting privacy of both data producers and consumers. We propose two instantiations that attain privacy guarantees with provable security at very low additional computational cost and almost no extra communication overhead.
|
Both @cite_17 and @cite_40 guarantee report integrity using group signatures (i.e., all sensors share the same group key to sign reports) and provide only limited confidentiality, as reports are encrypted under the public key of a trusted party responsible for collecting reports and distributing them to queriers.
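A rough sketch of that data flow (ours, with hypothetical names; real group signatures have no one-line analogue, so a MAC under a key shared by all sensors stands in for the shared group signing key, which keeps senders indistinguishable to the collector but, unlike a true group signature, lets any member forge reports of another):

```python
import hmac, hashlib, json
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

GROUP_KEY = b"shared-by-all-registered-sensors"   # stand-in for a group signature key

collector_sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
collector_pk = collector_sk.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def seal_report(report: dict) -> tuple[bytes, bytes]:
    """Sensor side: encrypt for the collector, tag as 'some group member'."""
    blob = json.dumps(report).encode()      # short report; fits OAEP's size limit
    ct = collector_pk.encrypt(blob, oaep)
    return ct, hmac.new(GROUP_KEY, ct, hashlib.sha256).digest()

def open_report(ct: bytes, tag: bytes) -> dict:
    """Collector side: verify the group tag, then decrypt."""
    assert hmac.compare_digest(tag, hmac.new(GROUP_KEY, ct, hashlib.sha256).digest())
    return json.loads(collector_sk.decrypt(ct, oaep))

ct, tag = seal_report({"loc": "cell-17", "temp_c": 20.5})
print(open_report(ct, tag))
```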
|
{
"cite_N": [
"@cite_40",
"@cite_17"
],
"mid": [
"2109784850",
"2146697949"
],
"abstract": [
"The ubiquity of mobile devices has brought forth the concept of participatory sensing, whereby ordinary citizens can now contribute and share information from the urban environment. However, such applications introduce a key research challenge: preserving the privacy of the individuals contributing data. In this paper, we study two different privacy concepts, k-anonymity and l-diversity, and demonstrate how their privacy models can be applied to protect users' spatial and temporal privacy in the context of participatory sensing. The first part of the paper focuses on schemes implementing k-anonymity. We propose the use of microaggregation, a technique used for facilitating disclosure control in databases, as an alternate to tessellation, which is the current state-of-the-art for location privacy in participatory sensing applications. We conduct a comparative study of the two techniques and demonstrate that each has its advantage in certain mutually exclusive situations. We then propose the Hybrid Variable size Maximum Distance to Average Vector (Hybrid-VMDAV) algorithm, which combines the positive aspects of microaggregation and tessellation. The second part of the paper addresses the limitations of the k-anonymity privacy model. We employ the principle of l-diversity and propose an l-diverse version of VMDAV (LD-VMDAV) as an improvement. In particular, LD-VMDAV is robust in situations where an adversary may have gained partial knowledge about certain attributes of the victim. We evaluate the performances of our proposed techniques using real-world traces. Our results show that Hybrid-VMDAV improves the percentage of positive identifications made by an application server by up to 100 and decreases the amount of information loss by about 40 . We empirically show that LD-VMDAV always outperforms its k-anonymity counterpart. In particular, it improves the ability of the applications to accurately interpret the anonymized location and time included in user reports. Our studies also confirm that perturbing the true locations of the users with random Gaussian noise can provide an extra layer of protection, while causing little impact on the application performance.",
"Personal mobile devices are increasingly equipped with the capability to sense the physical world (through cameras, microphones, and accelerometers, for example) and the, network world (with Wi-Fi and Bluetooth interfaces). Such devices offer many new opportunities for cooperative sensing applications. For example, users' mobile phones may contribute data to community-oriented information services, from city-wide pollution monitoring to enterprise-wide detection of unauthorized Wi-Fi access points. This people-centric mobile-sensing model introduces a new security challenge in the design of mobile systems: protecting the privacy of participants while allowing their devices to reliably contribute high-quality data to these large-scale applications. We describe AnonySense, a privacy-aware architecture for realizing pervasive applications based on collaborative, opportunistic sensing by personal mobile devices. AnonySense allows applications to submit sensing tasks that will be distributed across anonymous participating mobile devices, later receiving verified, yet anonymized, sensor data reports back from the field, thus providing the first secure implementation of this participatory sensing model. We describe our trust model, and the security properties that drove the design of the AnonySense system. We evaluate our prototype implementation through experiments that indicate the feasibility of this approach, and through two applications: a Wi-Fi rogue access point detector and a lost-object finder."
]
}
|
1308.2921
|
2953385865
|
Participatory sensing is emerging as an innovative computing paradigm that targets the ubiquity of always-connected mobile phones and their sensing capabilities. In this context, a multitude of pioneering applications increasingly carry out pervasive collection and dissemination of information and environmental data, such as, traffic conditions, pollution, temperature, etc. Participants collect and report measurements from their mobile devices and entrust them to the cloud to be made available to applications and users. Naturally, due to the personal information associated to the reports (e.g., location, movements, etc.), a number of privacy concerns need to be taken into account prior to a large-scale deployment of these applications. Motivated by the need for privacy protection in Participatory Sensing, this work presents PEPSI: a Privacy-Enhanced Participatory Sensing Infrastructure. We explore realistic architectural assumptions and a minimal set of formal requirements aiming at protecting privacy of both data producers and consumers. We propose two instantiations that attain privacy guarantees with provable security at very low additional computational cost and almost no extra communication overhead.
|
There is also additional research work that focuses on related problems. For instance, @cite_20 argues that privacy issues can be addressed if each user has access to a private server (e.g., a virtual machine hosted by a cloud service) and uses it as a proxy between her sensors and applications requesting her data. Nevertheless, the feasibility of this approach in large-scale participatory sensing applications would be severely limited by the cost and availability of per-user proxies.
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"2169270531"
],
"abstract": [
"People increasingly generate content on their mobile devices and upload it to third-party services such as Facebook and Google Latitude for sharing and backup purposes. Although these services are convenient and useful, their use has important privacy implications due to their centralized nature and their acquisitions of rights to user-contributed content. This paper argues that people's interests would be be better served by uploading their data to a machine that they themselves own and control. We term these machines Virtual Individual Servers (VISs) because our preferred instantiation is a virtual machine running in a highly-available utility computing infrastructure. By using VISs, people can better protect their privacy because they retain ownership of their data and remain in control over the software and policies that determine what data is shared with whom. This paper also describes a range of applications of VIS proxies. It then presents our initial implementation and evaluation of one of these applications, a decentralized framework for mobile social services based on VISs. Our experience so far suggests that building such applications on top of the VIS concept is feasible and desirable."
]
}
|
1308.2921
|
2953385865
|
Participatory sensing is emerging as an innovative computing paradigm that targets the ubiquity of always-connected mobile phones and their sensing capabilities. In this context, a multitude of pioneering applications increasingly carry out pervasive collection and dissemination of information and environmental data, such as, traffic conditions, pollution, temperature, etc. Participants collect and report measurements from their mobile devices and entrust them to the cloud to be made available to applications and users. Naturally, due to the personal information associated to the reports (e.g., location, movements, etc.), a number of privacy concerns need to be taken into account prior to a large-scale deployment of these applications. Motivated by the need for privacy protection in Participatory Sensing, this work presents PEPSI: a Privacy-Enhanced Participatory Sensing Infrastructure. We explore realistic architectural assumptions and a minimal set of formal requirements aiming at protecting privacy of both data producers and consumers. We propose two instantiations that attain privacy guarantees with provable security at very low additional computational cost and almost no extra communication overhead.
|
Other proposals, such as @cite_29 and @cite_41 , aim at guaranteeing the integrity and authenticity of user-generated content by employing Trusted Platform Modules (TPMs).
|
{
"cite_N": [
"@cite_41",
"@cite_29"
],
"mid": [
"2036110521",
"2145060596"
],
"abstract": [
"Commodity mobile devices have been utilized as sensor nodes in a variety of domains, including citizen journalism, mobile social services, and domestic eldercare. In each of these domains, data integrity and device-owners' privacy are first-class concerns, but current approaches to secure sensing fail to balance these properties. External signing infrastructure cannot attest to the values generated by a device's sensing hardware, while trusted sensing hardware does not allow users to securely reduce the fidelity of readings in order to preserve their privacy. In this paper we examine the challenges posed by the potentially conflicting goals of data integrity and user privacy and propose a trustworthy mobile sensing platform which leverages inexpensive commodity Trusted Platform Module (TPM) hardware.",
"Grassroots Participatory Sensing empowers people to collect and share sensor data using mobile devices across many applications, spanning intelligent transportation, air quality monitoring and social networking. In this paper, we argue that the very openness of such a system makes it vulnerable to abuse by malicious users who may poison the information, collude to fabricate information, or launch Sybils to distort that information. We propose and implement a novel trusted platform module (TPM), or angel based system that addresses the problem of providing sensor data integrity. The key idea is to provide a trusted platform within each sensor device to attest the integrity of sensor readings. We argue that this localizes integrity checking to the device, rather than relying on corroboration, making the system not only simpler, but also resistant to collusion and data poisoning. A \"burned-in\" private key in the TPM prevents users from launching Sybils. We also make the case for content protection and access control mechanisms that enable users to publish sensor data streams to selected groups of people and address it using broadcast encryption techniques."
]
}
|
1308.2921
|
2953385865
|
Participatory sensing is emerging as an innovative computing paradigm that targets the ubiquity of always-connected mobile phones and their sensing capabilities. In this context, a multitude of pioneering applications increasingly carry out pervasive collection and dissemination of information and environmental data, such as, traffic conditions, pollution, temperature, etc. Participants collect and report measurements from their mobile devices and entrust them to the cloud to be made available to applications and users. Naturally, due to the personal information associated to the reports (e.g., location, movements, etc.), a number of privacy concerns need to be taken into account prior to a large-scale deployment of these applications. Motivated by the need for privacy protection in Participatory Sensing, this work presents PEPSI: a Privacy-Enhanced Participatory Sensing Infrastructure. We explore realistic architectural assumptions and a minimal set of formal requirements aiming at protecting privacy of both data producers and consumers. We propose two instantiations that attain privacy guarantees with provable security at very low additional computational cost and almost no extra communication overhead.
|
No provable privacy. User privacy in previous work (e.g., @cite_17 @cite_40 ) relies on Mix Networks @cite_31 , an anonymizing technique used to de-link submitted reports from their origin before they reach applications. In other words, a Mix Network acts as an anonymizing proxy and forwards user reports only when the set of received reports satisfies a system-defined criterion. Privacy metrics such as @math -anonymity @cite_16 or @math -diversity @cite_26 have been defined to characterize privacy through Mix Networks. For example, a Mix Network that provides @math -anonymity "batches" reports so that it is not possible to link a given report to its sender among a set of @math reports (a toy sketch of this batching behavior is given below). Clearly, anonymity is not guaranteed a priori but rather depends on the number of reports received and "mixed" by the Mix Network. Moreover, there could be scenarios where a relatively long time passes before the desired level of anonymity is reached (i.e., before "enough" reports have been collected).
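A toy sketch of the batching proxy just described (ours, with a hypothetical message format): reports are buffered until k of them have accumulated, then released shuffled and without sender identifiers, so each released report is ambiguous among at least k senders.

```python
import random

class MixBatcher:
    """Holds reports until k arrive, then flushes them shuffled and identity-free."""
    def __init__(self, k):
        self.k, self.buffer = k, []

    def submit(self, sender_id, payload):
        self.buffer.append((sender_id, payload))
        if len(self.buffer) >= self.k:
            batch = [payload for _, payload in self.buffer]  # drop sender identities
            random.shuffle(batch)                            # break submission order
            self.buffer.clear()
            return batch          # forwarded to applications as one anonymous batch
        return None               # held until k reports have accumulated

mix = MixBatcher(k=3)
for user, temp in [("alice", 21.0), ("bob", 20.5), ("carol", 21.3)]:
    out = mix.submit(user, {"temperature_c": temp})
print(out)   # a shuffled, identity-free batch of 3 reports
```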
|
{
"cite_N": [
"@cite_26",
"@cite_40",
"@cite_31",
"@cite_16",
"@cite_17"
],
"mid": [
"2134167315",
"2109784850",
"2103647628",
"2159024459",
"2146697949"
],
"abstract": [
"Publishing data about individuals without revealing sensitive information about them is an important problem. In recent years, a new definition of privacy called k-anonymity has gained popularity. In a k-anonymized dataset, each record is indistinguishable from at least k − 1 other records with respect to certain identifying attributes. In this article, we show using two simple attacks that a k-anonymized dataset has some subtle but severe privacy problems. First, an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes. This is a known problem. Second, attackers often have background knowledge, and we show that k-anonymity does not guarantee privacy against attackers using background knowledge. We give a detailed analysis of these two attacks, and we propose a novel and powerful privacy criterion called e-diversity that can defend against such attacks. In addition to building a formal foundation for e-diversity, we show in an experimental evaluation that e-diversity is practical and can be implemented efficiently.",
"The ubiquity of mobile devices has brought forth the concept of participatory sensing, whereby ordinary citizens can now contribute and share information from the urban environment. However, such applications introduce a key research challenge: preserving the privacy of the individuals contributing data. In this paper, we study two different privacy concepts, k-anonymity and l-diversity, and demonstrate how their privacy models can be applied to protect users' spatial and temporal privacy in the context of participatory sensing. The first part of the paper focuses on schemes implementing k-anonymity. We propose the use of microaggregation, a technique used for facilitating disclosure control in databases, as an alternate to tessellation, which is the current state-of-the-art for location privacy in participatory sensing applications. We conduct a comparative study of the two techniques and demonstrate that each has its advantage in certain mutually exclusive situations. We then propose the Hybrid Variable size Maximum Distance to Average Vector (Hybrid-VMDAV) algorithm, which combines the positive aspects of microaggregation and tessellation. The second part of the paper addresses the limitations of the k-anonymity privacy model. We employ the principle of l-diversity and propose an l-diverse version of VMDAV (LD-VMDAV) as an improvement. In particular, LD-VMDAV is robust in situations where an adversary may have gained partial knowledge about certain attributes of the victim. We evaluate the performances of our proposed techniques using real-world traces. Our results show that Hybrid-VMDAV improves the percentage of positive identifications made by an application server by up to 100 and decreases the amount of information loss by about 40 . We empirically show that LD-VMDAV always outperforms its k-anonymity counterpart. In particular, it improves the ability of the applications to accurately interpret the anonymized location and time included in user reports. Our studies also confirm that perturbing the true locations of the users with random Gaussian noise can provide an extra layer of protection, while causing little impact on the application performance.",
"A technique based on public key cryptography is presented that allows an electronic mail system to hide who a participant communicates with as well as the content of the communication - in spite of an unsecured underlying telecommunication system. The technique does not require a universally trusted authority. One correspondent can remain anonymous to a second, while allowing the second to respond via an untraceable return address. The technique can also be used to form rosters of untraceable digital pseudonyms from selected applications. Applicants retain the exclusive ability to form digital signatures corresponding to their pseudonyms. Elections in which any interested party can verify that the ballots have been properly counted are possible if anonymously mailed ballots are signed with pseudonyms from a roster of registered voters. Another use allows an individual to correspond with a record-keeping organization under a unique pseudonym, which appears in a roster of acceptable clients.",
"Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k- anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.",
"Personal mobile devices are increasingly equipped with the capability to sense the physical world (through cameras, microphones, and accelerometers, for example) and the, network world (with Wi-Fi and Bluetooth interfaces). Such devices offer many new opportunities for cooperative sensing applications. For example, users' mobile phones may contribute data to community-oriented information services, from city-wide pollution monitoring to enterprise-wide detection of unauthorized Wi-Fi access points. This people-centric mobile-sensing model introduces a new security challenge in the design of mobile systems: protecting the privacy of participants while allowing their devices to reliably contribute high-quality data to these large-scale applications. We describe AnonySense, a privacy-aware architecture for realizing pervasive applications based on collaborative, opportunistic sensing by personal mobile devices. AnonySense allows applications to submit sensing tasks that will be distributed across anonymous participating mobile devices, later receiving verified, yet anonymized, sensor data reports back from the field, thus providing the first secure implementation of this participatory sensing model. We describe our trust model, and the security properties that drove the design of the AnonySense system. We evaluate our prototype implementation through experiments that indicate the feasibility of this approach, and through two applications: a Wi-Fi rogue access point detector and a lost-object finder."
]
}
|
1308.2921
|
2953385865
|
Participatory sensing is emerging as an innovative computing paradigm that targets the ubiquity of always-connected mobile phones and their sensing capabilities. In this context, a multitude of pioneering applications increasingly carry out pervasive collection and dissemination of information and environmental data, such as, traffic conditions, pollution, temperature, etc. Participants collect and report measurements from their mobile devices and entrust them to the cloud to be made available to applications and users. Naturally, due to the personal information associated to the reports (e.g., location, movements, etc.), a number of privacy concerns need to be taken into account prior to a large-scale deployment of these applications. Motivated by the need for privacy protection in Participatory Sensing, this work presents PEPSI: a Privacy-Enhanced Participatory Sensing Infrastructure. We explore realistic architectural assumptions and a minimal set of formal requirements aiming at protecting privacy of both data producers and consumers. We propose two instantiations that attain privacy guarantees with provable security at very low additional computational cost and almost no extra communication overhead.
|
Multiple Semi-Trusted Parties. Trust relations are difficult to define and set up in scenarios with multiple parties. Hence, it is advisable to minimize the number of trusted parties and the degree to which they are trusted. Available techniques to protect privacy in participatory sensing often involve many semi-trusted independent parties that are always assumed not to collude. AnonySense @cite_17 , besides Mobile Nodes, Registration Authority, and WiFi Access Points, also assumes the presence and the non-collusion of a Task Service (used to distribute tasks to users), a Report Service (to receive reports from sensors), and several Mix Network nodes (i.e., a trusted anonymizing infrastructure). The assumption of multiple non-colluding parties raises severe concerns regarding its practicality and feasibility. It appears difficult to deploy all of these parties in a real-world setting where entities provide services only in exchange for some benefit. For instance, it is not clear how to deploy the Task and the Report services as two separate entities having no incentive to collude.
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"2146697949"
],
"abstract": [
"Personal mobile devices are increasingly equipped with the capability to sense the physical world (through cameras, microphones, and accelerometers, for example) and the, network world (with Wi-Fi and Bluetooth interfaces). Such devices offer many new opportunities for cooperative sensing applications. For example, users' mobile phones may contribute data to community-oriented information services, from city-wide pollution monitoring to enterprise-wide detection of unauthorized Wi-Fi access points. This people-centric mobile-sensing model introduces a new security challenge in the design of mobile systems: protecting the privacy of participants while allowing their devices to reliably contribute high-quality data to these large-scale applications. We describe AnonySense, a privacy-aware architecture for realizing pervasive applications based on collaborative, opportunistic sensing by personal mobile devices. AnonySense allows applications to submit sensing tasks that will be distributed across anonymous participating mobile devices, later receiving verified, yet anonymized, sensor data reports back from the field, thus providing the first secure implementation of this participatory sensing model. We describe our trust model, and the security properties that drove the design of the AnonySense system. We evaluate our prototype implementation through experiments that indicate the feasibility of this approach, and through two applications: a Wi-Fi rogue access point detector and a lost-object finder."
]
}
|
1308.3177
|
1516694848
|
Normalized Google distance (NGD) is a relative semantic distance based on the World Wide Web (or any other large electronic database, for instance Wikipedia) and a search engine that returns aggregate page counts. The earlier NGD between pairs of search terms (including phrases) is not sufficient for all applications. We propose an NGD of finite multisets of search terms that is better for many applications. This gives a relative semantics shared by a multiset of search terms. We give applications and compare the results with those obtained using the pairwise NGD. The derivation of the NGD method is based on Kolmogorov complexity.
|
In @cite_0 the notion is introduced of the information required to go from any object in a finite multiset of objects to any other object in the multiset. Let @math denote a finite multiset of @math finite binary strings defined by (abusing the set notation) @math , the constituting elements (not necessarily all different) ordered length-increasing lexicographically. We use multisets and not sets, since in a set all elements are different, while here we are interested in the situation where some or all of the elements are equal. Let @math be the reference universal Turing machine, for convenience the prefix one @cite_15 . We define the information distance in @math by @math for all @math . It is shown in @cite_0 , Theorem 2, that this quantity can be characterized up to a logarithmic additive term.
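For reference, the characterization in question is understood (as a hedged reading of Theorem 2 of @cite_0 , following the standard statement in the information-distance literature; the conditional-complexity notation below is our assumption, since the displayed formulas above are elided) to be

\[
E_{\max}(X) \;=\; \max_{x \in X} K(X \mid x),
\]

which for a two-element multiset reduces to the familiar pairwise information distance \( \max\{K(x \mid y), K(y \mid x)\} \).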
|
{
"cite_N": [
"@cite_0",
"@cite_15"
],
"mid": [
"2024369093",
"1638203394"
],
"abstract": [
"If Kolmogorov complexity [25] measures information in one object and Information Distance measures information shared by two objects, how do we measure information shared by many objects? This paper provides an initial pragmatic study of this fundamental data mining question. Firstly, Em(x1,x2,...,xn) is defined to be the minimum amount of thermodynamic energy needed to convert from any xi to any xj. With this definition several theoretical problems have been solved. Second, our newly proposed theory is applied to select a comprehensive review and a specialized review from many reviews: (1) Core feature words, expanded words and dependent words are extracted respectively. (2) Comprehensive and specialized reviews are selected according to the information among them. This method of selecting a single review can be extended to select multiple reviews as well. Finally, experiments show that this comprehensive and specialized review mining method based on our new theory can do the job efficiently.",
"The book is outstanding and admirable in many respects. ... is necessary reading for all kinds of readers from undergraduate students to top authorities in the field. Journal of Symbolic Logic Written by two experts in the field, this is the only comprehensive and unified treatment of the central ideas and their applications of Kolmogorov complexity. The book presents a thorough treatment of the subject with a wide range of illustrative applications. Such applications include the randomness of finite objects or infinite sequences, Martin-Loef tests for randomness, information theory, computational learning theory, the complexity of algorithms, and the thermodynamics of computing. It will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics. The book is self-contained in that it contains the basic requirements from mathematics and computer science. Included are also numerous problem sets, comments, source references, and hints to solutions of problems. New topics in this edition include Omega numbers, KolmogorovLoveland randomness, universal learning, communication complexity, Kolmogorov's random graphs, time-limited universal distribution, Shannon information and others."
]
}
|
1308.3177
|
1516694848
|
Normalized Google distance (NGD) is a relative semantic distance based on the World Wide Web (or any other large electronic database, for instance Wikipedia) and a search engine that returns aggregate page counts. The earlier NGD between pairs of search terms (including phrases) is not sufficient for all applications. We propose an NGD of finite multisets of search terms that is better for many applications. This gives a relative semantics shared by a multiset of search terms. We give applications and compare the results with those obtained using the pairwise NGD. The derivation of the NGD method is based on Kolmogorov complexity.
|
The information distance in @cite_2 between strings @math and @math is denoted @math . In @cite_12 we introduced the notation @math so that @math . The two notations coincide for @math since @math up to an additive constant term. The quantity @math is called the information distance in @math . It comes in two flavors: the pairwise version for @math and the multiset version for @math . The normalized pairwise version was made computable using real-world compressors to approximate the incomputable Kolmogorov complexity. Called the normalized compression distance (NCD), it has turned out to be suitable for determining similarity between pairs of objects, for phylogeny, hierarchical clustering, heterogeneous data clustering, anomaly detection, and so on @cite_3 @cite_10 . Applications abound. In @cite_19 the pairwise case was resolved by using the World Wide Web as database and Google as query mechanism (or any other search engine that returns an aggregate page count). Viewing the search engine as a compressor and using the NCD formula yields many new applications.
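To make the computable approximation concrete, the following is a minimal sketch of the pairwise NCD with zlib standing in for the real-world compressor; the compressor choice, the compression level, and the function names are illustrative assumptions rather than details taken from the cited works.

import zlib

def C(s: bytes) -> int:
    # Compressed length of s: a computable stand-in for the
    # incomputable Kolmogorov complexity K(s).
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

Values near 0 indicate high similarity and values near 1 dissimilarity; for very short inputs the compressor's header overhead can distort the estimate, so in practice the method is applied to reasonably long files.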
|
{
"cite_N": [
"@cite_3",
"@cite_19",
"@cite_2",
"@cite_10",
"@cite_12"
],
"mid": [
"2144221002",
"2165897980",
"",
"2128859735",
"2148624041"
],
"abstract": [
"A new class of distances appropriate for measuring similarity relations between sequences, say one type of similarity per distance, is studied. We propose a new \"normalized information distance,\" based on the noncomputable notion of Kolmogorov complexity, and show that it is in this class and it minorizes every computable distance in the class (that is, it is universal in that it discovers all computable similarities). We demonstrate that it is a metric and call it the similarity metric . This theory forms the foundation for a new practical tool. To evidence generality and robustness, we give two distinctive applications in widely divergent areas using standard compression programs like gzip and GenCompress. First, we compare whole mitochondrial genomes and infer their evolutionary history. This results in a first completely automatic computed whole mitochondrial phylogeny tree. Secondly, we fully automatically compute the language tree of 52 different languages.",
"Words and phrases acquire meaning from the way they are used in society, from their relative semantics to other words and phrases. For computers, the equivalent of \"society\" is \"database,\" and the equivalent of \"use\" is \"a way to search the database\". We present a new theory of similarity between words and phrases based on information distance and Kolmogorov complexity. To fix thoughts, we use the World Wide Web (WWW) as the database, and Google as the search engine. The method is also applicable to other search engines and databases. This theory is then applied to construct a method to automatically extract similarity, the Google similarity distance, of words and phrases from the WWW using Google page counts. The WWW is the largest database on earth, and the context information entered by millions of independent users averages out to provide automatic semantics of useful quality. We give applications in hierarchical clustering, classification, and language translation. We give examples to distinguish between colors and numbers, cluster names of paintings by 17th century Dutch masters and names of books by English novelists, the ability to understand emergencies and primes, and we demonstrate the ability to do a simple automatic English-Spanish translation. Finally, we use the WordNet database as an objective baseline against which to judge the performance of our method. We conduct a massive randomized trial in binary classification using support vector machines to learn categories based on our Google distance, resulting in an a mean agreement of 87 percent with the expert crafted WordNet categories",
"",
"We present a new method for clustering based on compression. The method does not use subject-specific features or background knowledge, and works as follows: First, we determine a parameter-free, universal, similarity distance, the normalized compression distance or NCD, computed from the lengths of compressed data files (singly and in pairwise concatenation). Second, we apply a hierarchical clustering method. The NCD is not restricted to a specific application area, and works across application area boundaries. A theoretical precursor, the normalized information distance, co-developed by one of the authors, is provably optimal. However, the optimality comes at the price of using the noncomputable notion of Kolmogorov complexity. We propose axioms to capture the real-world setting, and show that the NCD approximates optimality. To extract a hierarchy of clusters from the distance matrix, we determine a dendrogram (ternary tree) by a new quartet method and a fast heuristic to implement it. The method is implemented and available as public software, and is robust under choice of different compressors. To substantiate our claims of universality and robustness, we report evidence of successful application in areas as diverse as genomics, virology, languages, literature, music, handwritten digits, astronomy, and combinations of objects from completely different domains, using statistical, dictionary, and block sorting compressors. In genomics, we presented new evidence for major questions in Mammalian evolution, based on whole-mitochondrial genomic analysis: the Eutherian orders and the Marsupionta hypothesis against the Theria hypothesis.",
"Information distance is a parameter-free similarity measure based on compression, used in pattern recognition, data mining, phylogeny, clustering and classification. The notion of information distance is extended from pairs to multiples (finite lists). We study maximal overlap, metricity, universality, minimal overlap, additivity and normalized information distance in multiples. We use the theoretical notion of Kolmogorov complexity which for practical purposes is approximated by the length of the compressed version of the file involved, using a real-world compression program."
]
}
|
1308.3177
|
1516694848
|
Normalized Google distance (NGD) is a relative semantic distance based on the World Wide Web (or any other large electronic database, for instance Wikipedia) and a search engine that returns aggregate page counts. The earlier NGD between pairs of search terms (including phrases) is not sufficient for all applications. We propose an NGD of finite multisets of search terms that is better for many applications. This gives a relative semantics shared by a multiset of search terms. We give applications and compare the results with those obtained using the pairwise NGD. The derivation of the NGD method is based on Kolmogorov complexity.
|
The theory of information distance for multisets, insofar as it was not treated in @cite_0 , was given in @cite_12 . In @cite_16 the @math distance of nonempty finite multisets was normalized and approximated by real-world compressors. The result is the normalized compression distance (NCD) for multisets. The @math , where @math is a multiset, is shown to be a metric with values between 0 and 1. The developed theory was applied to classification questions concerning the fate of retinal progenitor cells, synthetic versions thereof, organelle transport, and handwritten character recognition (a problem in OCR). In all cases the results were significantly better than those obtained using the pairwise NCD, except for the OCR problem, where a combination of the two approaches gave 99.6% on MNIST data. The current state-of-the-art classifier for MNIST data achieves 99.77%.
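As an illustration of the multiset variant, here is a hedged sketch using the normalization we understand @cite_16 to adopt, namely NCD(X) = (C(X) - min_x C(x)) / max_x C(X minus x), with C(X) the compressed length of the concatenation of the members of X; zlib, the function names, and the neglect of concatenation-order effects are assumptions of this sketch.

import zlib

def C(X) -> int:
    # Compressed length of the concatenation of the multiset X,
    # given as a list of byte strings (order effects ignored here).
    return len(zlib.compress(b"".join(X), 9))

def ncd_multiset(X) -> float:
    # Requires len(X) >= 2; values fall between 0 and 1 (up to
    # compressor imperfections), as for the pairwise NCD.
    c_min = min(C([x]) for x in X)
    c_drop = max(C(X[:i] + X[i + 1:]) for i in range(len(X)))
    return (C(X) - c_min) / c_drop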
|
{
"cite_N": [
"@cite_0",
"@cite_16",
"@cite_12"
],
"mid": [
"2024369093",
"2127955369",
"2148624041"
],
"abstract": [
"If Kolmogorov complexity [25] measures information in one object and Information Distance measures information shared by two objects, how do we measure information shared by many objects? This paper provides an initial pragmatic study of this fundamental data mining question. Firstly, Em(x1,x2,...,xn) is defined to be the minimum amount of thermodynamic energy needed to convert from any xi to any xj. With this definition several theoretical problems have been solved. Second, our newly proposed theory is applied to select a comprehensive review and a specialized review from many reviews: (1) Core feature words, expanded words and dependent words are extracted respectively. (2) Comprehensive and specialized reviews are selected according to the information among them. This method of selecting a single review can be extended to select multiple reviews as well. Finally, experiments show that this comprehensive and specialized review mining method based on our new theory can do the job efficiently.",
"Pairwise normalized compression distance (NCD) is a parameter-free, feature-free, alignment-free, similarity metric based on compression. We propose an NCD of multisets that is also metric. Previously, attempts to obtain such an NCD failed. For classification purposes it is superior to the pairwise NCD in accuracy and implementation complexity. We cover the entire trajectory from theoretical underpinning to feasible practice. It is applied to biological (stem cell, organelle transport) and OCR classification questions that were earlier treated with the pairwise NCD. With the new method we achieved significantly better results. The theoretic foundation is Kolmogorov complexity.",
"Information distance is a parameter-free similarity measure based on compression, used in pattern recognition, data mining, phylogeny, clustering and classification. The notion of information distance is extended from pairs to multiples (finite lists). We study maximal overlap, metricity, universality, minimal overlap, additivity and normalized information distance in multiples. We use the theoretical notion of Kolmogorov complexity which for practical purposes is approximated by the length of the compressed version of the file involved, using a real-world compression program."
]
}
|
1308.3161
|
2949354293
|
Max-min fairness (MMF) is a widely known approach to a fair allocation of bandwidth to each of the users in a network. This allocation can be computed by uniformly raising the bandwidths of all users without violating capacity constraints. We consider an extension of these allocations by raising the bandwidth with arbitrary and not necessarily uniform time-depending velocities (allocation rates). These allocations are used in a game-theoretic context for routing choices, which we formalize in progressive filling games (PFGs). We present a variety of results for equilibria in PFGs. We show that these games possess pure Nash and strong equilibria. While computation in general is NP-hard, there are polynomial-time algorithms for prominent classes of Max-Min-Fair Games (MMFG), including the case when all users have the same source-destination pair. We characterize prices of anarchy and stability for pure Nash and strong equilibria in PFGs and MMFGs when players have different or the same source-destination pairs. In addition, we show that when a designer can adjust allocation rates, it is possible to design games with optimal strong equilibria. Some initial results on polynomial-time algorithms in this direction are also derived.
|
Combined routing and congestion control has been studied in several works (cf. @cite_23 @cite_27 @cite_2 @cite_25 ). In all these works, the existence of an equilibrium is proved by showing that it corresponds to an optimal solution of an associated convex utility maximization problem. This, however, implies that every user may split its flow among an exponential number of routes, which can be problematic for some applications. For instance, the standard TCP/IP protocol suite uses single-path routing, because splitting the demand comes with several practical complications, e.g., packets arriving out of order, packet jitter due to different path delays, etc. This issue has been explicitly addressed by @cite_11 .
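For orientation, the associated convex program has the generic multipath network-utility-maximization form (the notation here is generic, not taken from the cited papers): each user s has a path set P_s, a utility U_s, and path rates x_p, and an equilibrium corresponds to a solution of

\[
\max_{x \ge 0} \; \sum_{s} U_s\Big(\sum_{p \in P_s} x_p\Big)
\qquad \text{subject to} \qquad \sum_{p \,:\, e \in p} x_p \le c_e \ \text{ for every edge } e .
\]

Since P_s may contain exponentially many paths, an optimal solution may indeed spread a user's flow over exponentially many routes.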
|
{
"cite_N": [
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_25",
"@cite_11"
],
"mid": [
"2075434761",
"2118442492",
"2109055312",
"",
"2113692632"
],
"abstract": [
"We propose two flow control algorithms for networks with multiple paths between each source-destination pair. Both are distributed algorithms over the network to maximize aggregate source utility. Algorithm 1 is a first order Lagrangian method applied to a modified objective function that has the same optimal solution as the original objective function but has a better convergence property. Algorithm 2 is based on the idea that, at optimality, only paths with the minimum price carry positive flows, and naturally decomposes the overall decision into flow control (determines total transmission rate based on minimum path price) and routing (determines how to split the flow among available paths). Both algorithms can be implemented as simply a source-based mechanism in which no link algorithm nor feedback is needed. We present numerical examples to illustrate their behavior.",
"We consider the problem of congestion-aware multi-path routing in the Internet. Currently, Internet routing protocols select only a single path between a source and a destination. However, due to many policy routing decisions, single-path routing may limit the achievable throughput. In this paper, we envision a scenario where multi-path routing is enabled in the Internet to take advantage of path diversity. Using minimal congestion feedback signals from the routers, we present a class of algorithms that can be implemented at the sources to stably and optimally split the flow between each source-destination pair. We then show that the connection-level throughput region of such multi-path routing congestion control algorithms can be larger than that of a single-path congestion control scheme.",
"In this paper we investigate the potential benefits of coordinated congestion control for multipath data transfers, and contrast with uncoordinated control. For static random path selections, we show the worst-case throughput performance of uncoordinated control behaves as if each user had but a single path (scaling like log(log(N)) log(N) where N is the system size, measured in number of resources). Whereas coordinated control gives a throughput allocation bounded away from zero, improving on both uncoordinated control and on the greedy-least loaded path selection of e.g. Mitzenmacher. We then allow users to change their set of routes and introduce the notion of a Nash equilibrium. We show that with RTT bias (as in TCP Reno), uncoordinated control can lead to inefficient equilibria. With no RTT bias, both uncoordinated or coordinated Nash equilibria correspond to desirable welfare maximising states. Moreover, simple path reselection polices that shift to paths with higher net benefit can find these states.",
"",
"The authors consider a communication network shared by several selfish users. Each user seeks to optimize its own performance by controlling the routing of its given flow demand, giving rise to a noncooperative game. They investigate the Nash equilibrium of such systems. For a two-node multiple links system, uniqueness of the Nash equilibrium is proven under reasonable convexity conditions. It is shown that this Nash equilibrium point possesses interesting monotonicity properties. For general networks, these convexity conditions are not sufficient for guaranteeing uniqueness, and a counterexample is presented. Nonetheless, uniqueness of the Nash equilibrium for general topologies is established under various assumptions. >"
]
}
|
1308.2979
|
1485719498
|
Active replication is commonly built on top of the atomic broadcast primitive. Passive replication, which has been recently used in the popular ZooKeeper coordination system, can be naturally built on top of the primary-order atomic broadcast primitive. Passive replication differs from active replication in that it requires processes to cross a barrier before they become primaries and start broadcasting messages. In this paper, we propose a barrier function tau that explains and encapsulates the differences between existing primary-order atomic broadcast algorithms, namely semi-passive replication and Zookeeper atomic broadcast (Zab), as well as the differences between Paxos and Zab. We also show that implementing primary-order atomic broadcast on top of a generic consensus primitive and tau inherently results in higher time complexity than atomic broadcast, as witnessed by existing algorithms. We overcome this problem by presenting an alternative, primary-order atomic broadcast implementation that builds on top of a generic consensus primitive and uses consensus itself to form a barrier. This algorithm is modular and matches the time complexity of existing tau-based algorithms.
|
Pronto is an algorithm for database replication that shares several design choices with our @math -free algorithm and has the same time complexity in stable periods @cite_9 . Both algorithms elect a primary using an unreliable failure detector and have a similar notion of epochs, which are associated with a single primary. Epoch changes are determined using an agreement protocol, and values from old epochs that are agreed upon after a new epoch has been agreed upon are ignored. Pronto, however, is an active replication protocol: all replicas execute transactions, and non-determinism is handled by agreeing on a per-transaction log of non-deterministic choices that are application-specific. Our work focuses on passive replication algorithms, their differences from active replication protocols, and the notion of barriers in their implementation.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2102203034"
],
"abstract": [
"Enterprise applications typically store their state in databases. If a database fails, the application is unavailable while the database recovers. Database recovery is time consuming because it involves replaying the persistent transaction log. To isolate end users from database failures, we introduce Pronto, a protocol to orchestrate the transaction processing by multiple, standard databases so that they collectively implement the illusion of a single, highly available database. The key challenge in implementing this illusion is to enable fast failover from one database to another so that database failures do not interrupt the transaction processing. We solve this problem with a novel replication protocol that handles non-determinism without relying on perfect failure detection."
]
}
|
1308.2473
|
2950999367
|
This paper presents a distributed O(1)-approximation algorithm, with expected- @math running time, in the @math model for the metric facility location problem on a size- @math clique network. Though metric facility location has been considered by a number of researchers in low-diameter settings, this is the first sub-logarithmic-round algorithm for the problem that yields an O(1)-approximation in the setting of non-uniform facility opening costs. In order to obtain this result, our paper makes three main technical contributions. First, we show a new lower bound for metric facility location, extending the lower bound of Bădoiu et al. (ICALP 2005) that applies only to the special case of uniform facility opening costs. Next, we demonstrate a reduction of the distributed metric facility location problem to the problem of computing an O(1)-ruling set of an appropriate spanning subgraph. Finally, we present a sub-logarithmic-round (in expectation) algorithm for computing a 2-ruling set in a spanning subgraph of a clique. Our algorithm accomplishes this by using a combination of randomized and deterministic sparsification.
|
Several more recent papers have continued the development of "super-fast" algorithms in low-diameter settings. In STOC 2011, Lenzen and Wattenhofer @cite_16 derived tight bounds on parallel load balancing, and their result has applications to how information can be quickly disseminated in a clique (in the @math model). In PODC 2011, Patt-Shamir and Teplitsky @cite_20 presented an @math randomized algorithm for the distributed sorting problem. Subsequently, Lenzen @cite_2 showed that randomization is not necessary for solving problems such as distributed sorting efficiently: he presented deterministic, constant-round algorithms for a routing problem and for the distributed sorting problem considered in @cite_20 . Constant-round algorithms for sophisticated problems, of the kind described by Lenzen @cite_2 , highlight the difficulty of proving non-trivial lower bounds in the @math model for clique networks. For example, it has been proved that computing an MST in general requires @math rounds for diameter- @math graphs @cite_3 , but no non-trivial lower bounds are known for diameter- @math or clique (diameter- @math ) networks.
|
{
"cite_N": [
"@cite_16",
"@cite_3",
"@cite_20",
"@cite_2"
],
"mid": [
"1980744864",
"2032607682",
"2056695987",
"2949818233"
],
"abstract": [
"We explore the fundamental limits of distributed balls-into-bins algorithms, i.e., algorithms where balls act in parallel, as separate agents. This problem was introduced by , who showed that non-adaptive and symmetric algorithms cannot reliably perform better than a maximum bin load of Theta(log log n log log log n) within the same number of rounds. We present an adaptive symmetric algorithm that achieves a bin load of two in log* n+O(1) communication rounds using O(n) messages in total. Moreover, larger bin loads can be traded in for smaller time complexities. We prove a matching lower bound of (1-o(1))log* n on the time complexity of symmetric algorithms that guarantee small bin loads at an asymptotically optimal message complexity of O(n). The essential preconditions of the proof are (i) a limit of O(n) on the total number of messages sent by the algorithm and (ii) anonymity of bins, i.e., the port numberings of balls are not globally consistent. In order to show that our technique yields indeed tight bounds, we provide for each assumption an algorithm violating it, in turn achieving a constant maximum bin load in constant time. As an application, we consider the following problem. Given a fully connected graph of n nodes, where each node needs to send and receive up to n messages, and in each round each node may send one message over each link, deliver all messages as quickly as possible to their destinations. We give a simple and robust algorithm of time complexity O(log* n) for this task and provide a generalization to the case where all nodes initially hold arbitrary sets of messages. Completing the picture, we give a less practical, but asymptotically optimal algorithm terminating within O(1) rounds. All these bounds hold with high probability.",
"This paper considers the problem of distributively constructing a minimum-weight spanning tree (MST) for graphs of constant diameter in the bounded-messages model, where each message can contain at most B bits for some parameter B. It is shown that the number of communication rounds necessary to compute an MST for graphs of diameter 4 or 3 can be as high as ( ( [3]n B ) ) and ( ( [4]n B ) ), respectively. The asymptotic lower bounds hold for randomized algorithms as well. On the other hand, we observe that O(log n) communication rounds always suffice to compute an MST deterministically for graphs with diameter 2, when B = O(log n). These results complement a previously known lower bound of ( ( [2]n B) ) for graphs of diameter Ω(log n).",
"We consider the model of fully connected networks, where in each round each node can send an O(log n)-bit message to each other node (this is the CONGEST model with diameter 1). It is known that in this model, min-weight spanning trees can be found in O(log log n) rounds. In this paper we show that distributed sorting, where each node has at most n items, can be done in time O(log log n) as well. It is also shown that selection can be done in O(1) time. (Using a concurrent result by Lenzen and Wattenhofer, the complexity of sorting is further reduced to constant.) Our algorithms are randomized, and the stated complexity bounds hold with high probability.",
"Consider a clique of n nodes, where in each synchronous round each pair of nodes can exchange O(log n) bits. We provide deterministic constant-time solutions for two problems in this model. The first is a routing problem where each node is source and destination of n messages of size O(log n). The second is a sorting problem where each node i is given n keys of size O(log n) and needs to receive the ith batch of n keys according to the global order of the keys. The latter result also implies deterministic constant-round solutions for related problems such as selection or determining modes."
]
}
|
1308.1195
|
2259305459
|
We consider multichannel deconvolution in a periodic setting with long-memory errors under three different scenarios for the convolution operators, i.e., super-smooth, regular-smooth and box-car convolutions. We investigate global performances of linear and hard-thresholded non-linear wavelet estimators for functions over a wide range of Besov spaces and for a variety of loss functions defining the risk. In particular, we obtain upper bounds on convergence rates using the L_p-risk (1 ≤ p < ∞). Contrary to the case where the errors follow independent Brownian motions, it is demonstrated that multichannel deconvolution with errors that follow independent fractional Brownian motions with different Hurst parameters results in a much more involved situation. An extensive finite-sample numerical study is performed to supplement the theoretical findings.
|
The case where @math and @math in -- refers to the so-called standard deconvolution model, which has attracted the attention of a number of researchers. (Note that the standard deconvolution model is typically ill-posed in the sense of Hadamard: the inversion does not depend continuously on the observed data, i.e., small noise in the convolved signal leads to a significant error in the estimation procedure.) After rather rapid progress on this problem in the late eighties and early nineties, researchers turned to adaptive wavelet solutions of the problem that are optimal (in the minimax or the maxiset sense), or near-optimal within a logarithmic factor, in a wide range of Besov balls and for a variety of loss functions defining the risk, and under mild conditions on the blurring function (see, e.g., @cite_7 @cite_6 @cite_26 @cite_2 @cite_11 @cite_21 @cite_29 @cite_8 ).
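The ill-posedness can be seen from a one-line, generic Fourier-domain heuristic (not specific to any of the papers cited): writing the model as y = f * g + noise, convolution becomes multiplication under the Fourier transform, so

\[
\widehat{y}(\omega) = \widehat{f}(\omega)\,\widehat{g}(\omega) + \widehat{\varepsilon}(\omega)
\qquad\Longrightarrow\qquad
\frac{\widehat{y}(\omega)}{\widehat{g}(\omega)} = \widehat{f}(\omega) + \frac{\widehat{\varepsilon}(\omega)}{\widehat{g}(\omega)},
\]

and naive inversion blows up the noise wherever \( \widehat{g}(\omega) \to 0 \), which is exactly what happens at high frequencies for super-smooth and boxcar kernels; the thresholded wavelet estimators cited above are designed to control this amplification.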
|
{
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_8",
"@cite_29",
"@cite_21",
"@cite_6",
"@cite_2",
"@cite_11"
],
"mid": [
"2034584585",
"2092543127",
"2127184781",
"",
"",
"2071723133",
"2148688551",
"2033424286"
],
"abstract": [
"Thresholding algorithms in an orthonormal basis are studied to estimate noisy discrete signals degraded by a linear operator whose inverse is not bounded. For signals in a set Θ), sufficient conditions are established on the basis to obtain a maximum risk with minimax rates of convergence. Deconvolutions with kernels having a Fourier transform which vanishes at high frequencies are examples of unstable inverse problems, where a thresholding in a wavelet basis is a suboptimal estimator. A new mirror wavelet basis is constructed to obtain a deconvolution risk which is proved to be asymptotically equivalent to the minimax risk over bounded variation signals. This thresholding estimator is used to restore blurred satellite images.",
"We describe the wavelet–vaguelette decomposition (WVD) of a linear inverse problem. It is a substitute for the singular value decomposition (SVD) of an inverse problem, and it exists for a class of special inverse problems of homogeneous type—such as numerical differentiation, inversion of Abel-type transforms, certain convolution transforms, and the Radon transform. We propose to solve ill-posed linear inverse problems by nonlinearly \"shrinking\" the WVD coefficients of the noisy, indirect data. Our approach offers significant advantages over traditional SVD inversion in recovering spatially inhomogeneous objects. We suppose that observations are contaminated by white noise and that the object is an unknown element of a Besov space. We prove that nonlinear WVD shrinkage can be tuned to attain the minimax rate of convergence, for L2 loss, over the entire scale of Besov spaces. The important case of Besov spaces Bσp,q, p < 2, which model spatial inhomogeneity, is included. In comparison, linear procedures— SVD included—cannot attain optimal rates of convergence over such classes in the case p < 2. For example, our methods achieve faster rates of convergence for objects known to lie in the bump algebra or in bounded variation than any linear procedure.",
"We consider the non-parametric estimation of a function that is observed in white noise after convolution with a boxcar, the indicator of an interval ( a; a). In a recent paper Johnstone, Kerkyacharian, Picard and Raimondo (2004) have developed a wavelet deconvolution method (called WaveD) that can be used for \" boxcar kernels. For example, WaveD can be tuned to achieve near optimal rates over Besov spaces when a is a Badly Approximable (BA) irrational number. While the set of all BA's contains quadratic irrationals, e.g., a = p 5, it has Lebesgue measure zero. In this paper we derive two tuning scenarios of WaveD that are valid for all\" boxcar convolutions (i.e., when a 2 A where A is a full Lebesgue measure set). We propose (i) a tuning inspired from Minimax theory over Besov spaces; (ii) a tuning derived from Maxiset theory providing similar rates as for WaveD in the BA widths setting. Asymptotic theory nds that (i) in the worst case scenario, departures from the BA assumption aect WaveD convergence rates, at most, by log factors; (ii) the Maxiset tuning, which yields smaller thresholds, is superior to the Minimax tuning over a whole range of Besov sub-scales. Our asymptotic results are illustrated in an extensive simulation of boxcar convolution observed in white noise.",
"",
"",
"SUMMARY A wide variety of scientific settings involve indirect noisy measurements where one faces a linear inverse problem in the presence of noise. Primary interest is in some function f(t) but data are accessible only about some linear transform corrupted by noise. The usual linear methods for such inverse problems do not perform satisfactorily when f(t) is spatially inhomogeneous. One existing nonlinear alternative is the wavelet-vaguelette decomposition method, based on the expansion of the unknown f(t) in wavelet series. In the vaguelette-wavelet decomposition method proposed here, the observed data are expanded directly in wavelet series. The performances of various methods are compared through exact risk calculations, in the context of the estimation of the derivative of a function observed subject to noise. A result is proved demonstrating that, with a suitable universal threshold somewhat larger than that used for standard denoising problems, both the wavelet-based approaches have an ideal spatial adaptivity property.",
"Deconvolution problems are naturally represented in the Fourier domain, whereas thresholding in wavelet bases is known to have broad adaptivity properties. We study a method which combines both fast Fourier and fast wavelet transforms and can recover a blurred function observed in white noise with \"O\"l\"n\" log (\"n\")-super-2r steps. In the periodic setting, the method applies to most deconvolution problems, including certain 'boxcar' kernels, which are important as a model of motion blur, but having poor Fourier characteristics. Asymptotic theory informs the choice of tuning parameters and yields adaptivity properties for the method over a wide class of measures of error and classes of function. The method is tested on simulated light detection and ranging data suggested by underwater remote sensing. Both visual and numerical results show an improvement over competing approaches. Finally, the theory behind our estimation paradigm gives a complete characterization of the 'maxiset' of the method: the set of functions where the method attains a near optimal rate of convergence for a variety of \"L\"-super-\"p\" loss functions. Copyright 2004 Royal Statistical Society.",
"Deconvolution of a noisy signal in a periodic band-limited wavelet basis exhibits visual artifacts in the neighbourhood of discontinuities. This phenomenon is similar to that appearing in denoising with compactly-supported wavelet transforms and can be reduced by \"cycle spinning\" as in Coifman and Donoho [3]. In this paper we present an algorithm which \"cycle-spins\" a periodic band-limited wavelet estimator over all circulant shifts in O(n(log(n))2) steps. Our approach is based on a mathematical idea and takes full advantage of the Fast Fourier Transform. A particular feature of our algorithm is to bounce from the Fourier domain (where deconvolution is performed) to the wavelet domain (where denoising is performed). For both smooth and boxcar convolutions observed in white noise, we illustrate the visual and numerical performances of our algorithm in an extensive simulation study of the estimator recently proposed by Johnstone, Kerkyacharian, Picard, and Raimondo [8]. All figures presented here are reproducible using the software package."
]
}
|
1308.1195
|
2259305459
|
We consider multichannel deconvolution in a periodic setting with long-memory errors under three different scenarios for the convolution operators, i.e., super-smooth, regular-smooth and box-car convolutions. We investigate global performances of linear and hard-thresholded non-linear wavelet estimators for functions over a wide range of Besov spaces and for a variety of loss functions defining the risk. In particular, we obtain upper bounds on convergence rates using the L_p-risk (1 ≤ p < ∞). Contrary to the case where the errors follow independent Brownian motions, it is demonstrated that multichannel deconvolution with errors that follow independent fractional Brownian motions with different Hurst parameters results in a much more involved situation. An extensive finite-sample numerical study is performed to supplement the theoretical findings.
|
The case @math and @math (i.e., standard deconvolution with LRD errors) has been investigated in @cite_17 @cite_4 @cite_14 and @cite_22 .
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_17"
],
"mid": [
"2182456552",
"2088633674",
"2064658852",
"2047838040"
],
"abstract": [
"We investigate global performances of non-linear wavelet estimation in regression models with correlated errors. Convergence properties are studied over a wide range of Besov classesB s ;r and for a variety ofL p error measures. We consider error distributions with Long-Range-Dependence parameter ; 0 2 + . Using a vaguelette decomposition of fractional Gaussian noise we",
"Abstract In this paper, we model linear inverse problems with long-range dependence by a fractional Gaussian noise model and study function estimation based on observations from the model. By using two wavelet-vaguelette decompositions, one for the inverse problem which simultaneously quasi-diagonalizes both the operator and the prior information, and one for long-range dependence which decorrelates fractional Gaussian noise, we establish asymptotics for minimax risks, and show that the wavelet shrinkage estimate can be tuned to achieve the minimax convergence rate and significantly outperform linear estimates.",
"Abstract In this paper, a hard thresholding wavelet estimator is constructed for a deconvolution model in a periodic setting that has long-range dependent noise. The estimation paradigm is based on a maxiset method that attains a near optimal rate of convergence for a variety of L p loss functions and a wide variety of Besov spaces in the presence of strong dependence. The effect of long-range dependence is detrimental to the rate of convergence. The method is implemented using a modification of the WaveD -package in R and an extensive numerical study is conducted. The numerical study supplements the theoretical results and compares the LRD estimator with a naive application of the standard WaveD approach.",
"In this article we study function estimation via wavelet shrinkage for data with long-range dependence. We propose a fractional Gaussian noise model to approximate nonparametric regression with long-range dependence and establish asymptotics for minimax risks. Because of long-range dependence, the minimax risk and the minimax linear risk converge to 0 at rates that differ from those for data with independence or short-range dependence. Wavelet estimates with best selection of resolution leveldependent threshold achieve minimax rates over a wide range of spaces. Cross-validation for dependent data is proposed to select the optimal threshold. The wavelet estimates significantly outperform linear estimates. The key to proving the asymptotic results is a wavelet]vaguelette decomposition which decorrelates fractional Gaussian noise. Such wavelet]vaguelette decomposition is also very useful in fractal signal processing."
]
}
|
1308.1195
|
2259305459
|
We consider multichannel deconvolution in a periodic setting with long-memory errors under three different scenarios for the convolution operators, i.e., super-smooth, regular-smooth and box-car convolutions. We investigate global performances of linear and hard-thresholded non-linear wavelet estimators for functions over a wide range of Besov spaces and for a variety of loss functions defining the risk. In particular, we obtain upper bounds on convergence rates using the L_p-risk (1 ≤ p < ∞). Contrary to the case where the errors follow independent Brownian motions, it is demonstrated that multichannel deconvolution with errors that follow independent fractional Brownian motions with different Hurst parameters results in a much more involved situation. An extensive finite-sample numerical study is performed to supplement the theoretical findings.
|
The case where @math for each @math (i.e., the case where, in the multichannel deconvolution model, the errors follow independent standard Brownian motions) was first considered in @cite_16 (extending the results obtained in @cite_2 for the case @math ).
|
{
"cite_N": [
"@cite_16",
"@cite_2"
],
"mid": [
"1998219449",
"2148688551"
],
"abstract": [
"The paper proposes a method of deconvolution in a periodic setting which combines two important ideas, the fast wavelet and Fourier transform-based estimation procedure of Johnstone \"et al\". [\"J. Roy. Statist. Soc. Ser. B\" 66 (2004) 547] and the multichannel system technique proposed by Casey and Walnut [ \"SIAM Rev\" . 36 (1994) 537]. An unknown function is estimated by a wavelet series where the empirical wavelet coefficients are filtered in an adapting non-linear fashion. It is shown theoretically that the estimator achieves optimal convergence rate in a wide range of Besov spaces. The procedure allows to reduce the ill-posedness of the problem especially in the case of non-smooth blurring functions such as boxcar functions: it is proved that additions of extra channels improve convergence rate of the estimator. Theoretical study is supplemented by an extensive set of small-sample simulation experiments demonstrating high-quality performance of the proposed method. Copyright 2006 Board of the Foundation of the Scandinavian Journal of Statistics..",
"Deconvolution problems are naturally represented in the Fourier domain, whereas thresholding in wavelet bases is known to have broad adaptivity properties. We study a method which combines both fast Fourier and fast wavelet transforms and can recover a blurred function observed in white noise with \"O\"l\"n\" log (\"n\")-super-2r steps. In the periodic setting, the method applies to most deconvolution problems, including certain 'boxcar' kernels, which are important as a model of motion blur, but having poor Fourier characteristics. Asymptotic theory informs the choice of tuning parameters and yields adaptivity properties for the method over a wide class of measures of error and classes of function. The method is tested on simulated light detection and ranging data suggested by underwater remote sensing. Both visual and numerical results show an improvement over competing approaches. Finally, the theory behind our estimation paradigm gives a complete characterization of the 'maxiset' of the method: the set of functions where the method attains a near optimal rate of convergence for a variety of \"L\"-super-\"p\" loss functions. Copyright 2004 Royal Statistical Society."
]
}
|
1308.1195
|
2259305459
|
We consider multichannel deconvolution in a periodic setting with long-memory errors under three different scenarios for the convolution operators, i.e., super-smooth, regular-smooth and box-car convolutions. We investigate global performances of linear and hard-thresholded non-linear wavelet estimators for functions over a wide range of Besov spaces and for a variety of loss functions defining the risk. In particular, we obtain upper bounds on convergence rates using the L_p-risk (1 ≤ p < ∞). Contrary to the case where the errors follow independent Brownian motions, it is demonstrated that multichannel deconvolution with errors that follow independent fractional Brownian motions with different Hurst parameters results in a much more involved situation. An extensive finite-sample numerical study is performed to supplement the theoretical findings.
|
The case of multichannel deconvolution with errors following LRD sequences was investigated in @cite_0 using the minimax approach, extending results obtained in @cite_25 @cite_15 and @cite_19 .
|
{
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_15",
"@cite_25"
],
"mid": [
"2963393250",
"2011360117",
"2090107570",
"1966128225"
],
"abstract": [
"We consider the problem of estimating the unknown response function in the multichannel deconvolution model with long-range dependent Gaussian or sub-Gaussian errors. We do not limit our consideration to a specific type of long-range dependence rather we assume that the errors should satisfy a general assumption in terms of the smallest and largest eigenvalues of their covariance matrices. We derive minimax lower bounds for the quadratic risk in the proposed multichannel deconvolution model when the response function is assumed to belong to a Besov ball and the blurring function is assumed to possess some smoothness properties, including both regular-smooth and super-smooth convolutions. Furthermore, we propose an adaptive wavelet estimator of the response function that is asymptotically optimal (in the minimax sense), or nearoptimal (within a logarithmic factor), in a wide range of Besov balls, for both Gaussian and sub-Gaussian errors. It is shown that the optimal convergence rates depend on the balance between the smoothness parameter of the response function, the kernel parameters of the blurring function, the long memory parameters of the errors, and how the total number of observations is distributed among the total number of channels. Some examples of inverse problems in mathematical physics where one needs to recover initial or boundary conditions on the basis of observations from a noisy solution of a partial differential equation are used to illustrate the application of the theory we developed. The optimal convergence rates and the adaptive estimators we consider extend the ones studied by Pensky and Sapatinas (2009, 2010) for independent and identically distributed Gaussian errors to the case of long-range dependent Gaussian or sub-Gaussian errors.",
"We consider the problem of estimating the unknown response function in the multichannel deconvolution model with a boxcar-like kernel which is of particular interest in signal processing. It is known that, when the number of channels is finite, the precision of reconstruction of the response function increases as the number of channels @math grow (even when the total number of observations @math for all channels @math remains constant) and this requires that the parameter of the channels form a Badly Approximable @math -tuple. Recent advances in data collection and recording techniques made it of urgent interest to study the case when the number of channels @math grow with the total number of observations @math . However, in real-life situations, the number of channels @math usually refers to the number of physical devices and, consequently, may grow to infinity only at a slow rate as @math . When @math grows slowly as @math increases, we develop a procedure for the construction of a Badly Approximable @math -tuple on a specified interval, of a non-asymptotic length, together with a lower bound associated with this @math -tuple, which explicitly shows its dependence on @math as @math is growing. This result is further used for the evaluation of the @math -risk of the suggested adaptive wavelet thresholding estimator of the unknown response function and, furthermore, for the choice of the optimal number of channels @math which minimizes the @math -risk.",
"Using the asymptotical minimax framework, we examine convergence rates equivalency between a continuous functional deconvolution model and its real-life discrete counterpart, over a wide range of Besov balls and for theL 2 -risk. For this purpose, all possible models are divided into three groups. For the models in the",
"We extend deconvolution in a periodic setting to deal with functional data. The resulting functional deconvolution model can be viewed as a generalization of a multitude of inverse problems in mathematical physics where one needs to recover initial or boundary conditions on the basis of observations from a noisy solution of a partial differential equation. In the case when it is observed at a finite number of distinct points, the proposed functional deconvolution model can also be viewed as a multichannel deconvolution model. We derive minimax lower bounds for the L 2 -risk in the proposed functional deconvolution model when f(·) is assumed to belong to a Besov ball and the blurring function is assumed to possess some smoothness properties, including both regular-smooth and super-smooth convolutions. Furthermore, we propose an adaptive wavelet estimator of f(·) that is asymptotically optimal (in the minimax sense), or near-optimal within a logarithmic factor, in a wide range of Besov balls. In addition, we consider a discretization of the proposed functional deconvolution model and investigate when the availability of continuous data gives advantages over observations at the asymptotically large number of points. As an illustration, we discuss particular examples for both continuous and discrete settings."
]
}
|
1308.1195
|
2259305459
|
We consider multichannel deconvolution in a periodic setting with long-memory errors under three different scenarios for the convolution operators, i.e., super-smooth, regular-smooth and box-car convolutions. We investigate global performances of linear and hard-thresholded non-linear wavelet estimators for functions over a wide range of Besov spaces and for a variety of loss functions defining the risk. In particular, we obtain upper bounds on convergence rates using the L_p-risk (1 ≤ p < ∞). Contrary to the case where the errors follow independent Brownian motions, it is demonstrated that multichannel deconvolution with errors that follow independent fractional Brownian motions with different Hurst parameters results in a much more involved situation. An extensive finite-sample numerical study is performed to supplement the theoretical findings.
|
The case of nonparametric density estimation for the errors-in-variables problem with LRD has been studied by @cite_31 . In particular, it was shown that LRD has no impact on the optimal convergence properties in the super-smooth scenario. We show similar results for the multichannel deconvolution model presented here.
|
{
"cite_N": [
"@cite_31"
],
"mid": [
"2040313019"
],
"abstract": [
"We consider the nonparametric estimation of the density func- tion of weakly and strongly dependent processes with noisy observations. We show that in the ordinary smooth case the optimal bandwidth choice can be influenced by long range dependence, as opposite to the standard case, when no noise is present. In particular, if the dependence is moder- ate the bandwidth, the rates of mean-square convergence and, additionally, central limit theorem are the same as in the i.i.d. case. If the dependence is strong enough, then the bandwidth choice is influenced by the strength of dependence, which is dierent when compared to the non-noisy case. Also,"
]
}
|
1308.1195
|
2259305459
|
We consider multichannel deconvolution in a periodic setting with long-memory errors under three different scenarios for the convolution operators, i.e., super-smooth, regular-smooth and box-car convolutions. We investigate global performances of linear and hard-thresholded non-linear wavelet estimators for functions over a wide range of Besov spaces and for a variety of loss functions defining the risk. In particular, we obtain upper bounds on convergence rates using the L_p-risk (1 ≤ p < ∞). Contrary to the case where the errors follow independent Brownian motions, it is demonstrated that multichannel deconvolution with errors that follow independent fractional Brownian motions with different Hurst parameters results in a much more involved situation. An extensive finite-sample numerical study is performed to supplement the theoretical findings.
|
Finally, for more information regarding the LIDAR device, the reader is referred to, e.g., @cite_18 and @cite_3 .
|
{
"cite_N": [
"@cite_18",
"@cite_3"
],
"mid": [
"2056627465",
"24480457"
],
"abstract": [
"A deconvolution technique for deriving more resolved signals from lidar signals with typical CO2 laser pulses is proposed, utilizing special matrices constructed from the temporal profile of laser pulses. It is shown that near-range signals can be corrected and small-scale variations of backscattered signals can be retrieved with this technique. Deconvolution errors as a result of noise in lidar data and in the laser pulse profile are also investigated numerically by computer simulation.",
"A main application of lidar remote sensing is to provide spatial resolved data. Based on the fundamental relationship between space and time the distance can be calculated from the photons’ time of flight. Accordingly, the distance resolution is limited by the time resolution of the lidar detector. Furthermore, if the system response function of the lidar is longer than the time resolution interval of the detector, the measured lidar signal is smeared, and the effective distance resolution decreases. In theory, this loss of resolution can be corrected by deconvolution of the measured signal with the system response function. Measured lidar signals are superposed by noise which makes a direct deconvolution impossible because of the effect of noise amplification. In this paper, a technique is presented which allows for a stable deconvolution of lidar signal returns without any filtering in the frequency domain. It is based on the Richardson-Lucy algorithm for image reconstruction. Simulations of short distance lidar signals have been used to compare the method with conventional deconvolution algorithms such as the Fourier transformation."
]
}
|
1308.0625
|
2952929253
|
In this work, we explore the performance of backpressure routing and scheduling for TCP flows over wireless networks. TCP and backpressure are not compatible due to a mismatch between the congestion control mechanism of TCP and the queue size based routing and scheduling of the backpressure framework. We propose a TCP-aware backpressure routing and scheduling that takes into account the behavior of TCP flows. TCP-aware backpressure (i) provides throughput optimality guarantees in the Lyapunov optimization framework, (ii) gracefully combines TCP and backpressure without making any changes to the TCP protocol, (iii) improves the throughput of TCP flows significantly, and (iv) provides fairness across competing TCP flows.
|
Backpressure, a routing and scheduling framework for communication networks @cite_23, @cite_24, has generated considerable research interest @cite_2, mainly in wireless ad-hoc networks. It has also been shown that backpressure can be combined with flow control to provide utility-optimal operation guarantees @cite_14, @cite_28.
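As a rough illustration of the queue-size-based scheduling that the framework prescribes, here is a minimal single-commodity max-weight step in Python; the data structures and the interference model are simplifying assumptions, not the cited formulation:

```python
# A minimal single-commodity max-weight (backpressure) scheduling step,
# assuming queue backlogs per node and interference constraints given as
# feasible activation sets; all names are illustrative, not from the papers.
def backpressure_step(queues, links, activations):
    """queues: dict node -> backlog; links: dict (a, b) -> rate;
    activations: iterable of frozensets of links usable simultaneously."""
    def weight(link):
        a, b = link
        # Differential backlog: serving a link is only useful when it
        # moves packets toward a shorter queue.
        return max(queues[a] - queues[b], 0) * links[link]

    best = max(activations, key=lambda s: sum(weight(l) for l in s))
    for a, b in best:
        if queues[a] > queues[b]:            # serve only positive-weight links
            moved = min(queues[a], links[(a, b)])
            queues[a] -= moved
            queues[b] += moved
    return best

# Tiny example: a three-node relay line with two mutually exclusive links.
queues = {"s": 10, "r": 4, "d": 0}
links = {("s", "r"): 2, ("r", "d"): 2}
activations = [frozenset({("s", "r")}), frozenset({("r", "d")})]
print(backpressure_step(queues, links, activations), queues)
```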
|
{
"cite_N": [
"@cite_14",
"@cite_28",
"@cite_24",
"@cite_23",
"@cite_2"
],
"mid": [
"2120344179",
"",
"2135663206",
"2105177639",
"2137152139"
],
"abstract": [
"We consider optimal control for general networks with both wireless and wireline components and time varying channels. A dynamic strategy is developed to support all traffic whenever possible, and to make optimally fair decisions about which data to serve when inputs exceed network capacity. The strategy is decoupled into separate algorithms for flow control, routing, and resource allocation, and allows each user to make decisions independent of the actions of others. The combined strategy is shown to yield data rates that are arbitrarily close to the optimal operating point achieved when all network controllers are coordinated and have perfect knowledge of future events. The cost of approaching this fair operating point is an end-to-end delay increase for data that is served by the network.",
"",
"Consider N parallel queues competing for the attention of a single server. At each time slot each queue may be connected to the server or not depending on the value of a binary random variable, the connectivity variable. Allocation at each slot; is based on the connectivity information and on the lengths of the connected queues only. At the end of each slot, service may be completed with a given fixed probability. Such a queueing model is appropriate for some communication networks with changing topology. In the case of infinite buffers, necessary and sufficient conditions are obtained for stabilizability of the system in terms of the different system parameters. The allocation policy that serves the longest connected queue stabilizes the system when the stabilizability conditions hold. The same policy minimizes the delay for the special case of symmetric queues. In a system with a single buffer per queue, an allocation policy is obtained that maximizes the throughput and minimizes the delay when the arrival and service statistics of different queues are identical. >",
"The stability of a queueing network with interdependent servers is considered. The dependency among the servers is described by the definition of their subsets that can be activated simultaneously. Multihop radio networks provide a motivation for the consideration of this system. The problem of scheduling the server activation under the constraints imposed by the dependency among servers is studied. The performance criterion of a scheduling policy is its throughput that is characterized by its stability region, that is, the set of vectors of arrival and service rates for which the system is stable. A policy is obtained which is optimal in the sense that its stability region is a superset of the stability region of every other scheduling policy, and this stability region is characterized. The behavior of the network is studied for arrival rates that lie outside the stability region. Implications of the results in certain types of concurrent database and parallel processing systems are discussed. >",
"This text presents a modern theory of analysis, control, and optimization for dynamic networks. Mathematical techniques of Lyapunov drift and Lyapunov optimization are developed and shown to enable constrained optimization of time averages in general stochastic systems. The focus is on communication and queueing systems, including wireless networks with time-varying channels, mobility, and randomly arriving traffic. A simple drift-plus-penalty framework is used to optimize time averages such as throughput, throughput-utility, power, and distortion. Explicit performance-delay tradeoffs are provided to illustrate the cost of approaching optimality. This theory is also applicable to problems in operations research and economics, where energy-efficient and profit-maximizing decisions must be made without knowing the future. Topics in the text include the following: - Queue stability theory - Backpressure, max-weight, and virtual queue methods - Primal-dual methods for non-convex stochastic utility maximization - Universal scheduling theory for arbitrary sample paths - Approximate and randomized scheduling theory - Optimization of renewal systems and Markov decision systems Detailed examples and numerous problem set questions are provided to reinforce the main concepts. Table of Contents: Introduction Introduction to Queues Dynamic Scheduling Example Optimizing Time Averages Optimizing Functions of Time Averages Approximate Scheduling Optimization of Renewal Systems Conclusions"
]
}
|
1308.1031
|
2069857956
|
The ability to process large numbers of continuous data streams in a near-real-time fashion has become a crucial prerequisite for many scientific and industrial use cases in recent years. While the individual data streams are usually trivial to process, their aggregated data volumes easily exceed the scalability of traditional stream processing systems. At the same time, massively-parallel data processing systems like MapReduce or Dryad currently enjoy a tremendous popularity for data-intensive applications and have proven to scale to large numbers of nodes. Many of these systems also provide streaming capabilities. However, unlike traditional stream processors, these systems have disregarded QoS requirements of prospective stream processing applications so far. In this paper we address this gap. First, we analyze common design principles of today's parallel data processing frameworks and identify those principles that provide degrees of freedom in trading off the QoS goals latency and throughput. Second, we propose a highly distributed scheme which allows these frameworks to detect violations of user-defined QoS constraints and optimize the job execution without manual interaction. As a proof of concept, we implemented our approach for our massively-parallel data processing framework Nephele and evaluated its effectiveness through a comparison with Hadoop Online. For an example streaming application from the multimedia domain running on a cluster of 200 nodes, our approach improves the processing latency by a factor of at least 13 while preserving high data throughput when needed.
|
Initially, several centralized systems for stream processing were proposed, such as Aurora @cite_12 and STREAM @cite_13 @cite_11. Aurora is a DBMS for continuous queries that are constructed by connecting a set of predefined operators into a DAG. The stream processing engine schedules the execution of the operators and uses load shedding, i.e., dropping intermediate tuples, to meet QoS goals. At the end points of the graph, user-defined QoS functions are used to specify the desired latency and which tuples can be dropped. STREAM presents additional strategies for applying load shedding, such as probabilistic exclusion of tuples. While these systems have useful properties such as respecting latency requirements, they run on a single host and do not scale well with rising data rates and numbers of data sources.
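A minimal sketch of how such a QoS-driven drop operator might look; the proportional control law and all names are illustrative assumptions, not the systems' actual mechanisms:

```python
# A sketch of a QoS-driven drop operator in the spirit of Aurora/STREAM
# load shedding; control law and names are assumptions for illustration.
import random

class DropOperator:
    def __init__(self, latency_target_ms, gain=0.1):
        self.latency_target_ms = latency_target_ms
        self.gain = gain
        self.drop_prob = 0.0

    def observe(self, measured_latency_ms):
        # Raise the drop probability while the latency target is violated,
        # lower it again once the system has headroom.
        error = (measured_latency_ms - self.latency_target_ms) / self.latency_target_ms
        self.drop_prob = min(max(self.drop_prob + self.gain * error, 0.0), 1.0)

    def process(self, tuples):
        # Probabilistic exclusion: forward only the surviving tuples.
        return [t for t in tuples if random.random() >= self.drop_prob]

shedder = DropOperator(latency_target_ms=50)
shedder.observe(measured_latency_ms=120)      # overloaded: drop_prob rises
print(shedder.drop_prob, len(shedder.process(list(range(1000)))))
```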
|
{
"cite_N": [
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2132520482",
"2149576945",
"1646696449"
],
"abstract": [
"In many recent applications, data may take the form of continuous data streams, rather than finite stored data sets. Several aspects of data management need to be reconsidered in the presence of data streams, offering a new research direction for the database community. In this paper we focus primarily on the problem of query processing, specifically on how to define and evaluate continuous queries over data streams. We address semantic issues as well as efficiency concerns. Our main contributions are threefold. First, we specify a general and flexible architecture for query processing in the presence of data streams. Second, we use our basic architecture as a tool to clarify alternative semantics and processing techniques for continuous queries. The architecture also captures most previous work on continuous queries and data streams, as well as related concepts such as triggers and materialized views. Finally, we map out research topics in the area of query processing over data streams, showing where previous work is relevant and describing problems yet to be addressed.",
"Abstract.This paper describes the basic processing model and architecture of Aurora, a new system to manage data streams for monitoring applications. Monitoring applications differ substantially from conventional business data processing. The fact that a software system must process and react to continual inputs from many sources (e.g., sensors) rather than from human operators requires one to rethink the fundamental architecture of a DBMS for this application area. In this paper, we present Aurora, a new DBMS currently under construction at Brandeis University, Brown University, and M.I.T. We first provide an overview of the basic Aurora model and architecture and then describe in detail a stream-oriented set of operators.",
""
]
}
|
1308.1031
|
2069857956
|
The ability to process large numbers of continuous data streams in a near-real-time fashion has become a crucial prerequisite for many scientific and industrial use cases in recent years. While the individual data streams are usually trivial to process, their aggregated data volumes easily exceed the scalability of traditional stream processing systems. At the same time, massively-parallel data processing systems like MapReduce or Dryad currently enjoy a tremendous popularity for data-intensive applications and have proven to scale to large numbers of nodes. Many of these systems also provide streaming capabilities. However, unlike traditional stream processors, these systems have disregarded QoS requirements of prospective stream processing applications so far. In this paper we address this gap. First, we analyze common design principles of today's parallel data processing frameworks and identify those principles that provide degrees of freedom in trading off the QoS goals latency and throughput. Second, we propose a highly distributed scheme which allows these frameworks to detect violations of user-defined QoS constraints and optimize the job execution without manual interaction. As a proof of concept, we implemented our approach for our massively-parallel data processing framework Nephele and evaluated its effectiveness through a comparison with Hadoop Online. For an example streaming application from the multimedia domain running on a cluster of 200 nodes, our approach improves the processing latency by a factor of at least 13 while preserving high data throughput when needed.
|
Later systems such as Aurora*/Medusa @cite_15 support distributed processing of data streams. An Aurora* system is a set of Aurora nodes that cooperate via an overlay network within the same administrative domain. In Aurora*, the nodes can freely relocate load by decentralized, pairwise exchange of Aurora stream operators. Medusa integrates many participants, such as several sites running Aurora* systems from different administrative domains, into a single federated system. Borealis @cite_5 extends Aurora*/Medusa and introduces, amongst other features, a refined QoS optimization model in which the effects of load shedding on QoS can be computed at every point in the data flow. This enables the optimizer to find better strategies for load shedding.
|
{
"cite_N": [
"@cite_5",
"@cite_15"
],
"mid": [
"2115503987",
"2164128653"
],
"abstract": [
"Borealis is a second-generation distributed stream processing engine that is being developed at Brandeis University, Brown University, and MIT. Borealis inherits core stream processing functionality from Aurora [14] and distribution functionality from Medusa [51]. Borealis modifies and extends both systems in non-trivial and critical ways to provide advanced capabilities that are commonly required by newly-emerging stream processing applications. In this paper, we outline the basic design and functionality of Borealis. Through sample real-world applications, we motivate the need for dynamically revising query results and modifying query specifications. We then describe how Borealis addresses these challenges through an innovative set of features, including revision records, time travel, and control lines. Finally, we present a highly flexible and scalable QoS-based optimization model that operates across server and sensor networks and a new fault-tolerance model with flexible consistency-availability trade-offs.",
"Stream processing fits a large class of new applications for which conventional DBMSs fall short. Because many stream-oriented systems are inherently geographically distributed and because distribution offers scalable load management and higher availability, future stream processing systems will operate in a distributed fashion. They will run across the Internet on computers typically owned by multiple cooperating administrative domains. This paper describes the architectural challenges facing the design of large-scale distributed stream processing systems, and discusses novel approaches for addressing load management, high availability, and federated operation issues. We describe two stream processing systems, Aurora* and Medusa, which are being designed to explore complementary solutions to these challenges. This paper discusses the architectural issues facing the design of large-scale distributed stream processing systems. We begin in Section 2 with a brief description of our centralized stream processing system, Aurora [4]. We then discuss two complementary efforts to extend Aurora to a distributed environment: Aurora* and Medusa. Aurora* assumes an environment in which all nodes fall under a single administrative domain. Medusa provides the infrastructure to support federated operation of nodes across administrative boundaries. After describing the architectures of these two systems in Section 3, we consider three design challenges common to both: infrastructures and protocols supporting communication amongst nodes (Section 4), load sharing in response to variable network conditions (Section 5), and high availability in the presence of failures (Section 6). We also discuss high-level policy specifications employed by the two systems in Section 7. For all of these issues, we believe that the push-based nature of stream-based applications not only raises new challenges but also offers the possibility of new domain-specific solutions."
]
}
|
1308.1031
|
2069857956
|
The ability to process large numbers of continuous data streams in a near-real-time fashion has become a crucial prerequisite for many scientific and industrial use cases in recent years. While the individual data streams are usually trivial to process, their aggregated data volumes easily exceed the scalability of traditional stream processing systems. At the same time, massively-parallel data processing systems like MapReduce or Dryad currently enjoy a tremendous popularity for data-intensive applications and have proven to scale to large numbers of nodes. Many of these systems also provide streaming capabilities. However, unlike traditional stream processors, these systems have disregarded QoS requirements of prospective stream processing applications so far. In this paper we address this gap. First, we analyze common design principles of today's parallel data processing frameworks and identify those principles that provide degrees of freedom in trading off the QoS goals latency and throughput. Second, we propose a highly distributed scheme which allows these frameworks to detect violations of user-defined QoS constraints and optimize the job execution without manual interaction. As a proof of concept, we implemented our approach for our massively-parallel data processing framework Nephele and evaluated its effectiveness through a comparison with Hadoop Online. For an example streaming application from the multimedia domain running on a cluster of 200 nodes, our approach improves the processing latency by a factor of at least 13 while preserving high data throughput when needed.
|
The third category of possible stream processing systems is constituted by massively-parallel data processing systems. In contrast to the previous two categories, these systems have been designed from the outset to run on hundreds or even thousands of nodes and to efficiently transfer large data volumes between them. Traditionally, those systems have been used to process finite blocks of data stored on distributed file systems. However, many of the newer systems such as Dryad @cite_23, Hyracks @cite_7, CIEL @cite_21, or our Nephele framework @cite_3 allow developers to assemble complex parallel data flow graphs and to construct pipelines between the individual parts of the flow. Therefore, these parallel data flow systems are, in general, also suitable for streaming applications.
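The pipelining idea can be illustrated with plain Python generators, where lazy iteration plays the role of a streaming channel between vertices; this is an analogy, not any of these frameworks' APIs:

```python
# Pipelined execution illustrated with Python generators: lazy iteration
# acts as the streaming channel between vertices of the data flow graph.
def source(records):
    for r in records:
        yield r                       # emit records one at a time

def mapper(upstream):
    for r in upstream:
        yield r.upper()               # per-record transformation

def sink(upstream):
    for r in upstream:
        print(r)                      # consumes records as they arrive

# Each stage starts processing before its input is complete, which is what
# makes such engines candidates for streaming workloads.
sink(mapper(source(["a", "b", "c"])))
```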
|
{
"cite_N": [
"@cite_21",
"@cite_3",
"@cite_7",
"@cite_23"
],
"mid": [
"1487337216",
"2006140865",
"2083541728",
"2100830825"
],
"abstract": [
"This paper introduces CIEL, a universal execution engine for distributed data-flow programs. Like previous execution engines, CIEL masks the complexity of distributed programming. Unlike those systems, a CIEL job can make data-dependent control-flow decisions, which enables it to compute iterative and recursive algorithms. We have also developed Skywriting, a Turing-complete scripting language that runs directly on CIEL. The execution engine provides transparent fault tolerance and distribution to Skywriting scripts and high-performance code written in other programming languages. We have deployed CIEL on a cloud computing platform, and demonstrate that it achieves scalable performance for both iterative and non-iterative algorithms.",
"In recent years ad hoc parallel data processing has emerged to be one of the killer applications for Infrastructure-as-a-Service (IaaS) clouds. Major Cloud computing companies have started to integrate frameworks for parallel data processing in their product portfolio, making it easy for customers to access these services and to deploy their programs. However, the processing frameworks which are currently used have been designed for static, homogeneous cluster setups and disregard the particular nature of a cloud. Consequently, the allocated compute resources may be inadequate for big parts of the submitted job and unnecessarily increase processing time and cost. In this paper, we discuss the opportunities and challenges for efficient parallel data processing in clouds and present our research project Nephele. Nephele is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today's IaaS clouds for both, task scheduling and execution. Particular tasks of a processing job can be assigned to different types of virtual machines which are automatically instantiated and terminated during the job execution. Based on this new framework, we perform extended evaluations of MapReduce-inspired processing jobs on an IaaS cloud system and compare the results to the popular data processing framework Hadoop.",
"Hyracks is a new partitioned-parallel software platform designed to run data-intensive computations on large shared-nothing clusters of computers. Hyracks allows users to express a computation as a DAG of data operators and connectors. Operators operate on partitions of input data and produce partitions of output data, while connectors repartition operators' outputs to make the newly produced partitions available at the consuming operators. We describe the Hyracks end user model, for authors of dataflow jobs, and the extension model for users who wish to augment Hyracks' built-in library with new operator and or connector types. We also describe our initial Hyracks implementation. Since Hyracks is in roughly the same space as the open source Hadoop platform, we compare Hyracks with Hadoop experimentally for several different kinds of use cases. The initial results demonstrate that Hyracks has significant promise as a next-generation platform for data-intensive applications.",
"Dryad is a general-purpose distributed execution engine for coarse-grain data-parallel applications. A Dryad application combines computational \"vertices\" with communication \"channels\" to form a dataflow graph. Dryad runs the application by executing the vertices of this graph on a set of available computers, communicating as appropriate through flies, TCP pipes, and shared-memory FIFOs. The vertices provided by the application developer are quite simple and are usually written as sequential programs with no thread creation or locking. Concurrency arises from Dryad scheduling vertices to run simultaneously on multiple computers, or on multiple CPU cores within a computer. The application can discover the size and placement of data at run time, and modify the graph as the computation progresses to make efficient use of the available resources. Dryad is designed to scale from powerful multi-core single computers, through small clusters of computers, to data centers with thousands of computers. The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices."
]
}
|
1308.1031
|
2069857956
|
The ability to process large numbers of continuous data streams in a near-real-time fashion has become a crucial prerequisite for many scientific and industrial use cases in recent years. While the individual data streams are usually trivial to process, their aggregated data volumes easily exceed the scalability of traditional stream processing systems. At the same time, massively-parallel data processing systems like MapReduce or Dryad currently enjoy a tremendous popularity for data-intensive applications and have proven to scale to large numbers of nodes. Many of these systems also provide streaming capabilities. However, unlike traditional stream processors, these systems have disregarded QoS requirements of prospective stream processing applications so far. In this paper we address this gap. First, we analyze common design principles of today's parallel data processing frameworks and identify those principles that provide degrees of freedom in trading off the QoS goals latency and throughput. Second, we propose a highly distributed scheme which allows these frameworks to detect violations of user-defined QoS constraints and optimize the job execution without manual interaction. As a proof of concept, we implemented our approach for our massively-parallel data processing framework Nephele and evaluated its effectiveness through a comparison with Hadoop Online. For an example streaming application from the multimedia domain running on a cluster of 200 nodes, our approach improves the processing latency by a factor of at least 13 while preserving high data throughput when needed.
|
The Muppet system @cite_26 also focuses on the parallel processing of continuous stream data while preserving a MapReduce-like programming abstraction. However, the authors decided to replace the reduce function with a more generic update function to allow for greater flexibility when processing intermediate data with identical keys. Muppet also aims to support near-real-time processing latencies. Unfortunately, the paper provides only a few details on how data is actually passed between tasks (and hosts). We assume, however, that the system uses a communication scheme unlike the one explained in Section principalFrameworkProperties.
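A minimal sketch of the MapUpdate abstraction as we understand it from the paper; the exact function signatures are assumptions for illustration:

```python
# A sketch of MapUpdate as described for Muppet: map() emits keyed events
# and update() folds each event into live per-key state; signatures are
# assumptions, not the system's actual API.
def map_fn(record):
    for word in record.split():
        yield word, 1                 # keyed event

def update_fn(key, event, state):
    # Unlike reduce(), update() sees events one by one and keeps state,
    # so results are available continuously instead of after a final pass.
    state[key] = state.get(key, 0) + event
    return state

state = {}
for record in ["to be or not to be"]:
    for key, event in map_fn(record):
        state = update_fn(key, event, state)
print(state)                          # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```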
|
{
"cite_N": [
"@cite_26"
],
"mid": [
"2024621423"
],
"abstract": [
"MapReduce has emerged as a popular method to process big data. In the past few years, however, not just big data, but fast data has also exploded in volume and availability. Examples of such data include sensor data streams, the Twitter Firehose, and Facebook updates. Numerous applications must process fast data. Can we provide a MapReduce-style framework so that developers can quickly write such applications and execute them over a cluster of machines, to achieve low latency and high scalability? In this paper we report on our investigation of this question, as carried out at Kosmix and WalmartLabs. We describe MapUpdate, a framework like MapReduce, but specifically developed for fast data. We describe Muppet, our implementation of MapUpdate. Throughout the description we highlight the key challenges, argue why MapReduce is not well suited to address them, and briefly describe our current solutions. Finally, we describe our experience and lessons learned with Muppet, which has been used extensively at Kosmix and WalmartLabs to power a broad range of applications in social media and e-commerce."
]
}
|
1308.0656
|
1581216890
|
Implementing even a conceptually simple web application requires an inordinate amount of time. FORWARD addresses three problems that reduce developer productivity: (a) Impedance mismatch across the multiple languages used at different tiers of the application architecture. (b) Distributed data access across the multiple data sources of the application (SQL database, user input of the browser page, session data in the application server, etc). (c) Asynchronous, incremental modification of the pages, as performed by Ajax actions. FORWARD belongs to a novel family of web application frameworks that attack impedance mismatch by offering a single unifying language. FORWARD's language is SQL++, a minimally extended SQL. FORWARD's architecture is based on two novel cornerstones: (a) A Unified Application State (UAS), which is a virtual database over the multiple data sources. The UAS is accessed via distributed SQL++ queries, therefore resolving the distributed data access problem. (b) Declarative page specifications, which treat the data displayed by pages as rendered SQL++ page queries. The resulting pages are automatically incrementally modified by FORWARD. User input on the page becomes part of the UAS. We show that SQL++ captures the semi-structured nature of web pages and subsumes the data models of two important data sources of the UAS: SQL databases and JavaScript components. We show that simple markup is sufficient for creating Ajax displays and for modeling user input on the page as UAS data sources. Finally, we discuss the page specification syntax and semantics that are needed in order to avoid race conditions and conflicts between the user input and the automated Ajax page modifications. FORWARD has been used in the development of eight commercial and academic applications. An alpha-release web-based IDE (itself built in FORWARD) enables development in the cloud.
|
Flapjax @cite_4 is a language that compiles into JavaScript, and provides automatic incremental modification of pages through functional reactive programming (FRP). The language offers primitives for event streams and behaviors, which allows the developer to specify pages that are reactive. Since values are automatically updated when their data dependencies change, developers do not need to provide code for incremental modification of pages. Flapjax's reactive semantics also apply when integrating JavaScript components from third-party libraries. However, since Flapjax is browser-centric, it is orthogonal to incremental computation in the server and database tiers.
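The core behavior abstraction can be sketched in a few lines of Python; this toy model only mimics Flapjax's automatic dependency tracking and is not its actual API:

```python
# A toy model of the behavior abstraction behind FRP systems such as
# Flapjax: a value that tracks its data dependencies and pushes updates
# downstream automatically.
class Behavior:
    def __init__(self, value):
        self._value = value
        self._listeners = []

    def get(self):
        return self._value

    def set(self, value):
        self._value = value
        for listener in self._listeners:
            listener()                # propagate along the dataflow edges

def lift(fn, *inputs):
    # Derived behavior that recomputes whenever any input changes.
    out = Behavior(fn(*(b.get() for b in inputs)))
    recompute = lambda: out.set(fn(*(b.get() for b in inputs)))
    for b in inputs:
        b._listeners.append(recompute)
    return out

price, qty = Behavior(10), Behavior(3)
total = lift(lambda p, q: p * q, price, qty)
qty.set(5)
print(total.get())                    # 50, with no explicit handler code
```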
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2171267342"
],
"abstract": [
"This paper presents Flapjax, a language designed for contemporary Web applications. These applications communicate with servers and have rich, interactive interfaces. Flapjax provides two key features that simplify writing these applications. First, it provides event streams, a uniform abstraction for communication within a program as well as with external Web services. Second, the language itself is reactive: it automatically tracks data dependencies and propagates updates along those dataflows. This allows developers to write reactive interfaces in a declarative and compositional style. Flapjax is built on top of JavaScript. It runs on unmodified browsers and readily interoperates with existing JavaScript code. It is usable as either a programming language (that is compiled to JavaScript) or as a JavaScript library, and is designed for both uses. This paper presents the language, its design decisions, and illustrative examples drawn from several working Flapjax applications."
]
}
|
1308.1224
|
1813221039
|
In this work, a benchmark to evaluate the retrieval performance of soundtrack recommendation systems is proposed. Such systems aim at finding songs that are played as background music for a given set of images. The proposed benchmark is based on preference judgments, where relevance is considered a continuous ordinal variable and judgments are collected for pairs of songs with respect to a query (i.e., set of images). To capture a wide variety of songs and images, we use a large space of possible music genres, different emotions expressed through music, and various query-image themes. The benchmark consists of two types of relevance assessments: (i) judgments obtained from a user study, that serve as a "gold standard" for (ii) relevance judgments gathered through Amazon's Mechanical Turk. We report on an analysis of relevance judgments based on different levels of user agreement and investigate the performance of two state-of-the-art soundtrack recommendation systems using the proposed benchmark.
|
For information retrieval tasks, Thomas and Hawking @cite_16 use pairwise comparisons to compare systems in real settings, where interactive retrieval is used in a specific context over ever-changing, heterogeneous data. They show that click-through data correlates highly with perceived preference judgments. @cite_12 employ pairwise comparisons via Amazon's Mechanical Turk @cite_15 to measure the correlation between user preference for text retrieval results and the effectiveness measures computed from a test collection. Their study shows that Normalized Discounted Cumulative Gain (NDCG) @cite_8 is the measure that correlates best with perceived quality. Preference judgments between blocks of results are used by @cite_40 to evaluate aggregated search results; there, the small number of such blocks enabled the collection of preferences between all pairs of blocks. A suitable effectiveness measure in that case is the distance between the ranking produced by the system and a reference ranking created from the all-pair preferences. In our setting, the huge number of possible pairs prohibits an exhaustive evaluation, so a quality measure based directly on pairwise comparisons is more appropriate than one relying on a reference ranking.
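For reference, a small sketch of NDCG in its common graded-gain form; whether the cited study used exactly this variant is an assumption:

```python
# NDCG in the common graded-gain formulation.
import math

def dcg(relevances):
    # Higher grades gain more; later ranks are discounted logarithmically.
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances))

def ndcg(relevances):
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Graded judgments (0-3) in ranked order; the ideal order is [3, 2, 1, 0].
print(round(ndcg([2, 3, 0, 1]), 3))   # ~0.835
```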
|
{
"cite_N": [
"@cite_8",
"@cite_40",
"@cite_15",
"@cite_16",
"@cite_12"
],
"mid": [
"2069870183",
"1586685689",
"",
"2133968343",
"2116008435"
],
"abstract": [
"Modern large retrieval environments tend to overwhelm their users by their large output. Since all documents are not of equal relevance to their users, highly relevant documents should be identified and ranked first for presentation. In order to develop IR techniques in this direction, it is necessary to develop evaluation approaches and methods that credit IR methods for their ability to retrieve highly relevant documents. This can be done by extending traditional evaluation methods, that is, recall and precision based on binary relevance judgments, to graded relevance judgments. Alternatively, novel measures based on graded relevance judgments may be developed. This article proposes several novel measures that compute the cumulative gain the user obtains by examining the retrieval result up to a given ranked position. The first one accumulates the relevance scores of retrieved documents along the ranked result list. The second one is similar but applies a discount factor to the relevance scores in order to devaluate late-retrieved documents. The third one computes the relative-to-the-ideal performance of IR techniques, based on the cumulative gain they are able to yield. These novel measures are defined and discussed and their use is demonstrated in a case study using TREC data: sample system run results for 20 queries in TREC-7. As a relevance base we used novel graded relevance judgments on a four-point scale. The test results indicate that the proposed measures credit IR methods for their ability to retrieve highly relevant documents and allow testing of statistical significance of effectiveness differences. The graphs based on the measures also provide insight into the performance IR techniques and allow interpretation, for example, from the user point of view.",
"Aggregated search is the task of incorporating results from different specialized search services, or verticals, into Web search results. While most prior work focuses on deciding which verticals to present, the task of deciding where in the Web results to embed the vertical results has received less attention. We propose a methodology for evaluating an aggregated set of results. Our method elicits a relatively small number of human judgements for a given query and then uses these to facilitate a metric-based evaluation of any possible presentation for the query. An extensive user study with 13 verticals confirms that, when users prefer one presentation of results over another, our metric agrees with the stated preference. By using Amazon's Mechanical Turk, we show that reliable assessments can be obtained quickly and inexpensively.",
"",
"Familiar evaluation methodologies for information retrieval (IR) are not well suited to the task of comparing systems in many real settings. These systems and evaluation methods must support contextual, interactive retrieval over changing, heterogeneous data collections, including private and confidential information.We have implemented a comparison tool which can be inserted into the natural IR process. It provides a familiar search interface, presents a small number of result sets in side-by-side panels, elicits searcher judgments, and logs interaction events. The tool permits study of real information needs as they occur, uses the documents actually available at the time of the search, and records judgments taking into account the instantaneous needs of the searcher.We have validated our proposed evaluation approach and explored potential biases by comparing different whole-of-Web search facilities using a Web-based version of the tool. In four experiments, one with supplied queries in the laboratory and three with real queries in the workplace, subjects showed no discernable left-right bias and were able to reliably distinguish between high- and low-quality result sets. We found that judgments were strongly predicted by simple implicit measures.Following validation we undertook a case study comparing two leading whole-of-Web search engines. The approach is now being used in several ongoing investigations.",
"This paper presents results comparing user preference for search engine rankings with measures of effectiveness computed from a test collection. It establishes that preferences and evaluation measures correlate: systems measured as better on a test collection are preferred by users. This correlation is established for both \"conventional web retrieval\" and for retrieval that emphasizes diverse results. The nDCG measure is found to correlate best with user preferences compared to a selection of other well known measures. Unlike previous studies in this area, this examination involved a large population of users, gathered through crowd sourcing, exposed to a wide range of retrieval systems, test collections and search tasks. Reasons for user preferences were also gathered and analyzed. The work revealed a number of new results, but also showed that there is much scope for future work refining effectiveness measures to better capture user preferences."
]
}
|