| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1204.6691
|
1500216327
|
Cloud computing is revolutionizing the ICT landscape by providing scalable and efficient computing resources on demand. The ICT industry, and data centers in particular, is responsible for considerable amounts of CO2 emissions and will very soon be faced with legislative restrictions, such as the Kyoto protocol, defining caps at different organizational levels (country, industry branch, etc.). A lot has been done around energy-efficient data centers, yet very little work has been done on defining flexible models considering CO2. In this paper we present a first attempt at modeling data centers in compliance with the Kyoto protocol. We discuss a novel approach for trading credits for emission reductions across data centers to comply with their constraints. CO2 caps can be integrated with Service Level Agreements and juxtaposed to other computing commodities (e.g. computational power, storage), setting a foundation for implementing next-generation schedulers and pricing models that support Kyoto-compliant CO2 trading schemes.
|
In @cite_16 the HGreen heuristic is proposed to schedule batch jobs on the greenest resource first, based on prior energy-efficiency benchmarking of all the nodes; it does not, however, address how to optimize a job once it is allocated to a node, i.e., how much of the node's resources the job is allowed to consume. A similar multiple-node-oriented scheduling algorithm is presented in @cite_9 .
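The greenest-resource-first idea can be sketched as a greedy pairing of heavy tasks with energy-efficient nodes. This is a hedged toy illustration only: the task weights, node names, and round-robin tie-handling below are our assumptions, not HGreen itself.

```python
# Toy sketch of a "heavier tasks on greenest resource first" heuristic.
# Resource names and efficiency numbers are invented for illustration.
def hgreen_assign(tasks, resources):
    """tasks: {name: load}; resources: {name: energy_per_unit_load}.
    Returns {task: resource}, pairing heavier tasks with greener resources."""
    by_weight = sorted(tasks, key=tasks.get, reverse=True)  # heaviest first
    by_green = sorted(resources, key=resources.get)         # cheapest energy first
    # Round-robin over resources so every node stays usable.
    return {t: by_green[i % len(by_green)] for i, t in enumerate(by_weight)}

assignment = hgreen_assign(
    {"t1": 5.0, "t2": 9.0, "t3": 2.0},
    {"nodeA": 1.2, "nodeB": 0.7},  # nodeB is the "greenest"
)
```

The heaviest task lands on the greenest node, mirroring the heuristic's priority order.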
|
{
"cite_N": [
"@cite_9",
"@cite_16"
],
"mid": [
"2080019938",
"2047307548"
],
"abstract": [
"Currently, more and more vendors such as Amazon, Google, IBM and Microsoft are dedicated to developing their cloud platforms for increasing large-scale data and more complex software systems. The cloud computing technique is rapidly changing the computing environment for various applications. However, a large number of cloud servers consume massive energy and produce huge pollution. The Smart2020 analysis shows that cloud-based computing data center and the telecommunication network will generate emission about 7 and 5 each year in 2002 and 2020, respectively. This paper aims to develop a new green algorithm that can help multiple CPUs in the cloud network not only complete the tasks before a deadline, but also greatly reduce the energy consumption. Our new green algorithm with human intelligence can effectively make task assignments via partial task shuffling and adjust the cloud servers' speeds through smart task allocation under the time constraint. Sufficient simulation results indicate that the new green algorithm with intelligent strategies is effective compared with a traditional method and another new method. In the future, we will apply both ancient and modern human's intelligent strategies to improve green optimization algorithms.",
"Grid computing represents the main solution to integrate distributed and heterogeneous resources in global scale. However, the infrastructure necessary for maintaining a global grid in production is huge. Such fact has led to excessive power consumption. On the other hand, most green strategies for data centers are DVS (Dynamic Voltage Scaling)-based and become difficult to implement them in global grids. This paper proposes the HGreen heuristic (Heavier Tasks on Maximum Green Resource) and defines a workflow scheduling algorithm in order to implement it on global grids. HGreen algorithm aims to prioritize energy-efficient resources and explores workflow application profiles. Simulation results have shown that the proposed algorithm can significantly reduce the power consumption in global grids."
]
}
|
1204.6691
|
1500216327
|
Cloud computing is revolutionizing the ICT landscape by providing scalable and efficient computing resources on demand. The ICT industry, and data centers in particular, is responsible for considerable amounts of CO2 emissions and will very soon be faced with legislative restrictions, such as the Kyoto protocol, defining caps at different organizational levels (country, industry branch, etc.). A lot has been done around energy-efficient data centers, yet very little work has been done on defining flexible models considering CO2. In this paper we present a first attempt at modeling data centers in compliance with the Kyoto protocol. We discuss a novel approach for trading credits for emission reductions across data centers to comply with their constraints. CO2 caps can be integrated with Service Level Agreements and juxtaposed to other computing commodities (e.g. computational power, storage), setting a foundation for implementing next-generation schedulers and pricing models that support Kyoto-compliant CO2 trading schemes.
|
The work described in @cite_14 has the most similarities with ours, since it also balances SLA and energy constraints and even describes energy consumption using a similar linear model motivated by dynamic voltage scaling; however, the model itself does not account for management operations.
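The linear, DVS-motivated energy model alluded to above is commonly written as P(u) = P_idle + (P_max - P_idle) · u for CPU utilization u in [0, 1]. A minimal sketch, with illustrative wattage constants that are not taken from the cited paper:

```python
# Linear power model: idle power plus a utilization-proportional term.
# The constants 170 W / 250 W are invented example values.
def power_watts(u, p_idle=170.0, p_max=250.0):
    return p_idle + (p_max - p_idle) * u

def energy_joules(utilization_trace, dt_seconds=1.0):
    # Energy is power integrated over time (here: a simple left Riemann sum).
    return sum(power_watts(u) * dt_seconds for u in utilization_trace)

e = energy_joules([0.0, 0.5, 1.0])  # three one-second samples
```

Consolidation heuristics built on this model trade the energy saved by packing VMs onto fewer hosts against the SLA risk of running hosts at high utilization.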
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2003604090"
],
"abstract": [
"The rapid growth in demand for computational power driven by modern service applications combined with the shift to the Cloud computing model have led to the establishment of large-scale virtualized data centers. Such data centers consume enormous amounts of electrical energy resulting in high operating costs and carbon dioxide emissions. Dynamic consolidation of virtual machines (VMs) and switching idle nodes off allow Cloud providers to optimize resource usage and reduce energy consumption. However, the obligation of providing high quality of service to customers leads to the necessity in dealing with the energy-performance trade-off. We propose a novel technique for dynamic consolidation of VMs based on adaptive utilization thresholds, which ensures a high level of meeting the Service Level Agreements (SLA). We validate the high efficiency of the proposed technique across different kinds of workloads using workload traces from more than a thousand PlanetLab servers."
]
}
|
1204.6691
|
1500216327
|
Cloud computing is revolutionizing the ICT landscape by providing scalable and efficient computing resources on demand. The ICT industry, and data centers in particular, is responsible for considerable amounts of CO2 emissions and will very soon be faced with legislative restrictions, such as the Kyoto protocol, defining caps at different organizational levels (country, industry branch, etc.). A lot has been done around energy-efficient data centers, yet very little work has been done on defining flexible models considering CO2. In this paper we present a first attempt at modeling data centers in compliance with the Kyoto protocol. We discuss a novel approach for trading credits for emission reductions across data centers to comply with their constraints. CO2 caps can be integrated with Service Level Agreements and juxtaposed to other computing commodities (e.g. computational power, storage), setting a foundation for implementing next-generation schedulers and pricing models that support Kyoto-compliant CO2 trading schemes.
|
A good overview of cloud computing and sustainability is given in @cite_21 , with explanations of where cloud computing stands in regard to emissions. Green scheduling policies are proposed that, if accepted by the user, could greatly increase the efficiency of cloud computing and reduce emissions. Reducing emissions is not treated as a source of profit or as a way to offset SLA violations, though, but rather as a general guideline for running the data center below a certain threshold.
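The kind of carbon-aware policy described in @cite_21 ranks data centers by site-dependent factors such as carbon emission rate and power efficiency. A hedged toy illustration (the selection rule, site names, and numbers are invented; the real policies weigh several factors jointly):

```python
# Toy carbon-aware placement: send a job to the data center with the
# lowest estimated emissions per job. All values are illustrative.
def emissions_per_job(dc):
    # kWh per job scaled by site carbon intensity (kg CO2 per kWh).
    return dc["kwh_per_job"] * dc["carbon_intensity"]

def pick_data_center(data_centers):
    return min(data_centers, key=emissions_per_job)

sites = [
    {"name": "coal_site", "kwh_per_job": 1.0, "carbon_intensity": 0.9},
    {"name": "hydro_site", "kwh_per_job": 1.2, "carbon_intensity": 0.1},
]
best = pick_data_center(sites)
```

Note that the less power-efficient site can still win when its electricity is much cleaner, which is exactly the cross-data-center heterogeneity the cited policies exploit.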
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2099787804"
],
"abstract": [
"The use of High Performance Computing (HPC) in commercial and consumer IT applications is becoming popular. HPC users need the ability to gain rapid and scalable access to high-end computing capabilities. Cloud computing promises to deliver such a computing infrastructure using data centers so that HPC users can access applications and data from a Cloud anywhere in the world on demand and pay based on what they use. However, the growing demand drastically increases the energy consumption of data centers, which has become a critical issue. High energy consumption not only translates to high energy cost which will reduce the profit margin of Cloud providers, but also high carbon emissions which are not environmentally sustainable. Hence, there is an urgent need for energy-efficient solutions that can address the high increase in the energy consumption from the perspective of not only the Cloud provider, but also from the environment. To address this issue, we propose near-optimal scheduling policies that exploit heterogeneity across multiple data centers for a Cloud provider. We consider a number of energy efficiency factors (such as energy cost, carbon emission rate, workload, and CPU power efficiency) which change across different data centers depending on their location, architectural design, and management system. Our carbon energy based scheduling policies are able to achieve on average up to 25 of energy savings in comparison to profit based scheduling policies leading to higher profit and less carbon emissions."
]
}
|
1204.6615
|
2260503838
|
We announce a tool for mapping derivations of the E theorem prover to Mizar proofs. Our mapping complements earlier work that generates problems for automated theorem provers from Mizar inference checking problems. We describe the tool, explain the mapping, and show how we solved some of the difficulties that arise in mapping proofs between different logical formalisms, even when they are based on the same notion of logical consequence, as Mizar and E are (namely, first-order classical logic with identity).
|
An export and cross-verification of proofs by ATPs has been carried out in @cite_16 . Such work is the inverse of ours, because it goes from proofs to ATP problems.
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"1789208025"
],
"abstract": [
"This paper is intended to be a practical reference manual for basic Mizar terminology which may be helpful to get started using the system. The paper describes most important aspects of the Mizar language as well as some features of the verification software."
]
}
|
1204.6615
|
2260503838
|
We announce a tool for mapping derivations of the E theorem prover to Mizar proofs. Our mapping complements earlier work that generates problems for automated theorem provers from Mizar inference checking problems. We describe the tool, explain the mapping, and show how we solved some of the difficulties that arise in mapping proofs between different logical formalisms, even when they are based on the same notion of logical consequence, as Mizar and E are (namely, first-order classical logic with identity).
|
We do not intend to enter into a discussion of the proof identity problem; for a discussion, see Došen @cite_15 . Certainly the intention behind the mapping is to preserve whatever abstract proof is expressed by the derivation. That the derivation and the text generated from it are isomorphic will be clarified by the discussion of the translation algorithm below. Mappings such as the one discussed here can contribute to a concrete investigation of the proof identity problem, which in fact motivates the project reported here. The reader need not share the author's interest in the proof identity problem to understand what follows.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2068169489"
],
"abstract": [
"Some thirty years ago, two proposals were made concerning criteria for identity of proofs. Prawitz proposed to analyze identity of proofs in terms of the equivalence relation based on reduction to normal form in natural deduction. Lambek worked on a normalization proposal analogous to Prawitz's, based on reduction to cut-free form in sequent systems, but he also suggested understanding identity of proofs in terms of an equivalence relation based on generality, two derivations having the same generality if after generalizing maximally the rules involved in them they yield the same premises and conclusions up to a renaming of variables. These two proposals proved to be extensionally equivalent only for limited fragments of logic. The normalization proposal stands behind very successful applications of the typed lambda calculus and of category theory in the proof theory of intuitionistic logic. In classical logic, however, it did not fare well. The generality proposal was rather neglected in logic, though related matters were much studied in pure category theory in connection with coherence problems, and there are also links to low-dimensional topology and linear algebra. This proposal seems more promising than the other one for the general proof theory of classical logic. §1. General proof theory. When a question is not in the main stream of mathematical investigations, and is also of a certain general, conceptual, kind, it runs the danger of being dismissed as \"philosophical\". Before the advent of recursion theory, many mathematicians would presumably have dismissed the question \"What is a computable function?\" as a philosophical question (or perhaps even as a psychological, empirical, question). It required something like the enthusiasm of a young discipline on the rise, which logic was in the first half of the twentieth century, for such a question to be embraced as legitimate, and seriously treated by mathematical means, with excellent results.
An outsider might suppose that the question \"What is a proof?\" should be important for a field called proof theory, and then he would be surprised to find that this and related questions, one of which will occupy us here, ..."
]
}
|
1204.6509
|
1797760194
|
We introduce in this paper a new way of optimizing the natural extension of the quantization error using in k-means clustering to dissimilarity data. The proposed method is based on hierarchical clustering analysis combined with multi-level heuristic refinement. The method is computationally efficient and achieves better quantization errors than the
|
Multi-level heuristics originate from graph clustering, for which they give some of the best results (see @cite_2 for the minimum-cut partitioning problem, and @cite_6 for the detection of communities in a network). MLR has also been considered for dissimilarity data in @cite_5 as a way of improving HCA results. In that paper, the authors extract a @math -nearest-neighbour graph from the dissimilarity matrix. They apply a standard HCA on the graph using a variant of the average linkage criterion. Then they apply a multi-level refinement approach to the hierarchical clustering using an error measure close to the quantity @math used here. Our proposal differs in using the full dissimilarity matrix, in relying on the standard quantization error for dissimilarity data in all phases of the algorithm, and in leveraging Müllner's efficient HCA.
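The quantization error for dissimilarity data that both approaches optimize can be sketched concretely. Under the assumption that D holds squared Euclidean distances, the per-cluster pairwise-sum form below coincides with the k-means within-cluster sum of squares; the function name and the example data are ours, not the paper's.

```python
# Quantization error for dissimilarity data: for each cluster C, sum the
# pairwise dissimilarities over ordered pairs and divide by 2|C|.
def quantization_error(D, clusters):
    """D: symmetric dissimilarity matrix (list of lists, D[i][i] = 0);
    clusters: list of lists of point indices (a partition)."""
    total = 0.0
    for c in clusters:
        if not c:
            continue
        pair_sum = sum(D[i][j] for i in c for j in c)  # ordered pairs
        total += pair_sum / (2.0 * len(c))
    return total

# Points on a line at 0, 1, 10, 11; D holds squared distances.
pts = [0.0, 1.0, 10.0, 11.0]
D = [[(a - b) ** 2 for b in pts] for a in pts]
good = quantization_error(D, [[0, 1], [2, 3]])  # natural grouping
bad = quantization_error(D, [[0, 2], [1, 3]])   # points mixed across gaps
```

A multi-level refinement pass would repeatedly move single points between clusters whenever the move lowers this error.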
|
{
"cite_N": [
"@cite_5",
"@cite_6",
"@cite_2"
],
"mid": [
"",
"2952040885",
"2118953734"
],
"abstract": [
"",
"Modularity is one of the most widely used quality measures for graph clusterings. Maximizing modularity is NP-hard, and the runtime of exact algorithms is prohibitive for large graphs. A simple and effective class of heuristics coarsens the graph by iteratively merging clusters (starting from singletons), and optionally refines the resulting clustering by iteratively moving individual vertices between clusters. Several heuristics of this type have been proposed in the literature, but little is known about their relative performance. This paper experimentally compares existing and new coarsening- and refinement-based heuristics with respect to their effectiveness (achieved modularity) and efficiency (runtime). Concerning coarsening, it turns out that the most widely used criterion for merging clusters (modularity increase) is outperformed by other simple criteria, and that a recent algorithm by Schuetz and Caflisch is no improvement over simple greedy coarsening for these criteria. Concerning refinement, a new multi-level algorithm is shown to produce significantly better clusterings than conventional single-level algorithms. A comparison with published benchmark results and algorithm implementations shows that combinations of coarsening and multi-level refinement are competitive with the best algorithms in the literature.",
"The graph partitioning problem is that of dividing the vertices of a graph into sets of specified sizes such that few edges cross between sets. This NP-complete problem arises in many important scientific and engineering problems. Prominent examples include the decomposition of data structures for parallel computation, the placement of circuit elements and the ordering of sparse matrix computations. We present a multilevel algorithm for graph partitioning in which the graph is approximated by a sequence of increasingly smaller graphs. The smallest graph is then partitioned using a spectral method, and this partition is propagated back through the hierarchy of graphs. A variant of the Kernighan-Lin algorithm is applied periodically to refine the partition. The entire algorithm can be implemented to execute in time proportional to the size of the original graph. Experiments indicate that, relative to other advanced methods, the multilevel algorithm produces high quality partitions at low cost."
]
}
|
1204.6216
|
1981784948
|
We introduce the heat method for computing the geodesic distance to a specified subset (e.g., point or curve) of a given domain. The heat method is robust, efficient, and simple to implement since it is based on solving a pair of standard linear elliptic problems. The resulting systems can be prefactored once and subsequently solved in near-linear time. In practice, distance is updated an order of magnitude faster than with state-of-the-art methods, while maintaining a comparable level of accuracy. The method requires only standard differential operators and can hence be applied on a wide variety of domains (grids, triangle meshes, point clouds, etc.). We provide numerical evidence that the method converges to the exact distance in the limit of refinement; we also explore smoothed approximations of distance suitable for applications where greater regularity is required.
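The pair of elliptic solves the abstract describes can be sketched in one dimension. This is our toy discretization of the three steps (short-time heat flow, gradient normalization, Poisson solve), not the paper's mesh operators; the time step is generous because only the sign of the gradient is used here.

```python
import numpy as np

# 1D heat-method sketch on [0, 1] with the source at x = 0.
n = 101
h = 1.0 / (n - 1)
# Graph Laplacian of the path (weights 1/h), acting like d^2/dx^2 up to scale.
L = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            L[i, j] += 1.0 / h
            L[i, i] -= 1.0 / h

# Step 1: one backward-Euler heat step, (I - tL) u = delta_source.
t = 1.0  # generous; in 1D only the sign of grad(u) matters below
u0 = np.zeros(n); u0[0] = 1.0
u = np.linalg.solve(np.eye(n) - t * L, u0)

# Step 2: unit vector field X on edges, pointing away from the source.
grad = (u[1:] - u[:-1]) / h
X = -np.sign(grad)

# Step 3: solve the Poisson problem L phi = div X, pinning phi[0] = 0.
div = np.zeros(n)
div[:-1] += X  # flux out of the left vertex of each edge
div[1:] -= X   # flux into the right vertex
A = L.copy(); b = div.copy()
A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0
phi = np.linalg.solve(A, b)
```

On this 1D toy the recovered phi is exactly the distance x from the source; on surfaces, both systems can be prefactored once and reused across sources, which is the efficiency claim above.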
|
The prevailing approach to distance computation is to solve the eikonal equation \( |\nabla \phi| = 1 \) subject to boundary conditions \( \phi|_\gamma = 0 \) over some subset \( \gamma \) of the domain. This formulation is hyperbolic and nonlinear, making it difficult to solve directly. Typically one applies an iterative relaxation scheme such as Gauss-Seidel; special update orders are known as fast marching and fast sweeping, which are some of the most popular algorithms for distance computation on regular grids @cite_20 and triangulated surfaces @cite_16 . These algorithms can also be used on implicit surfaces @cite_15 , point clouds @cite_25 , and polygon soup @cite_29 , but only indirectly: distance is computed on a simplicial mesh or regular grid that approximates the original domain. Implementation of fast marching on simplicial grids is challenging due to the need for nonobtuse triangulations (which are notoriously difficult to obtain) or else a complex unfolding procedure to preserve monotonicity of the solution; moreover these issues are not well-studied in dimensions greater than two. Fast marching and fast sweeping have asymptotic complexity of \( O(n \log n) \) and \( O(n) \), respectively, but sweeping is often slower due to the large number of sweeps required to obtain accurate results @cite_12 .
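The Gauss-Seidel relaxation with alternating sweep orders can be made concrete with a minimal fast-sweeping solver for \( |\nabla \phi| = 1 \) on a regular 2D grid. This is an illustrative baseline using the standard upwind Godunov update, not code from any of the cited papers.

```python
import numpy as np

# Minimal fast sweeping for the eikonal equation on a regular grid:
# four alternating sweep orders, upwind Godunov update at each cell.
def fast_sweep(source, shape, h=1.0, n_passes=4):
    big = 1e10
    phi = np.full(shape, big)
    phi[source] = 0.0
    rows, cols = shape
    orders = [(range(rows), range(cols)),
              (range(rows), range(cols - 1, -1, -1)),
              (range(rows - 1, -1, -1), range(cols)),
              (range(rows - 1, -1, -1), range(cols - 1, -1, -1))]
    for _ in range(n_passes):
        for ri, ci in orders:
            for i in ri:
                for j in ci:
                    if (i, j) == source:
                        continue
                    a = min(phi[i - 1, j] if i > 0 else big,
                            phi[i + 1, j] if i < rows - 1 else big)
                    b = min(phi[i, j - 1] if j > 0 else big,
                            phi[i, j + 1] if j < cols - 1 else big)
                    if abs(a - b) >= h:       # one-sided (causal) update
                        cand = min(a, b) + h
                    else:                     # two-sided quadratic update
                        cand = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                    phi[i, j] = min(phi[i, j], cand)
    return phi

phi = fast_sweep((0, 0), (8, 8))
```

Along grid axes the solution is exact; away from them the upwind scheme incurs the discretization error the paragraph alludes to, and accuracy improves with more sweeps.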
|
{
"cite_N": [
"@cite_29",
"@cite_12",
"@cite_15",
"@cite_16",
"@cite_25",
"@cite_20"
],
"mid": [
"1966352289",
"45207152",
"1996204208",
"1912799504",
"2035553724",
"1238092070"
],
"abstract": [
"Efficient methods to compute intrinsic distances and geodesic paths have been presented for various types of surface representations, most importantly polygon meshes. These meshes are usually assumed to be well-structured and manifold. In practice, however, they often contain defects like holes, gaps, degeneracies, non-manifold configurations – or they might even be just a soup of polygons. The task of repairing these defects is computationally complex and in many cases exhibits various ambiguities demanding tedious manual efforts. We present a computational framework that enables the computation of meaningful approximate intrinsic distances and geodesic paths on raw meshes in a way which is tolerant to such defects. Holes and gaps are bridged up to a user-specified tolerance threshold such that distances can be computed plausibly even across multiple connected components of inconsistent meshes. Further, we show ways to locally parameterize a surface based on geodesic distance fields, easily facilitating the application of textures and decals on raw meshes. We do all this without explicitly repairing the input, thereby avoiding the costly additional efforts. In order to enable broad applicability we provide details on two implementation variants, one optimized for performance, the other optimized for memory efficiency. Using the presented framework many applications can readily be extended to deal with imperfect meshes. Since we abstract from the input applicability is not even limited to meshes, other representations can be handled as well.",
"This paper presents a study of the computational efficiency, that is accuracy versus computational effort, for solving the Eikonal equation on quadrilateral grids. The algorithms that are benchmarked against each other for computations of distance functions are the following: the fast marching method, the fast sweeping method, an algebraic Newton method, and also a \"brute force\" approach. Some comments are also made on the solution of the Eikonal equation via reformulation to a hyperbolic PDE. The results of the benchmark clearly indicate that the fast marching method is the preferred algorithm due to both accuracy and computational speed in our tested context.",
"An algorithm for the computationally optimal construction of intrinsic weighted distance functions on implicit hyper-surfaces is introduced in this paper. The basic idea is to approximate the intrinsic weighted distance by the Euclidean weighted distance computed in a band surrounding the implicit hyper-surface in the embedding space, thereby performing all the computations in a Cartesian grid with classical and efficient numerics. Based on work on geodesics on Riemannian manifolds with boundaries, we bound the error between the two distance functions. We show that this error is of the same order as the theoretical numerical error in computationally optimal, Hamilton-Jacobi-based, algorithms for computing distance functions in Cartesian grids. Therefore, we can use these algorithms, modified to deal with spaces with boundaries, and obtain also for the case of intrinsic distance functions on implicit hyper-surfaces a computationally efficient technique. The approach can be extended to solve a more general class of Hamilton-Jacobi equations defined on the implicit surface, following the same idea of approximating their solutions by the solutions in the embedding Euclidean space. The framework here introduced thereby allows for the computations to be performed on a Cartesian grid with computationally optimal algorithms, in spite of the fact that the distance and Hamilton-Jacobi equations are intrinsic to the implicit hyper-surface. © 2001 Academic Press",
"In this paper, we propose fast and accurate algorithms to remesh and flatten a genus-0 triangulated manifold. These methods naturally fits into a framework for 3D geometry modeling and processing that uses only fast geodesic computations. These techniques are gathered and extended from classical areas such as image processing or statistical perceptual learning. Using the Fast Marching algorithm, we are able to recast these powerful tools in the language of mesh processing. Thanks to some classical geodesic-based building blocks, we are able to derive a flattening method that exhibit a conservation of local structures of the surface. On large meshes (more than 500 000 vertices), our techniques speed up computation by over one order of magnitude in comparison to classical remeshing and parameterization methods. Our methods are easy to implement and do not need multilevel solvers to handle complex models that may contain poorly shaped triangles.",
"A theoretical and computational framework for computing intrinsic distance functions and geodesics on submanifolds of @math given by point clouds is introduced and developed in this paper. The basic idea is that, as shown here, intrinsic distance functions and geodesics on general co-dimension submanifolds of @math can be accurately approximated by extrinsic Euclidean ones computed inside a thin offset band surrounding the manifold. This permits the use of computationally optimal algorithms for computing distance functions in Cartesian grids. We use these algorithms, modified to deal with spaces with boundaries, and obtain a computationally optimal approach also for the case of intrinsic distance functions on submanifolds of @math . For point clouds, the offset band is constructed without the need to explicitly find the underlying manifold, thereby computing intrinsic distance functions and geodesics on point clouds while skipping the manifold reconstruction step. The case of poin...",
"Introduction 1. Formulations of interface propagation Part I. Theory and Algorithms: 2. Theory of curve and surface evolution 3. Hamilton-Jacobi equations and associated theory 4. Numerical approximations: first attempt 5. Numerical schemes for hyperbolic conservation laws 6. Algorithms for the initial and boundary value formulations 7. Efficient schemes: adaptivity 8. Triangulated versions of level set and fast marching method: extensions and variations 9. Tests of basic methods Part II. Applications: 10. Geometry 11. Grid generation 12 Image denoising 13. Computer vision: shape detection and recognition 14. Fluid mechanics and materials sciences: adding physics 15. Computational geometry and computer-aided-design 16. First arrivals, optimizations, and control 17. Applications to semi-conductor manufacturing 18. Comments, conclusions, future directions References Index."
]
}
|
1204.6216
|
1981784948
|
We introduce the heat method for computing the geodesic distance to a specified subset (e.g., point or curve) of a given domain. The heat method is robust, efficient, and simple to implement since it is based on solving a pair of standard linear elliptic problems. The resulting systems can be prefactored once and subsequently solved in near-linear time. In practice, distance is updated an order of magnitude faster than with state-of-the-art methods, while maintaining a comparable level of accuracy. The method requires only standard differential operators and can hence be applied on a wide variety of domains (grids, triangle meshes, point clouds, etc.). We provide numerical evidence that the method converges to the exact distance in the limit of refinement; we also explore smoothed approximations of distance suitable for applications where greater regularity is required.
|
In a different development, Mitchell et al. give an \( O(n^2 \log n) \) algorithm for computing the exact polyhedral distance from a single source to all other vertices of a triangulated surface. Surazhsky et al. demonstrate that this algorithm tends to run in sub-quadratic time in practice, and present an approximate \( O(n \log n) \) version of the algorithm with guaranteed error bounds; Bommes and Kobbelt extend the algorithm to polygonal sources. Similar to fast marching, these algorithms propagate distance information in wavefront order using a priority queue, again making them difficult to parallelize. More importantly, the amortized cost of these algorithms (over many different source subsets \( \gamma \)) is substantially greater than for the heat method, since they do not reuse information from one subset to the next. Finally, although @cite_9 greatly simplifies the original formulation, these algorithms remain challenging to implement and do not immediately generalize to domains other than triangle meshes.
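The "wavefront order using a priority queue" that these methods share is easiest to see in plain Dijkstra over the edge graph of a mesh, the simplest member of the family (it measures path length along edges only, so it overestimates true geodesic distance). A hedged sketch:

```python
import heapq

# Dijkstra on a mesh edge graph: priority-queue wavefront propagation.
# Included only to illustrate the update order shared by fast marching
# and the MMP-style algorithms; it is not a geodesic algorithm itself.
def dijkstra(adj, source):
    """adj: {vertex: [(neighbor, edge_length), ...]}; returns distance dict."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue  # stale queue entry
        for w, length in adj[v]:
            nd = d + length
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return dist

# Edge graph of a unit square with one diagonal (vertices 0..3).
square = {
    0: [(1, 1.0), (3, 1.0), (2, 2 ** 0.5)],
    1: [(0, 1.0), (2, 1.0)],
    2: [(1, 1.0), (3, 1.0), (0, 2 ** 0.5)],
    3: [(0, 1.0), (2, 1.0)],
}
d = dijkstra(square, 0)
```

Because each vertex is finalized in increasing-distance order, the computation is inherently sequential, which is exactly the parallelization difficulty noted above.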
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2161253909"
],
"abstract": [
"The computation of geodesic paths and distances on triangle meshes is a common operation in many computer graphics applications. We present several practical algorithms for computing such geodesics from a source point to one or all other points efficiently. First, we describe an implementation of the exact \"single source, all destination\" algorithm presented by Mitchell, Mount, and Papadimitriou (MMP). We show that the algorithm runs much faster in practice than suggested by worst case analysis. Next, we extend the algorithm with a merging operation to obtain computationally efficient and accurate approximations with bounded error. Finally, to compute the shortest path between two given points, we use a lower-bound property of our approximate geodesic algorithm to efficiently prune the frontier of the MMP algorithm. thereby obtaining an exact solution even more quickly."
]
}
|
1204.5853
|
1483246560
|
Simultaneous embedding is concerned with simultaneously representing a series of graphs sharing some or all vertices. This forms the basis for the visualization of dynamic graphs and thus is an important field of research. Recently there has been a great deal of work investigating simultaneous embedding problems both from a theoretical and a practical point of view. We survey recent work on this topic.
|
An interesting additional restriction to SGEs was considered by @cite_34 , combining SGE with the RAC drawing convention (RAC: Right Angle Crossing). They try to find an SGE such that crossings between exclusive edges of different graphs are restricted to right-angle crossings, and they consider only the case where the edge sets of both graphs are disjoint. They present one negative and one positive result for this problem. The negative result consists of a wheel and a cycle not admitting an SGE with right-angle crossings. On the other hand, they show the existence of such a drawing on a small integer grid for the case that one of the graphs is a path or a cycle and the other is a matching. Moreover, they give a linear-time algorithm to compute such a drawing.
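The RAC condition itself is a simple geometric predicate: two properly crossing straight-line segments cross at a right angle exactly when their direction vectors are orthogonal. A small sketch (the segment endpoints are invented):

```python
# Right-angle-crossing test for straight-line segments.
def cross(o, a, b):
    # 2D cross product of (a - o) and (b - o).
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_properly_cross(p1, p2, q1, q2):
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)  # endpoints strictly on opposite sides

def is_rac_crossing(p1, p2, q1, q2, eps=1e-9):
    if not segments_properly_cross(p1, p2, q1, q2):
        return False
    ux, uy = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = q2[0] - q1[0], q2[1] - q1[1]
    return abs(ux * vx + uy * vy) < eps  # orthogonal directions

ok = is_rac_crossing((-1, 0), (1, 0), (0, -1), (0, 1))    # perpendicular crossing
skew = is_rac_crossing((-1, 0), (1, 0), (-1, -1), (1, 1)) # 45-degree crossing
```

A drawing satisfies the RAC simultaneous-embedding condition when every crossing between exclusive edges of the two graphs passes this test.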
|
{
"cite_N": [
"@cite_34"
],
"mid": [
"2107611155"
],
"abstract": [
"In this paper, we study the geometric RAC simultaneous drawing problem: Given two planar graphs that share a common vertex set but have disjoint edge sets, a geometric RAC simultaneous drawing is a straight-line drawing in which (i) each graph is drawn planar, (ii) there are no edge overlaps, and, (iii) crossings between edges of the two graphs occur at right-angles. We first prove that two planar graphs admitting a geometric simultaneous drawing may not admit a geometric RAC simultaneous drawing. We further show that a cycle and a matching always admit a geometric RAC simultaneous drawing, which can be constructed in linear time."
]
}
|
1204.5853
|
1483246560
|
Simultaneous embedding is concerned with simultaneously representing a series of graphs sharing some or all vertices. This forms the basis for the visualization of dynamic graphs and thus is an important field of research. Recently there has been a great deal of work investigating simultaneous embedding problems both from a theoretical and a practical point of view. We survey recent work on this topic.
|
A result by @cite_7 that does not really fit into any of the three classes above considers the restricted case of SEFE where each edge has to be a sequence of horizontal and vertical segments with at most one bend per edge. They show that two graphs with maximum degree 2 always admit such an SEFE on a grid of size @math by adapting their linear-time algorithm that computes an SGE for these types of graphs (on a larger grid).
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2949400914"
],
"abstract": [
"We prove that the geometric thickness of graphs whose maximum degree is no more than four is two. All of our algorithms run in O(n) time, where n is the number of vertices in the graph. In our proofs, we present an embedding algorithm for graphs with maximum degree three that uses an n x n grid and a more complex algorithm for embedding a graph with maximum degree four. We also show a variation using orthogonal edges for maximum degree four graphs that also uses an n x n grid. The results have implications in graph theory, graph drawing, and VLSI design."
]
}
|
1204.5853
|
1483246560
|
Simultaneous embedding is concerned with simultaneously representing a series of graphs sharing some or all vertices. This forms the basis for the visualization of dynamic graphs and thus is an important field of research. Recently there has been a great deal of work investigating simultaneous embedding problems both from a theoretical and a practical point of view. We survey recent work on this topic.
|
@cite_3 consider the case where the embedding of each of the input graphs is already fixed. With this restriction, SEFE becomes trivial for two graphs, since it only remains to test whether the two graphs induce the same embedding on the common graph. They show that the problem can also be decided efficiently for three graphs, but that it becomes NP-hard for fourteen or more graphs. They also consider SGE with a fixed embedding for each graph and show that it is NP-hard for thirteen or more graphs.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"81810990"
],
"abstract": [
"Given k planar graphs G1,…,Gk, deciding whether they admit a simultaneous embedding with fixed edges (Sefe ) and whether they admit a simultaneous geometric embedding (Sge ) are NP-hard problems, for k≥3 and for k≥2, respectively. In this paper we consider the complexity of Sefe and of Sge when the graphs G1,…,Gk have a fixed planar embedding. In sharp contrast with the NP-hardness of Sefe for three non-embedded graphs, we show that Sefe is polynomial-time solvable for three graphs with a fixed planar embedding. Furthermore, we show that, given k embedded planar graphs G1,…,Gk, deciding whether a Sefe of G1,…,Gk exists and deciding whether an Sge of G1,…,Gk exists are NP-hard problems, for k≥14 and k≥13, respectively."
]
}
|
1204.5853
|
1483246560
|
Simultaneous embedding is concerned with simultaneously representing a series of graphs sharing some or all vertices. This forms the basis for the visualization of dynamic graphs and thus is an important field of research. Recently there has been a great deal of work investigating simultaneous embedding problems both from a theoretical and a practical point of view. We survey recent work on this topic.
|
Schaefer @cite_75 shows that several other notions of planarity are related to SEFE . In particular, the well-studied cluster planarity problem reduces to SEFE , providing further incentive to study its complexity.
|
{
"cite_N": [
"@cite_75"
],
"mid": [
"1598502370"
],
"abstract": [
"We study Hanani-Tutte style theorems for various notions of planarity, including partially embedded planarity, and simultaneous planarity. This approach brings together the combinatorial, computational and algebraic aspects of planarity notions and may serve as a uniform foundation for planarity, as suggested in the writings of Tutte and Wu."
]
}
|
1204.6049
|
1992292413
|
This paper investigates the effect of sub-Nyquist sampling upon the capacity of an analog channel. The channel is assumed to be a linear time-invariant Gaussian channel, where perfect channel knowledge is available at both the transmitter and the receiver. We consider a general class of right-invertible time-preserving sampling methods which includes irregular nonuniform sampling, and characterize in closed form the channel capacity achievable by this class of sampling methods, under a sampling rate and power constraint. Our results indicate that the optimal sampling structures extract out the set of frequencies that exhibits the highest signal-to-noise ratio among all spectral sets of measure equal to the sampling rate. This can be attained through filterbank sampling with uniform sampling grid employed at each branch with possibly different rates, or through a single branch of modulation and filtering followed by uniform sampling. These results reveal that for a large class of channels, employing irregular nonuniform sampling sets, which are typically complicated to realize in practice, does not provide capacity gain over uniform sampling sets with appropriate preprocessing. Our findings demonstrate that aliasing or scrambling of spectral components does not provide capacity gain in this scenario, which is in contrast to the benefits obtained from random mixing in spectrum-blind compressive sampling schemes.
|
Shannon introduced and derived the information-theoretic metric of channel capacity for time-invariant analog waveform channels @cite_32 , which established the optimality of water-filling power allocation based on the signal-to-noise ratio (SNR) over the spectral domain @cite_20 @cite_10 . A key idea in determining analog channel capacity is to convert the continuous-time channel into a set of parallel discrete-time channels based on the Shannon-Nyquist sampling theorem @cite_46 . This paradigm was employed, for example, by Medard et al. to bound the maximum mutual information in time-varying channels @cite_40 @cite_23 , and was used by Forney et al. to investigate coding and modulation for Gaussian channels @cite_5 . Most of these results focus on the analog channel capacity commensurate with uniform sampling at or above the Nyquist rate associated with the channel bandwidth. Another line of work characterizes the effect of oversampling with quantization on information rates @cite_26 @cite_47 . In practice, however, hardware and power limitations may preclude sampling at the Nyquist rate for a wideband communication system.
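To make the water-filling idea concrete, here is a minimal numerical sketch (an illustration of the classical result, not code from the cited works): given parallel discrete-time Gaussian subchannels with noise levels N_i and a total power budget P, the capacity-achieving allocation is P_i = max(mu - N_i, 0), with the water level mu chosen so the allocations sum to P.

```python
import numpy as np

def water_filling(noise, total_power, tol=1e-9):
    """Allocate power P_i = max(mu - N_i, 0) with sum(P_i) = total_power."""
    lo, hi = 0.0, float(np.max(noise)) + total_power  # mu lies in [lo, hi]
    while hi - lo > tol:  # bisection on the water level mu
        mu = (lo + hi) / 2
        if np.sum(np.maximum(mu - noise, 0.0)) > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum((lo + hi) / 2 - noise, 0.0)

p = water_filling(np.array([1.0, 2.0, 4.0]), total_power=3.0)
# p is approximately [2., 1., 0.]: the noisiest subchannel gets no power
```

The same principle, applied over the spectral domain, yields the frequency-domain water-filling referenced above.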
|
{
"cite_N": [
"@cite_47",
"@cite_26",
"@cite_32",
"@cite_40",
"@cite_23",
"@cite_5",
"@cite_46",
"@cite_10",
"@cite_20"
],
"mid": [
"2104274980",
"2103898286",
"2041905826",
"2120686080",
"2152471841",
"2103604544",
"2130917709",
"1983491942",
"2142901448"
],
"abstract": [
"A sequence of binary (±1) random variables is generated by sampling a hard-limited version of a bandlimited process at n times the Nyquist rate. The information rate ℐ carried by these binary samples is investigated. It is shown by constructing a specific nonstationary, bounded, bandlimited process (the real zeros of which are independent and identically distributed, isolated, and lying in different Nyquist intervals) that ℐ = log₂(n+1) bits per Nyquist interval is achievable. A more complicated construction in which L distinct zeros are placed in L consecutive Nyquist intervals yields achievable rates that approach (for L→∞) ℐ arbitrarily closely, where ℐ = log₂ n + (n−1)log₂[n/(n−1)], n ≥ 2 (and ℐ = 1 for n=1 and L=1). By exploiting the constraints imposed on the autocorrelation function of a stationary sign (bilevel) process with a given average transition rate, the latter expression is shown also to yield an upper bound on the achievable values of ℐ. The logarithmic behavior with n (n ≫ 1) is due to the high correlation between the oversampled binary samples, and it is established that this trend is also achievable with stationary sign processes. This model may be used to gain insight into the effect of finite resolution on the information (in Shannon's sense) conveyed by the sign of a bandlimited process, and also to assess the limiting performance of certain oversampling-based communication systems.",
"A noiseless ideal low-pass filter, followed by a limiter, is normally used as a binary data channel by sampling the output once per Nyquist interval. Detectors that sample more often encounter intersymbol interference, but can be used in ways that increase the information rate. A signaling system that achieves an information rate of 1.072 b/Nyquist interval by allowing the detector to sample at times a half Nyquist interval apart is presented. This rate increase is small, but it shows that, in principle, oversampling does permit faster rates. Another signaling system that more closely resembles a digital data system is also presented; it has rate 1.050. By allowing a more irregular pattern of sampling times, the rate is increased to 1.090 (or 1.062 for the 'digital' system).",
"",
"We present a model for time-varying communication single-access and multiple-access channels without feedback. We consider the difference between mutual information when the receiver knows the channel perfectly and mutual information when the receiver only has an estimate of the channel. We relate the variance of the channel measurement error at the receiver to upper and lower bounds for this difference in mutual information. We illustrate the use of our bounds on a channel modeled by a Gauss-Markov process, measured by a pilot tone. We relate the rate of time variation of the channel to the loss in mutual information due to imperfect knowledge of the measured channel.",
"We show that very large bandwidths on fading multipath channels cannot be effectively utilized by spread-spectrum systems that (in a particular sense) spread the available power uniformly over both time and frequency. The approach is to express the input process as an expansion in an orthonormal set of functions each localized in time and frequency. The fourth moment of each coefficient in this expansion is then uniformly constrained. We show that such a constraint forces the mutual information to 0 inversely with increasing bandwidth. Simply constraining the second moment of these coefficients does not achieve this effect. The results suggest strongly that conventional direct-sequence code-division multiple-access (CDMA) systems do not scale well to extremely large bandwidths. To illustrate how the interplay between channel estimation and symbol detection affects capacity, we present results for a specific channel and CDMA signaling scheme.",
"Shannon's determination of the capacity of the linear Gaussian channel has posed a magnificent challenge to succeeding generations of researchers. This paper surveys how this challenge has been met during the past half century. Orthogonal minimum-bandwidth modulation techniques and channel capacity are discussed. Binary coding techniques for low-signal-to-noise ratio (SNR) channels and nonbinary coding techniques for high-SNR channels are reviewed. Recent developments, which now allow capacity to be approached on any linear Gaussian channel, are surveyed. These new capacity-approaching techniques include turbo coding and decoding, multilevel coding, and combined coding precoding for intersymbol-interference channels.",
"This paper is concerned with various aspects of the characterization of randomly time-variant linear channels. At the outset it is demonstrated that time-varying linear channels (or filters) may be characterized in an interesting symmetrical manner in time and frequency variables by arranging system functions in (timefrequency) dual pairs. Following this a statistical characterization of randomly time-variant linear channels is carried out in terms of correlation functions for the various system functions. These results are specialized by considering three classes of practically interesting channels. These are the wide-sense stationary (WSS) channel, the uncorrelated scattering (US) channel, and the wide-sense stationary uncorrelated scattering (WSSUS) channel. The WSS and US channels are shown to be (time-frequency) duals. Previous discussions of channel correlation functions and their relationships have dealt exclusively with the WSSUS channel. The point of view presented here of dealing with the dually related system functions and starting with the unrestricted linear channels is considerably more general and places in proper perspective previous results on the WSSUS channel. Some attention is given to the problem of characterizing radio channels. A model called the Quasi-WSSUS channel is presented to model the behavior of such channels. All real-life channels and signals have an essentially finite number of degrees of freedom due to restrictions on time duration and bandwidth. This fact may be used to derive useful canonical channel models with the aid of sampling theorems and power series expansions. Several new canonical channel models are derived in this paper, some of which are dual to those of Kailath.",
"The discrete-time Gaussian channel with intersymbol interference (ISI) where the inputs are subject to a per symbol average-energy constraint is considered. The capacity of this channel is derived by means of a hypothetical channel model called the N-circular Gaussian channel (NCGC), whose capacity is readily derived using the theory of the discrete Fourier transform. The results obtained for the NCGC are used further to prove that, in the limit of increasing block length N, the capacity of the discrete-time Gaussian channel (DTGC) with ISI using a per block average-energy input constraint (N-block DTGC) is indeed also the capacity when using the per symbol average-energy constraint.",
"Communication Systems and Information Theory. A Measure of Information. Coding for Discrete Sources. Discrete Memoryless Channels and Capacity. The Noisy-Channel Coding Theorem. Techniques for Coding and Decoding. Memoryless Channels with Discrete Time. Waveform Channels. Source Coding with a Fidelity Criterion. Index."
]
}
|
1204.6049
|
1992292413
|
This paper investigates the effect of sub-Nyquist sampling upon the capacity of an analog channel. The channel is assumed to be a linear time-invariant Gaussian channel, where perfect channel knowledge is available at both the transmitter and the receiver. We consider a general class of right-invertible time-preserving sampling methods which includes irregular nonuniform sampling, and characterize in closed form the channel capacity achievable by this class of sampling methods, under a sampling rate and power constraint. Our results indicate that the optimal sampling structures extract out the set of frequencies that exhibits the highest signal-to-noise ratio among all spectral sets of measure equal to the sampling rate. This can be attained through filterbank sampling with uniform sampling grid employed at each branch with possibly different rates, or through a single branch of modulation and filtering followed by uniform sampling. These results reveal that for a large class of channels, employing irregular nonuniform sampling sets, which are typically complicated to realize in practice, does not provide capacity gain over uniform sampling sets with appropriate preprocessing. Our findings demonstrate that aliasing or scrambling of spectral components does not provide capacity gain in this scenario, which is in contrast to the benefits obtained from random mixing in spectrum-blind compressive sampling schemes.
|
More general irregular sampling methods beyond pointwise uniform sampling have been extensively studied in the sampling literature, e.g. @cite_18 @cite_7 @cite_8 . One example is sampling on non-periodic quasicrystal sets, which has been shown to be stable for bandlimited signals @cite_12 @cite_21 . These sampling approaches are of interest in realistic situations where signals can only be sampled on a nonuniformly spaced set due to constraints imposed by data acquisition devices. Many sophisticated reconstruction algorithms have been developed for the class of bandlimited signals or, more generally, the class of shift-invariant signals @cite_6 @cite_27 @cite_18 . For all these nonuniform sampling methods, the Nyquist sampling rate is necessary for perfect recovery of bandlimited signals @cite_42 @cite_41 @cite_18 .
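As a toy illustration of recovery from nonuniform samples (a direct least-squares fit, not the iterative frame algorithms of the cited works), consider a periodic bandlimited signal, i.e. a trigonometric polynomial with 2K+1 unknown coefficients. Sufficiently many distinct sample locations, however irregular, determine the coefficients exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3                                    # bandlimit: frequencies -K..K
c = rng.standard_normal(2*K + 1) + 1j * rng.standard_normal(2*K + 1)

t = np.sort(rng.uniform(0.0, 1.0, 25))   # 25 irregular sample times in [0, 1)
A = np.exp(2j * np.pi * np.outer(t, np.arange(-K, K + 1)))
y = A @ c                                # samples of f(t) = sum_k c_k e^{2*pi*i*k*t}

c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares recovery of c
```

With 25 samples and only 7 unknowns the system is overdetermined and (for distinct random sample times) full rank, so `c_hat` matches `c` to numerical precision; the average sampling density still exceeds the Nyquist rate 2K+1, consistent with the necessity results cited above.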
|
{
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_41",
"@cite_21",
"@cite_42",
"@cite_6",
"@cite_27",
"@cite_12"
],
"mid": [
"2004421506",
"1486874193",
"2135486871",
"2084052726",
"2963984177",
"161734136",
"2122735658",
"48974879",
"2001834269"
],
"abstract": [
"This article discusses modern techniques for nonuniform sampling and reconstruction of functions in shift-invariant spaces. It is a survey as well as a research paper and provides a unified framework for uniform and nonuniform sampling and reconstruction in shift-invariant subspaces by bringing together wavelet theory, frame theory, reproducing kernel Hilbert spaces, approximation theory, amalgam spaces, and sampling. Inspired by applications taken from communication, astronomy, and medicine, the following aspects will be emphasized: (a) The sampling problem is well defined within the setting of shift-invariant spaces. (b) The general theory works in arbitrary dimension and for a broad class of generators. (c) The reconstruction of a function from any sufficiently dense nonuniform sampling set is obtained by efficient iterative algorithms. These algorithms converge geometrically and are robust in the presence of noise. (d) To model the natural decay conditions of real signals and images, the sampling theory is developed in weighted L p-spaces.",
"Bases in Banach Spaces - Schauder Bases Schauder's Basis for C[a,b] Orthonormal Bases in Hilbert Space The Reproducing Kernel Complete Sequences The Coefficient Functionals Duality Riesz Bases The Stability of Bases in Banach Spaces The Stability of Orthonormal Bases in Hilbert Space Entire Functions of Exponential Type The Classical Factorization Theorems - Weierstrass's Factorization Theorem Jensen's Formula Functions of Finite Order Estimates for Canonical Products Hadamard's Factorization Theorem Restrictions Along a Line - The \"Phragmen-Lindelof\" Method Carleman's Formula Integrability on a line The Paley-Wiener Theorem The Paley-Wiener Space The Completeness of Sets of Complex Exponentials - The Trigonometric System Exponentials Close to the Trigonometric System A Counterexample Some Intrinsic Properties of Sets of Complex Exponentials Stability Density and the Completeness Radius Interpolation and Bases in Hilbert Space - Moment Sequences in Hilbert Space Bessel Sequences and Riesz-Fischer Sequences Applications to Systems of Complex Exponentials The Moment Space and Its Relation to Equivalent Sequences Interpolation in the Paley-Wiener Space: Functions of Sine Type Interpolation in the Paley-Wiener Space: Stability The Theory of Frames The Stability of Nonharmonic Fourier Series Pointwise Convergence Notes and Comments References List of Special Symbols Index",
"This paper introduces a filterbank interpretation of various sampling strategies, which leads to efficient interpolation and reconstruction methods. An identity, which is referred to as the interpolation identity, is developed and is used to obtain particularly efficient discrete-time systems for interpolation of generalized samples as well as a class of nonuniform samples, to uniform Nyquist samples, either for further processing in that form or for conversion to continuous time. The interpolation identity also leads to new sampling strategies including an extension of Papoulis' (1977) generalized sampling expansion.",
"It has been almost thirty years since Shannon introduced the sampling theorem to communications theory. In this review paper we will attempt to present the various contributions made for the sampling theorems with the necessary mathematical details to make it self-contained. We will begin by a clear statement of Shannon's sampling theorem followed by its applied interpretation for time-invariant systems. Then we will review its origin as Whittaker's interpolation series. The extensions will include sampling for functions of more than one variable, random processes, nonuniform sampling, nonband-limited functions, implicit sampling, generalized functions (distributions), sampling with the function and its derivatives as suggested by Shannon in his original paper, and sampling for general integral transforms. Also the conditions on the functions to be sampled will be summarized. The error analysis of the various sampling expansions, including specific error bounds for the truncation, aliasing, jitter and parts of various other errors will be discussed and summarized. This paper will be concluded by searching the different recent applications of the sampling theorems in other fields, besides communications theory. These include optics, crystallography, time-varying systems, boundary value problems, spline approximation, special functions, and the Fourier and other discrete transforms.",
"",
"Apparatus in a crop roll forming machine for improving the core formation of the crop rolls, reducing the amount of particles of crop material lost, and collecting particles of crop material lost from either the crop package or loose crop material during the roll formation process comprising stripping surfaces and an elongated tailgate with a collection pan. The particles are recycled from the collection pan back into the roll forming region by the cooperative interaction of the tailgate collection pan and the upper bale forming means as the bale forming means traverses a predetermined path imparting rotative motion to the crop material delivered to the roll forming region.",
"A constructive solution of the irregular sampling problem for band- limited functions is given. We show how a band-limited function can be com- pletely reconstructed from any random sampling set whose density is higher than the Nyquist rate, and give precise estimates for the speed of convergence of this iteration method. Variations of this algorithm allow for irregular sampling with derivatives, reconstruction of band-limited functions from local averages, and irregular sampling of multivariate band-limited functions. In the irregular sampling problem one is asked whether and how a band- limited function can be completely reconstructed from its irregularly sam- pled values f(xi). This has many applications in signal and image processing, seismology, meteorology, medical imaging, etc. Finding constructive solutions of this problem has received considerable attention among mathematicians and engineers. The mathematical literature provides several uniqueness results (1, 2, 17, 18, 19). It is now part of the folklore that for stable sampling the sampling rate must be at least the Nyquist rate (18). These results, as deep as they are, have had little impact for the applied sciences, because they were not constructive. If the sampling set is just a perturbation of the regular oversampling, then a reconstruction method has been obtained in a seminal paper by Duffin and Schaeffer (6) (see also (29)): if for some L > 0, a > 0, and o > 0 the sampling points xk , k e Z , satisfy (a) - ok a, k ^ I, then the norm equivalence A iR (x) ) with w < n o. This norm equivalence implies that it is possible to reconstruct through an iterative procedure, the so-called frame method. Most of the later work on constructive methods consists of variations of this method (3, 21, 22, 26). The above conditions on the sampling set exclude random irregular sampling sets, e.g., sets with regions of higher sampling density. 
A partial, but undesirable remedy, to handle highly irregular sampling sets, would be to force the above conditions by throwing away information on part of the points and accept a very slow convergence of the iteration.",
"This chapter presents methods for the reconstruction of bandlimited functions from irregular samples. The first part discusses algorithms for a constructive solution of the irregular sampling problem. An important aspect of these algorithms is the explicit knowledge of the constants involved and efficient error estimates. The second part discusses the numerical implementation of these algorithms and compares the performance of various reconstruction methods. Although most of the material has already appeared in print, several new results are included: (a) a new method to estimate frame bounds, (b) a reconstruction of band-limited functions from partial information, which is not just the samples; (c) a new result on the complete reconstruction of band-limited functions from local averages; (d) a systematic exposition of recent experimental results.",
"Abstract Irregular sampling and “stable sampling” of band-limited functions have been studied by H.J. Landau [H.J. Landau, Necessary density conditions for sampling and interpolation of certain entire functions, Acta Math. 117 (1967) 37–52]. We prove that quasicrystals are sets of stable sampling. To cite this article: B. Matei, Y. Meyer, C. R. Acad. Sci. Paris, Ser. I 346 (2008)."
]
}
|
1204.6049
|
1992292413
|
This paper investigates the effect of sub-Nyquist sampling upon the capacity of an analog channel. The channel is assumed to be a linear time-invariant Gaussian channel, where perfect channel knowledge is available at both the transmitter and the receiver. We consider a general class of right-invertible time-preserving sampling methods which includes irregular nonuniform sampling, and characterize in closed form the channel capacity achievable by this class of sampling methods, under a sampling rate and power constraint. Our results indicate that the optimal sampling structures extract out the set of frequencies that exhibits the highest signal-to-noise ratio among all spectral sets of measure equal to the sampling rate. This can be attained through filterbank sampling with uniform sampling grid employed at each branch with possibly different rates, or through a single branch of modulation and filtering followed by uniform sampling. These results reveal that for a large class of channels, employing irregular nonuniform sampling sets, which are typically complicated to realize in practice, does not provide capacity gain over uniform sampling sets with appropriate preprocessing. Our findings demonstrate that aliasing or scrambling of spectral components does not provide capacity gain in this scenario, which is in contrast to the benefits obtained from random mixing in spectrum-blind compressive sampling schemes.
|
However, for signals with certain structure, the Nyquist sampling rate may exceed what is required for perfect signal reconstruction from the samples @cite_28 @cite_17 . For example, consider multiband signals, whose spectral contents reside within several subbands over a wide spectrum. If the spectral support is known, then the necessary sampling rate for multiband signals is their spectral occupancy, termed the Landau rate @cite_29 . Such signals admit perfect recovery when sampled at rates approaching the Landau rate, provided that the sampling sets are appropriately chosen (e.g. @cite_30 @cite_9 ). One type of sampling mechanism that can reconstruct multiband signals sampled at the Landau rate is a filter bank followed by sampling, studied in @cite_15 @cite_3 @cite_36 . Inspired by recent "compressive sensing" ideas @cite_37 @cite_39 @cite_25 , spectrum-blind sub-Nyquist sampling for multiband signals with random modulation has been developed @cite_13 as well.
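The gap between the two rates is easy to quantify. As a small sketch (the band edges are made-up numbers, and occupancy is counted one-sided here; a real signal's two-sided support doubles both figures), the Landau rate of a multiband signal with known disjoint bands is simply the total measure of its spectral support, while the Nyquist rate is set by the highest band edge:

```python
def landau_rate(bands):
    """Total spectral occupancy of disjoint bands given as (f_lo, f_hi) pairs."""
    return sum(hi - lo for lo, hi in bands)

# Hypothetical support: 1 kHz near DC plus 1 kHz between 5 and 6 kHz.
bands = [(0.0, 1.0e3), (5.0e3, 6.0e3)]
occupancy = landau_rate(bands)            # 2000.0 Hz of occupancy
nyquist = 2 * max(hi for _, hi in bands)  # 12000.0 Hz from the top band edge
```

Here sampling near the occupancy (2 kHz) rather than the Nyquist rate (12 kHz) is the sixfold saving that the multi-coset and filter-bank schemes cited above exploit.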
|
{
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_28",
"@cite_36",
"@cite_9",
"@cite_29",
"@cite_3",
"@cite_39",
"@cite_15",
"@cite_13",
"@cite_25",
"@cite_17"
],
"mid": [
"2141341167",
"2145096794",
"2021567584",
"2114129195",
"2061158652",
"1996692478",
"2112502349",
"",
"2102207765",
"2123629701",
"340244495",
"1906155581"
],
"abstract": [
"We examine the question of reconstruction of signals from periodic nonuniform samples. This involves discarding samples from a uniformly sampled signal in some periodic fashion. We give a characterization of the signals that can be reconstructed at exactly the minimum rate once a nonuniform sampling pattern has been fixed. We give an implicit characterization of the reconstruction system, and a design method by which the ideal reconstruction filters may be approximated. We demonstrate that for certain spectral supports the minimum rate can be approached or achieved using reconstruction schemes of much lower complexity than those arrived at by using spectral slicing, as in earlier work. Previous work on multiband signals have typically been those for which restrictive assumptions on the sizes and positions of the bands have been made, or where the minimum rate was approached asymptotically. We show that the class of multiband signals which can be reconstructed exactly is shown to be far larger than previously considered. When approaching the minimum rate, this freedom allows us, in certain cases to have a far less complex reconstruction system.",
"This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ)δ(t−τ) obeying |T| ≤ C_M·(log N)^{-1}·|Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1−O(N^{-M}), f can be reconstructed exactly as the solution to the ℓ₁ minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1−O(N^{-M}) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of f.",
"Shannon’s sampling theorem is one of the most powerful results in signal analysis. The aim of this overview is to show that one of its roots is a basic paper of de la Vallee Poussin of 1908. The historical development of sampling theory from 1908 to the present, especially the matter dealing with not necessarily band-limited functions (which includes the duration-limited case actually studied in 1908), is sketched. Emphasis is put on the study of error estimates, as well as on the delicate point-wise behavior of sampling sums at discontinuity points of the signal to be reconstructed.",
"We address the problem of reconstructing a multiband signal from its sub-Nyquist pointwise samples, when the band locations are unknown. Our approach assumes an existing multi-coset sampling. To date, recovery methods for this sampling strategy ensure perfect reconstruction either when the band locations are known, or under strict restrictions on the possible spectral supports. In this paper, only the number of bands and their widths are assumed without any other limitations on the support. We describe how to choose the parameters of the multi-coset sampling so that a unique multiband signal matches the given samples. To recover the signal, the continuous reconstruction is replaced by a single finite-dimensional problem without the need for discretization. The resulting problem is studied within the framework of compressed sensing, and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate, and provides a first systematic study of compressed sensing in a truly analog setting. Numerical experiments are presented demonstrating blind sampling and reconstruction with minimal sampling rate.",
"We study the problem of optimal sub-Nyquist sampling for perfect reconstruction of multiband signals. The signals are assumed to have a known spectral support ℱ that does not tile under translation. Such signals admit perfect reconstruction from periodic nonuniform sampling at rates approaching Landau's (1967) lower bound equal to the measure of ℱ. For signals with sparse ℱ, this rate can be much smaller than the Nyquist rate. Unfortunately the reduced sampling rates afforded by this scheme can be accompanied by increased error sensitivity. In a previous study, we derived bounds on the error due to mismodeling and sample additive noise. Adopting these bounds as performance measures, we consider the problems of optimizing the reconstruction sections of the system, choosing the optimal base sampling rate, and designing the nonuniform sampling pattern. We find that optimizing these parameters can improve system performance significantly. Furthermore, uniform sampling is optimal for signals with ℱ that tiles under translation. For signals with nontiling ℱ, which are not amenable to efficient uniform sampling, the results reveal increased error sensitivities with sub-Nyquist sampling. However, these can be controlled by optimal design, demonstrating the potential for practical multifold reductions in sampling rate.",
"",
"We consider the problem of the reconstruction of a continuous-time function f(x) ∈ H from the samples of the responses of m linear shift-invariant systems sampled at 1/m the reconstruction rate. We extend Papoulis' generalized sampling theory in two important respects. First, our class of admissible input signals (typ. H = L₂) is considerably larger than the subspace of band-limited functions. Second, we use a more general specification of the reconstruction subspace V(ψ), so that the output of the system can take the form of a band-limited function, a spline, or a wavelet expansion. Since we have enlarged the class of admissible input functions, we have to give up Shannon and Papoulis' principle of an exact reconstruction. Instead, we seek an approximation f ∈ V(ψ) that is consistent in the sense that it produces exactly the same measurements as the input of the system. This leads to a generalization of Papoulis' sampling theorem and a practical reconstruction algorithm that takes the form of a multivariate filter. In particular, we show that the corresponding system acts as a projector from H onto V(ψ). We then propose two complementary polyphase and modulation domain interpretations of our solution. The polyphase representation leads to a simple understanding of our reconstruction algorithm in terms of a perfect reconstruction filter bank. The modulation analysis, on the other hand, is useful in providing the connection with Papoulis' earlier results for the band-limited case. Finally, we illustrate the general applicability of our theory by presenting new examples of interlaced and derivative sampling using splines.",
"",
"It is known that a continuous-time signal x(t) with Fourier transform X(ν) band-limited to |ν| < Θ/2 can be reconstructed from its samples x(T₀n) with T₀ = 2π/Θ. In the case that X(ν) consists of two bands and is band-limited to ν₀ < |ν| < ν₀ + Θ/2, successful reconstruction of x(t) from x(T₀n) requires an additional condition on the band positions. When the two bands are not located properly, Kohlenberg showed that we can use two sets of uniform samples, x(2T₀n) and x(2T₀n + d₁), with average sampling period T₀, to recover x(t). Because two sets of uniform samples are employed, this sampling scheme is called Periodically Nonuniform Sampling of second order [PNS(2)]. In this paper, we show that PNS(2) can be generalized and applied to a wider class. Also, Periodically Nonuniform Sampling of Lth-order [PNS(L)] will be developed and used to recover a broader class of band-limited signals. Further generalizations will be made to the two-dimensional case and discrete-time case.",
"Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then low-pass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture, which allows either reconstruction of the analog input, or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, real-time performance for signals with time-varying support and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters.",
"Machine generated contents note: 1. Introduction to compressed sensing Mark A. Davenport, Marco F. Duarte, Yonina C. Eldar and Gitta Kutyniok; 2. Second generation sparse modeling: structured and collaborative signal analysis Alexey Castrodad, Ignacio Ramirez, Guillermo Sapiro, Pablo Sprechmann and Guoshen Yu; 3. Xampling: compressed sensing of analog signals Moshe Mishali and Yonina C. Eldar; 4. Sampling at the rate of innovation: theory and applications Jose Antonio Uriguen, Yonina C. Eldar, Pier Luigi Dragotti and Zvika Ben-Haim; 5. Introduction to the non-asymptotic analysis of random matrices Roman Vershynin; 6. Adaptive sensing for sparse recovery Jarvis Haupt and Robert Nowak; 7. Fundamental thresholds in compressed sensing: a high-dimensional geometry approach Weiyu Xu and Babak Hassibi; 8. Greedy algorithms for compressed sensing Thomas Blumensath, Michael E. Davies and Gabriel Rilling; 9. Graphical models concepts in compressed sensing Andrea Montanari; 10. Finding needles in compressed haystacks Robert Calderbank, Sina Jafarpour and Jeremy Kent; 11. Data separation by sparse representations Gitta Kutyniok; 12. Face recognition by sparse representation Arvind Ganesh, Andrew Wagner, Zihan Zhou, Allen Y. Yang, Yi Ma and John Wright.",
"Sampling theory encompasses all aspects related to the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions is being pushed forward to sophisticated software algorithms, leaving only those delicate finely-tuned tasks for the circuit level. In this paper, we review sampling strategies which target reduction of the ADC rate below Nyquist. Our survey covers classic works from the early 50's of the previous century through recent publications from the past several years. The prime focus is bridging theory and practice, that is to pinpoint the potential of sub-Nyquist strategies to emerge from the math to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, together with a taste of practical aspects, namely how the avant-garde modalities boil down to concrete signal processing systems. Our hope is that this presentation style will attract the interest of both researchers and engineers in the hope of promoting the sub-Nyquist premise into practical applications, and encouraging further research into this exciting new frontier."
]
}
|
1204.6049
|
1992292413
|
This paper investigates the effect of sub-Nyquist sampling upon the capacity of an analog channel. The channel is assumed to be a linear time-invariant Gaussian channel, where perfect channel knowledge is available at both the transmitter and the receiver. We consider a general class of right-invertible time-preserving sampling methods which includes irregular nonuniform sampling, and characterize in closed form the channel capacity achievable by this class of sampling methods, under a sampling rate and power constraint. Our results indicate that the optimal sampling structures extract out the set of frequencies that exhibits the highest signal-to-noise ratio among all spectral sets of measure equal to the sampling rate. This can be attained through filterbank sampling with uniform sampling grid employed at each branch with possibly different rates, or through a single branch of modulation and filtering followed by uniform sampling. These results reveal that for a large class of channels, employing irregular nonuniform sampling sets, which are typically complicated to realize in practice, does not provide capacity gain over uniform sampling sets with appropriate preprocessing. Our findings demonstrate that aliasing or scrambling of spectral components does not provide capacity gain in this scenario, which is in contrast to the benefits obtained from random mixing in spectrum-blind compressive sampling schemes.
|
One interesting fact discovered in @cite_35 is the non-monotonicity of capacity with sampling rate under filter- and modulation-bank sampling, assuming an equal sampling rate per branch for a given number of branches. This indicates that more sophisticated sampling techniques, adaptive to the channel response and the sampling rate, are needed to maximize capacity under sub-Nyquist rate constraints, including both uniform and nonuniform sampling. However, none of the aforementioned work has investigated the question as to which sampling method can best exploit channel structure, thereby maximizing sampled capacity under a given sampling rate constraint. Although several classes of sampling methods were shown in @cite_35 to have closed-form capacity solutions, the capacity limits might not even exist for general sampling methods. This raises the question as to whether there exists a capacity upper bound over a general class of sub-Nyquist sampling systems beyond the classes we discussed in @cite_35 and, if so, when the bound is achievable. That is the question we investigate herein.
|
{
"cite_N": [
"@cite_35"
],
"mid": [
"2075866458"
],
"abstract": [
"We explore two fundamental questions at the intersection of sampling theory and information theory: how channel capacity is affected by sampling below the channel's Nyquist rate, and what sub-Nyquist sampling strategy should be employed to maximize capacity. In particular, we derive the capacity of sampled analog channels for three prevalent sampling strategies: sampling with filtering, sampling with filter banks, and sampling with modulation and filter banks. These sampling mechanisms subsume most nonuniform sampling techniques applied in practice. Our analyses illuminate interesting connections between undersampled channels and multiple-input multiple-output channels. The optimal sampling structures are shown to extract out the frequencies with the highest SNR from each aliased frequency set, while suppressing aliasing and out-of-band noise. We also highlight connections between undersampled channel capacity and minimum mean-squared error (MSE) estimation from sampled data. In particular, we show that the filters maximizing capacity and the ones minimizing MSE are equivalent under both filtering and filter-bank sampling strategies. These results demonstrate the effect upon channel capacity of sub-Nyquist sampling techniques, and characterize the tradeoff between information rate and sampling rate."
]
}
|
1204.5059
|
2951304677
|
We consider the problem of distributed computation of a target function over a multiple-access channel. If the target and channel functions are matched (i.e., compute the same function), significant performance gains can be obtained by jointly designing the computation and communication tasks. However, in most situations there is mismatch between these two functions. In this work, we analyze the impact of this mismatch on the performance gains achievable with joint computation and communication designs over separation-based designs. We show that for most pairs of target and channel functions there is no such gain, and separation of computation and communication is optimal.
|
In most of these works, communication channels are represented as orthogonal point-to-point links. When the channel itself introduces signal interaction, as is the case for a MAC, there can be a benefit from jointly handling the communication and computation tasks as illustrated in @cite_9 . Function computation over MACs has been studied in @cite_12 @cite_8 @cite_17 @cite_7 and references therein.
|
{
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_12",
"@cite_17"
],
"mid": [
"2129119419",
"2152496949",
"2144970857",
"1975099099",
"2140227342"
],
"abstract": [
"The following network computing problem is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the “computing capacity”. The network coding problem for a single-receiver network is a special case of the network computing problem in which all of the source messages must be reproduced at the receiver. For network coding with a single receiver, routing is known to achieve the capacity by achieving the network min-cut upper bound. We extend the definition of min-cut to the network computing problem and show that the min-cut is still an upper bound on the maximum achievable rate and is tight for computing (using coding) any target function in multi-edge tree networks. It is also tight for computing linear target functions in any network. We also study the bound's tightness for different classes of target functions. In particular, we give a lower bound on the computing capacity in terms of the Steiner tree packing number and a different bound for symmetric functions. We also show that for certain networks and target functions, the computing capacity can be less than an arbitrarily small fraction of the min-cut bound.",
"Traditionally, interference is considered harmful. Wireless networks strive to avoid scheduling multiple transmissions at the same time in order to prevent interference. This paper adopts the opposite approach; it encourages strategically picked senders to interfere. Instead of forwarding packets, routers forward the interfering signals. The destination leverages network-level information to cancel the interference and recover the signal destined to it. The result is analog network coding because it mixes signals not bits. So, what if wireless routers forward signals instead of packets? Theoretically, such an approach doubles the capacity of the canonical 2-way relay network. Surprisingly, it is also practical. We implement our design using software radios and show that it achieves significantly higher throughput than both traditional wireless routing and prior work on wireless network coding.",
"Let (U_i, V_i)_{i=1}^n be a source of independent identically distributed (i.i.d.) discrete random variables with joint probability mass function p(u,v) and common part w = f(u) = g(v) in the sense of Witsenhausen, Gacs, and Korner. It is shown that such a source can be sent with arbitrarily small probability of error over a multiple access channel (MAC) (X_1 × X_2, Y, p(y|x_1,x_2)), with allowed codes x_1(u), x_2(v), if there exist probability mass functions p(s), p(x_1|s,u), p(x_2|s,v) such that H(U|V) < I(X_1; Y | X_2, S, V), H(V|U) < I(X_2; Y | X_1, S, U), H(U,V|W) < I(X_1, X_2; Y | S), H(U,V) < I(X_1, X_2; Y), where p(s,u,v,x_1,x_2,y) = p(s)p(u,v)p(x_1|u,s)p(x_2|v,s)p(y|x_1,x_2). This region includes the multiple access channel region and the Slepian-Wolf data compression region as special cases.",
"A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). The goal of this paper is to show how the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to \"straightforward\" network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. PNC can yield higher capacity than straight-forward network coding when applied to wireless networks. We believe this is a first paper that ventures into EM-wave-based network coding at the physical layer and demonstrates its potential for boosting network capacity. PNC opens up a whole new research area because of its implications and new design requirements for the physical, MAC, and network layers of ad hoc wireless stations. The resolution of the many outstanding but interesting issues in PNC may lead to a revolutionary new paradigm for wireless ad hoc networking.",
"The problem of reliably reconstructing a function of sources over a multiple-access channel (MAC) is considered. It is shown that there is no source-channel separation theorem even when the individual sources are independent. Joint source-channel strategies are developed that are optimal when the structure of the channel probability transition matrix and the function are appropriately matched. Even when the channel and function are mismatched, these computation codes often outperform separation-based strategies. Achievable distortions are given for the distributed refinement of the sum of Gaussian sources over a Gaussian multiple-access channel with a joint source-channel lattice code. Finally, computation codes are used to determine the multicast capacity of finite-field multiple-access networks, thus linking them to network coding."
]
}
|
1204.5059
|
2951304677
|
We consider the problem of distributed computation of a target function over a multiple-access channel. If the target and channel functions are matched (i.e., compute the same function), significant performance gains can be obtained by jointly designing the computation and communication tasks. However, in most situations there is mismatch between these two functions. In this work, we analyze the impact of this mismatch on the performance gains achievable with joint computation and communication designs over separation-based designs. We show that for most pairs of target and channel functions there is no such gain, and separation of computation and communication is optimal.
|
There is some work touching on the aspect of structural mismatch between the target and the channel functions. In @cite_13 , an example was given in which the mismatch between a linear target function with integer coefficients and a linear channel function with real coefficients can significantly reduce efficiency. In @cite_7 , it was conjectured that, for computation of finite-field addition over a real-addition channel, there could be a gap between the cut-set bound and the computation rate. In @cite_18 , mismatched computation when the network performs linear finite-field operations was studied. To the best of our knowledge, a systematic study of channel and computation mismatch is initiated in this work.
|
{
"cite_N": [
"@cite_18",
"@cite_13",
"@cite_7"
],
"mid": [
"1963814785",
"2952459054",
"2129119419"
],
"abstract": [
"We consider multiple non-colocated sources communicating over a network to a common sink. We assume that the network operation is fixed, and its end result is to convey a fixed linear deterministic transformation of the source data to the sink. This linear transformation is known both at the sources and at the sink. We are interested in the problem of function computation over such networks. We design communication protocols that can perform computation without modifying the network operation, by appropriately selecting the codebook that the sources employ to map their measurements to the data they send over the network.",
"We analyze the asymptotic behavior of compute-and-forward relay networks in the regime of high signal-to-noise ratios. We consider a section of such a network consisting of K transmitters and K relays. The aim of the relays is to reliably decode an invertible function of the messages sent by the transmitters. An upper bound on the capacity of this system can be obtained by allowing full cooperation among the transmitters and among the relays, transforming the network into a K × K multiple-input multiple-output (MIMO) channel. The number of degrees of freedom of compute-and-forward is hence at most K. In this paper, we analyze the degrees of freedom achieved by the lattice coding implementation of compute-and-forward proposed recently by Nazer and Gastpar. We show that this lattice implementation achieves at most 2/(1 + 1/K) ≤ 2 degrees of freedom, thus exhibiting a very different asymptotic behavior than the MIMO upper bound. This raises the question if this gap of the lattice implementation to the MIMO upper bound is inherent to compute-and-forward in general. We answer this question in the negative by proposing a novel compute-and-forward implementation achieving K degrees of freedom.",
"The following network computing problem is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the “computing capacity”. The network coding problem for a single-receiver network is a special case of the network computing problem in which all of the source messages must be reproduced at the receiver. For network coding with a single receiver, routing is known to achieve the capacity by achieving the network min-cut upper bound. We extend the definition of min-cut to the network computing problem and show that the min-cut is still an upper bound on the maximum achievable rate and is tight for computing (using coding) any target function in multi-edge tree networks. It is also tight for computing linear target functions in any network. We also study the bound's tightness for different classes of target functions. In particular, we give a lower bound on the computing capacity in terms of the Steiner tree packing number and a different bound for symmetric functions. We also show that for certain networks and target functions, the computing capacity can be less than an arbitrarily small fraction of the min-cut bound."
]
}
|
1204.5229
|
2952522284
|
The selection problem, where one wishes to locate the @math smallest element in an unsorted array of size @math , is one of the basic problems studied in computer science. The main focus of this work is designing algorithms for solving the selection problem in the presence of memory faults. These can happen as the result of cosmic rays, alpha particles, or hardware failures. Specifically, the computational model assumed here is a faulty variant of the RAM model (abbreviated as FRAM), which was introduced by Finocchi and Italiano. In this model, the content of memory cells might get corrupted adversarially during the execution, and the algorithm is given an upper bound @math on the number of corruptions that may occur. The main contribution of this work is a deterministic resilient selection algorithm with optimal O(n) worst-case running time. Interestingly, the running time does not depend on the number of faults, and the algorithm does not need to know @math . The aforementioned resilient selection algorithm can be used to improve the complexity bounds for resilient @math -d trees developed by Gieseke, Moruz and Vahrenhold. Specifically, the time complexity for constructing a @math -d tree is improved from @math to @math . Besides the deterministic algorithm, a randomized resilient selection algorithm is developed, which is simpler than the deterministic one, and has @math expected time complexity and O(1) space complexity (i.e., is in-place). This algorithm is used to develop the first resilient sorting algorithm that is in-place and achieves optimal @math expected running time.
|
In this model, the algorithm can access the data only through a noisy oracle. The algorithm queries the oracle and can possibly get a faulty answer (i.e., a lie). An upper bound on the number of these lies or a probability of a lie is assumed. See, e.g., @cite_10 and @cite_17 . The data itself cannot get corrupted, therefore, in this model, query replication strategies can be exploited, in contrast to the FRAM model.
|
{
"cite_N": [
"@cite_10",
"@cite_17"
],
"mid": [
"2038435918",
"2069363808"
],
"abstract": [
"This paper studies the depth of noisy decision trees in which each node gives the wrong answer with some constant probability. In the noisy Boolean decision tree model, tight bounds are given on the number of queries to input variables required to compute threshold functions, the parity function and symmetric functions. In the noisy comparison tree model, tight bounds are given on the number of noisy comparisons for searching, sorting, selection and merging. The paper also studies parallel selection and sorting with noisy comparisons, giving tight bounds for several problems.",
"Motivated by the problem of searching in the presence of adversarial errors, we consider a version of the game “Twenty Questions” played on the set {0,…,N-1}, where the player giving answers may lie in her answers. The questioner is allowed Q questions and the responder may lie in up to [rQ] of the answers, for some fixed and previously known fraction r. Under three different models of this game and for two different question classes, we give precise conditions (i.e., tight bounds on r and, in most cases, optimal bounds on Q) under which the questioner has a winning strategy in the game."
]
}
|
1204.5229
|
2952522284
|
The selection problem, where one wishes to locate the @math smallest element in an unsorted array of size @math , is one of the basic problems studied in computer science. The main focus of this work is designing algorithms for solving the selection problem in the presence of memory faults. These can happen as the result of cosmic rays, alpha particles, or hardware failures. Specifically, the computational model assumed here is a faulty variant of the RAM model (abbreviated as FRAM), which was introduced by Finocchi and Italiano. In this model, the content of memory cells might get corrupted adversarially during the execution, and the algorithm is given an upper bound @math on the number of corruptions that may occur. The main contribution of this work is a deterministic resilient selection algorithm with optimal O(n) worst-case running time. Interestingly, the running time does not depend on the number of faults, and the algorithm does not need to know @math . The aforementioned resilient selection algorithm can be used to improve the complexity bounds for resilient @math -d trees developed by Gieseke, Moruz and Vahrenhold. Specifically, the time complexity for constructing a @math -d tree is improved from @math to @math . Besides the deterministic algorithm, a randomized resilient selection algorithm is developed, which is simpler than the deterministic one, and has @math expected time complexity and O(1) space complexity (i.e., is in-place). This algorithm is used to develop the first resilient sorting algorithm that is in-place and achieves optimal @math expected running time.
|
Several other noisy computational models have been investigated. Sherstov @cite_28 showed an optimal (in terms of degree) approximation polynomial that is robust to noise. Gacs and Gal @cite_20 proved a lower bound on the number of gates in a noise-resistant circuit. These works, as well as others, have more of a computational-complexity-theory flavour than the FRAM model, and treat computational models different from the FRAM model.
|
{
"cite_N": [
"@cite_28",
"@cite_20"
],
"mid": [
"2059242764",
"2107895738"
],
"abstract": [
"A basic question in any computational model is how to reliably compute a given function when the inputs or intermediate computations are subject to noise at a constant rate. Ideally, one would like to use at most a constant factor more resources compared to the noise-free case. This question has been studied for decision trees, circuits, automata, data structures, broadcast networks, communication protocols, and other models. The noisy computation problem for real polynomials was posed in 2003. We give a complete solution to this problem. For any polynomial p: {0,1}^n → [-1,1], we construct a polynomial p_robust: R^n → R of degree O(deg p + log(1/ε)) that ε-approximates p and is robust to noise in the inputs: |p(x) − p_robust(x + δ)| < ε.",
"It is proved that the reliable computation of any Boolean function with sensitivity s requires Ω(s log s) gates if the gates of the circuit fail independently with a fixed positive probability. The Ω(s log s) bound holds even if s is the block sensitivity instead of the sensitivity of the Boolean function. Some open problems are mentioned."
]
}
|
1204.4253
|
1967221240
|
We consider molecular communication networks consisting of transmitters and receivers distributed in a fluidic medium. In such networks, a transmitter sends one or more signaling molecules, which are diffused over the medium, to the receiver to realize the communication. In order to be able to engineer synthetic molecular communication networks, mathematical models for these networks are required. This paper proposes a new stochastic model for molecular communication networks called reaction-diffusion master equation with exogenous input (RDMEX). The key idea behind RDMEX is to model the transmitters as time series of signaling molecule counts, while diffusion in the medium and chemical reactions at the receivers are modeled as Markov processes using master equation. An advantage of RDMEX is that it can readily be used to model molecular communication networks with multiple transmitters and receivers. For the case where the reaction kinetics at the receivers is linear, we show how RDMEX can be used to determine the mean and covariance of the receiver output signals, and derive closed-form expressions for the mean receiver output signal of the RDMEX model. These closed-form expressions reveal that the output signal of a receiver can be affected by the presence of other receivers. Numerical examples are provided to demonstrate the properties of the model.
|
Molecular communication networks can be divided into two categories, according to whether they are natural or synthetic. Natural molecular communication networks are prevalent in living organisms. Their synthetic counterparts, though still rare, do exist. For example, @cite_29 presents a system with multiple genetically engineered cells that use cell signalling to coordinate their behaviour. The modelling of natural and synthetic molecular communication networks is studied in different disciplines. The former is mainly studied in biophysics and mathematical physiology, while the latter in synthetic biology. There is also a recent interest in the engineering community to study molecular communication networks from a communication theory point of view @cite_4 @cite_27 @cite_31 . This gives rise to a new research area called nano communication networks @cite_26 .
|
{
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_29",
"@cite_27",
"@cite_31"
],
"mid": [
"2116194016",
"2139762024",
"1977624796",
"2059089080",
"2077258873"
],
"abstract": [
"Nanotechnologies promise new solutions for several applications in biomedical, industrial and military fields. At nano-scale, a nano-machine can be considered as the most basic functional unit. Nano-machines are tiny components consisting of an arranged set of molecules, which are able to perform very simple tasks. Nanonetworks. i.e., the interconnection of nano-machines are expected to expand the capabilities of single nano-machines by allowing them to cooperate and share information. Traditional communication technologies are not suitable for nanonetworks mainly due to the size and power consumption of transceivers, receivers and other components. The use of molecules, instead of electromagnetic or acoustic waves, to encode and transmit the information represents a new communication paradigm that demands novel solutions such as molecular transceivers, channel models or protocols for nanonetworks. In this paper, first the state-of-the-art in nano-machines, including architectural aspects, expected features of future nano-machines, and current developments are presented for a better understanding of nanonetwork scenarios. Moreover, nanonetworks features and components are explained and compared with traditional communication networks. Also some interesting and important applications for nanonetworks are highlighted to motivate the communication needs between the nano-machines. Furthermore, nanonetworks for short-range communication based on calcium signaling and molecular motors as well as for long-range communication based on pheromones are explained in detail. Finally, open research challenges, such as the development of network components, molecular communication theory, and the development of new architectures and protocols, are presented which need to be solved in order to pave the way for the development and deployment of nanonetworks within the next couple of decades.",
"Molecular communication is a promising paradigm for nanoscale networks. The end-to-end (including the channel) models developed for classical wireless communication networks need to undergo a profound revision so that they can be applied for nanonetworks. Consequently, there is a need to develop new end-to-end (including the channel) models which can give new insights into the design of these nanoscale networks. The objective of this paper is to introduce a new physical end-to-end (including the channel) model for molecular communication. The new model is investigated by means of three modules, i.e., the transmitter, the signal propagation and the receiver. Each module is related to a specific process involving particle exchanges, namely, particle emission, particle diffusion and particle reception. The particle emission process involves the increase or decrease of the particle concentration rate in the environment according to a modulating input signal. The particle diffusion provides the propagation of particles from the transmitter to the receiver by means of the physics laws underlying particle diffusion in the space. The particle reception process is identified by the sensing of the particle concentration value at the receiver location. Numerical results are provided for three modules, as well as for the overall end-to-end model, in terms of normalized gain and delay as functions of the input frequency and of the transmission range.",
"Pattern formation is a hallmark of coordinated cell behaviour in both single and multicellular organisms. It typically involves cell–cell communication and intracellular signal processing. Here we show a synthetic multicellular system in which genetically engineered ‘receiver’ cells are programmed to form ring-like patterns of differentiation based on chemical gradients of an acyl-homoserine lactone (AHL) signal that is synthesized by ‘sender’ cells. In receiver cells, ‘band-detect’ gene networks respond to user-defined ranges of AHL concentrations. By fusing different fluorescent proteins as outputs of network variants, an initially undifferentiated ‘lawn’ of receivers is engineered to form a bullseye pattern around a sender colony. Other patterns, such as ellipses and clovers, are achieved by placing senders in different configurations. Experimental and theoretical analyses reveal which kinetic parameters most significantly affect ring development over time. Construction and study of such synthetic multicellular systems can improve our quantitative understanding of naturally occurring developmental processes and may foster applications in tissue engineering, biomaterial fabrication and biosensing.",
"Abstract Molecular communication enables nanomachines to exchange information with each other by emitting molecules to their surrounding environment. Molecular nanonetworks are envisioned as a number of nanomachines that are deployed in an environment to share specific molecular information such as odor, flavor, or any chemical state. In this paper, using the stochastic model of molecular reactions in biochemical systems, a realistic channel model is first introduced for molecular communication. Then, based on the realistic channel model, we introduce a deterministic capacity expression for point-to-point, broadcast, and multiple-access molecular channels. We also investigate information flow capacity in a molecular nanonetwork for the realization of efficient communication and networking techniques for frontier nanonetwork applications. The results reveal that molecular point-to-point, broadcast, and multiple-access channels are feasible with a satisfactorily high molecular communication rate, which allows molecular information flow in nanonetworks. Furthermore, the derived molecular channel model with input-dependent noise term also reveals that unlike a traditional Gaussian communication channel, achievable capacity is affected by both lower and upper bounds of the channel input in molecular communication channels.",
"In this paper, we consider molecular communication, with information conveyed in the time of release of molecules. These molecules propagate to the transmitter through a fluid medium, propelled by a positive drift velocity and Brownian motion. The main contribution of this paper is the development of a theoretical foundation for such a communication system; specifically, the additive inverse Gaussian noise (AIGN) channel model. In such a channel, the information is corrupted by noise that follows an IG distribution. We show that such a channel model is appropriate for molecular communication in fluid media. Taking advantage of the available literature on the IG distribution, upper and lower bounds on channel capacity are developed, and a maximum likelihood receiver is derived. Results are presented which suggest that this channel does not have a single quality measure analogous to signal-to-noise ratio in the additive white Gaussian noise channel. It is also shown that the use of multiple molecules leads to reduced error rate in a manner akin to diversity order in wireless communications. Finally, some open problems are discussed that arise from the IG channel model."
]
}
|
1204.4253
|
1967221240
|
We consider molecular communication networks consisting of transmitters and receivers distributed in a fluidic medium. In such networks, a transmitter sends one or more signaling molecules, which are diffused over the medium, to the receiver to realize the communication. In order to be able to engineer synthetic molecular communication networks, mathematical models for these networks are required. This paper proposes a new stochastic model for molecular communication networks called reaction-diffusion master equation with exogenous input (RDMEX). The key idea behind RDMEX is to model the transmitters as time series of signaling molecule counts, while diffusion in the medium and chemical reactions at the receivers are modeled as Markov processes using master equation. An advantage of RDMEX is that it can readily be used to model molecular communication networks with multiple transmitters and receivers. For the case where the reaction kinetics at the receivers is linear, we show how RDMEX can be used to determine the mean and covariance of the receiver output signals, and derive closed-form expressions for the mean receiver output signal of the RDMEX model. These closed-form expressions reveal that the output signal of a receiver can be affected by the presence of other receivers. Numerical examples are provided to demonstrate the properties of the model.
|
Molecular dynamics is commonly used in the simulation of molecular communication networks. Many examples of simulators exist, especially for natural molecular communication networks; see @cite_6 for a recent overview. For synthetic networks, a recent example is @cite_16 . By analysing the molecular dynamics of transmitters and receivers, @cite_3 characterises the noise in transmitters and receivers as, respectively, sampling noise and counting noise.
|
{
"cite_N": [
"@cite_16",
"@cite_3",
"@cite_6"
],
"mid": [
"2017416936",
"2171125625",
"1528643580"
],
"abstract": [
"Abstract A number of nanomachines that cooperatively communicate and share molecular information in order to achieve specific tasks is envisioned as a nanonetwork. Due to the size and capabilities of nanomachines, the traditional communication paradigms cannot be used for nanonetworks in which network nodes may be composed of just several atoms or molecules and scale on the orders of few nanometers. Instead, molecular communication is a promising solution approach for the nanoscale communication paradigm. However, molecular communication must be thoroughly investigated to realize nanoscale communication and nanonetworks for many envisioned applications such as nanoscale body area networks, and nanoscale molecular computers. In this paper, a simulation framework (NanoNS) for molecular nanonetworks is presented. The objective of the framework is to provide a simulation tool in order to create a better understanding of nanonetworks and facilitate the development of new communication techniques and the validation of theoretical results. The NanoNS framework is built on top of core components of a widely used network simulator (ns-2). It incorporates the simulation modules for various nanoscale communication paradigms based on a diffusive molecular communication channel. The details of NanoNS are discussed and some functional scenarios are defined to validate NanoNS. In addition to this, the numerical analyses of these functional scenarios and their experimental results are presented. The validation of NanoNS is shown via comparative evaluation of these experimental and numerical results.",
"Molecular communication (MC) is a promising bio-inspired paradigm, in which molecules are used to encode, transmit and receive information at the nanoscale. Very limited research has addressed the problem of modeling and analyzing the MC in nanonetworks. One of the main challenges in MC is the proper study and characterization of the noise sources. The objective of this paper is the analysis of the noise sources in diffusion-based MC using tools from signal processing, statistics and communication engineering. The reference diffusion-based MC system for this analysis is the physical end-to-end model introduced in a previous work by the same authors. The particle sampling noise and the particle counting noise are analyzed as the most relevant diffusion-based noise sources. The analysis of each noise source results in two types of models, namely, the physical model and the stochastic model. The physical model mathematically expresses the processes underlying the physics of the noise source. The stochastic model captures the noise source behavior through statistical parameters. The physical model results in block schemes, while the stochastic model results in the characterization of the noises using random processes. Simulations are conducted to evaluate the capability of the stochastic model to express the diffusion-based noise sources represented by the physical model.",
"One of the fundamental motivations underlying computational cell biology is to gain insight into the complicated dynamical processes taking place, for example, on the plasma membrane or in the cytosol of a cell. These processes are often so complicated that purely temporal mathematical models cannot adequately capture the complex chemical kinetics and transport processes of, for example, proteins or vesicles. On the other hand, spatial models such as Monte Carlo approaches can have very large computational overheads. This chapter gives an overview of the state of the art in the development of stochastic simulation techniques for the spatial modelling of dynamic processes in a living cell."
]
}
|
1204.4253
|
1967221240
|
We consider molecular communication networks consisting of transmitters and receivers distributed in a fluidic medium. In such networks, a transmitter sends one or more signaling molecules, which are diffused over the medium, to the receiver to realize the communication. In order to be able to engineer synthetic molecular communication networks, mathematical models for these networks are required. This paper proposes a new stochastic model for molecular communication networks called reaction-diffusion master equation with exogenous input (RDMEX). The key idea behind RDMEX is to model the transmitters as time series of signaling molecule counts, while diffusion in the medium and chemical reactions at the receivers are modeled as Markov processes using master equation. An advantage of RDMEX is that it can readily be used to model molecular communication networks with multiple transmitters and receivers. For the case where the reaction kinetics at the receivers is linear, we show how RDMEX can be used to determine the mean and covariance of the receiver output signals, and derive closed-form expressions for the mean receiver output signal of the RDMEX model. These closed-form expressions reveal that the output signal of a receiver can be affected by the presence of other receivers. Numerical examples are provided to demonstrate the properties of the model.
|
There are ample examples of using PDEs --- in particular the diffusion PDE, the telegraph equation and RDPDEs --- to model molecular communication. For natural networks, @cite_1 @cite_8 use RDPDEs to study the noise in receptor binding in chemotaxis, and @cite_24 uses an RDPDE to study signalling cascades. However, these papers do not consider the transmitters. For synthetic networks, telegraph or diffusion PDEs (or their kernels) have been used to characterise the diffusion of signalling molecules in @cite_20 @cite_0 @cite_9 @cite_27 @cite_31 and others. However, these papers do not consider the coupling effect between diffusion and receiver reaction kinetics. In our earlier work @cite_19 , we use an RDPDE in the form of , as a deterministic model for molecular communication networks. The RDPDE in @cite_19 is solved numerically and no analytic solution is provided. In this paper, we derive an RDPDE model for molecular communication and provide an interpretation of the model as the mean receiver output of molecular communication networks. In addition, we present an analytical solution to this RDPDE and show that it can be used to accurately predict the mean receiver output in molecular communication networks.
|
{
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_24",
"@cite_0",
"@cite_27",
"@cite_19",
"@cite_31",
"@cite_20"
],
"mid": [
"2050286137",
"2006456190",
"2148043530",
"2093617915",
"2093287389",
"2059089080",
"2044579330",
"2077258873",
"2125863473"
],
"abstract": [
"Chemotactic cells of eukaryotic organisms are able to accurately sense shallow chemical concentration gradients using cell-surface receptors. This sensing ability is remarkable as cells must be able to spatially resolve small fractional differences in the numbers of particles randomly arriving at cell-surface receptors by diffusion. An additional challenge and source of uncertainty is that particles, once bound and released, may rebind the same or a different receptor, which adds to noise without providing any new information about the environment. We recently derived the fundamental physical limits of gradient sensing using a simple spherical-cell model, but not including explicit particle-receptor kinetics. Here, we use a method based on the fluctuation-dissipation theorem (FDT) to calculate the accuracy of gradient sensing by realistic receptors. We derive analytical results for two receptors, as well as two coaxial rings of receptors, e.g. one at each cell pole. For realistic receptors, we find that particle rebinding lowers the accuracy of gradient sensing, in line with our previous results.",
"Abstract Simulation-based and information theoretic models for a diffusion-based short-range molecular communication channel between a nano-transmitter and a nano-receiver are constructed to analyze information rates between channel inputs and outputs when the inputs are independent and identically distributed (i.i.d.). The total number of molecules available for information transfer is assumed to be limited. It is also assumed that there is a maximum tolerable delay bound for the overall information transfer. Information rates are computed via simulation-based methods for different time slot lengths and transmitter–receiver distances. The rates obtained from simulations are then compared to those computed using information theoretic channel models which provide upper bounds for information rates. The results indicate that a 4-input–2-output discrete channel model provides a very good approximation to the nano-communication channel, particularly when the time slot lengths are large and the distance between the transmitter and the receiver is small. It is shown through an extensive set of simulations that the information theoretic channel capacity with i.i.d. inputs can be achieved when an encoder adjusts the relative frequency of binary zeros to be higher (between 50 and 70 for the scenarios considered) than binary ones, where a ‘zero’ corresponds to not releasing and a ‘one’ corresponds to releasing a molecule from the transmitter.",
"Many crucial biological processes operate with surprisingly small numbers of molecules, and there is renewed interest in analyzing the impact of noise associated with these small numbers. Twenty-five years ago, Berg and Purcell showed that bacterial chemotaxis, where a single-celled organism must respond to small changes in concentration of chemicals outside the cell, is limited directly by molecule counting noise and that aspects of the bacteria's behavioral and computational strategies must be chosen to minimize the effects of this noise. Here, we revisit and generalize their arguments to estimate the physical limits to signaling processes within the cell and argue that recent experiments are consistent with performance approaching these limits.",
"The temporal and stationary behavior of protein modification cascades has been extensively studied, yet little is known about the spatial aspects of signal propagation. We have previously shown that the spatial separation of opposing enzymes, such as a kinase and a phosphatase, creates signaling activity gradients. Here we show under what conditions signals stall in the space or robustly propagate through spatially distributed signaling cascades. Robust signal propagation results in activity gradients with long plateaus, which abruptly decay at successive spatial locations. We derive an approximate analytical solution that relates the maximal amplitude and propagation length of each activation profile with the cascade level, protein diffusivity, and the ratio of the opposing enzyme activities. The control of the spatial signal propagation appears to be very different from the control of transient temporal responses for spatially homogenous cascades. For spatially distributed cascades where activating and deactivating enzymes operate far from saturation, the ratio of the opposing enzyme activities is shown to be a key parameter controlling signal propagation. The signaling gradients characteristic for robust signal propagation exemplify a pattern formation mechanism that generates precise spatial guidance for multiple cellular processes and conveys information about the cell size to the nucleus.",
"Abstract In this study, nanoscale communication networks have been investigated in the context of binary concentration-encoded unicast molecular communication suitable for numerous emerging applications, for example in healthcare and nanobiomedicine. The main focus of the paper has been given to the spatiotemporal distribution of signal strength and modulation schemes suitable for short-range, medium-range, and long-range molecular communication between two communicating nanomachines in a nanonetwork. This paper has principally focused on bio-inspired transmission techniques for concentration-encoded molecular communication systems. Spatiotemporal distributions of a carrier signal in the form of the concentration of diffused molecules over the molecular propagation channel and diffusion-dependent communication ranges have been explained for various scenarios. Finally, the performance analysis of modulation schemes has been evaluated in the form of the steady-state loss of amplitude of the received concentration signals and its dependence on the transmitter–receiver distance.",
"Abstract Molecular communication enables nanomachines to exchange information with each other by emitting molecules to their surrounding environment. Molecular nanonetworks are envisioned as a number of nanomachines that are deployed in an environment to share specific molecular information such as odor, flavor, or any chemical state. In this paper, using the stochastic model of molecular reactions in biochemical systems, a realistic channel model is first introduced for molecular communication. Then, based on the realistic channel model, we introduce a deterministic capacity expression for point-to-point, broadcast, and multiple-access molecular channels. We also investigate information flow capacity in a molecular nanonetwork for the realization of efficient communication and networking techniques for frontier nanonetwork applications. The results reveal that molecular point-to-point, broadcast, and multiple-access channels are feasible with a satisfactorily high molecular communication rate, which allows molecular information flow in nanonetworks. Furthermore, the derived molecular channel model with input-dependent noise term also reveals that unlike a traditional Gaussian communication channel, achievable capacity is affected by both lower and upper bounds of the channel input in molecular communication channels.",
"Abstract A key research question in the design of molecular nano-communication networks is how the information is to be encoded and decoded. One particular encoding method is to use different frequencies to represent different symbols. This paper will investigate the decoding of such frequency coded signals. To the best of our knowledge, the current literature on molecular communication has only used simple ligand–receptor models as decoders and the decoding of frequency coded signals has not been studied. There are two key issues in the design of such decoders. First, the decoder must exhibit frequency selective behaviour which means that encoder symbol of a specific frequency causes a bigger response at the decoder than symbols of other frequencies. Second, the decoder must take into account inter-symbol interference which earlier studies on concentration coding have pointed out to be a major performance issue. In order to study the design of decoder, we propose a system of reaction–diffusion and reaction kinetic equations to model the system of encoder, channel and decoder. We use this model to show that enzymatic circuit of a particular inter-connection has frequency selective properties. We also explore how decoder can be designed to avoid inter-symbol interference.",
"In this paper, we consider molecular communication, with information conveyed in the time of release of molecules. These molecules propagate to the transmitter through a fluid medium, propelled by a positive drift velocity and Brownian motion. The main contribution of this paper is the development of a theoretical foundation for such a communication system; specifically, the additive inverse Gaussian noise (AIGN) channel model. In such a channel, the information is corrupted by noise that follows an IG distribution. We show that such a channel model is appropriate for molecular communication in fluid media. Taking advantage of the available literature on the IG distribution, upper and lower bounds on channel capacity are developed, and a maximum likelihood receiver is derived. Results are presented which suggest that this channel does not have a single quality measure analogous to signal-to-noise ratio in the additive white Gaussian noise channel. It is also shown that the use of multiple molecules leads to reduced error rate in a manner akin to diversity order in wireless communications. Finally, some open problems are discussed that arise from the IG channel model.",
"Molecular communication (MC) will enable the exchange of information among nanoscale devices. In this novel bio-inspired communication paradigm, molecules are employed to encode, transmit and receive information. In the most general case, these molecules are propagated in the medium by means of free diffusion. An information theoretical analysis of diffusion-based MC is required to better understand the potential of this novel communication mechanism. The study and the modeling of the noise sources is of utmost importance for this analysis. The objective of this paper is to provide a mathematical study of the noise at the reception of the molecular information in a diffusion-based MC system when the ligand-binding reception is employed. The reference diffusion-based MC system for this analysis is the physical end-to-end model introduced in a previous work by the same authors, where the reception process is realized through ligand-binding chemical receptors. The reception noise is modeled in this paper by following two different approaches, namely, through the ligand-receptor kinetics and through the stochastic chemical kinetics. The ligand-receptor kinetics allows to simulate the random perturbations in the chemical processes of the reception, while the stochastic chemical kinetics provides the tools to derive a closed-form solution to the modeling of the reception noise. The ligand-receptor kinetics model is expressed through a block scheme, while the stochastic chemical kinetics results in the characterization of the reception noise using stochastic differential equations. Numerical results are provided to demonstrate that the analytical formulation of the reception noise in terms of stochastic chemical kinetics is compliant with the reception noise behavior resulting from the ligand-receptor kinetics simulations."
]
}
|
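The PDE approaches cited above share a common building block: the free-space diffusion kernel, i.e. the Green's function of the diffusion equation, which gives the mean concentration after an impulsive point release of molecules. The short Python sketch below evaluates it; the diffusion coefficient and molecule count are illustrative assumptions, not values taken from any of the cited papers.

```python
import math

def diffusion_kernel_3d(r, t, D, n_molecules=1.0):
    """Mean concentration at distance r (m) and time t (s) after an
    impulsive release of n_molecules at the origin, from the free-space
    Green's function of the 3-D diffusion equation:
        c(r, t) = N / (4 pi D t)^(3/2) * exp(-r^2 / (4 D t)).
    D is the diffusion coefficient in m^2/s."""
    if t <= 0.0:
        raise ValueError("t must be positive")
    return (n_molecules / (4.0 * math.pi * D * t) ** 1.5
            * math.exp(-r * r / (4.0 * D * t)))

# Illustrative numbers: 1000 molecules, D typical of a small protein in water.
D = 1e-10  # m^2/s (assumed)
c_near = diffusion_kernel_3d(r=1e-6, t=0.01, D=D, n_molecules=1000.0)
c_far = diffusion_kernel_3d(r=2e-6, t=0.01, D=D, n_molecules=1000.0)
```

The kernel decays monotonically in distance and its peak falls off as t^(-3/2), which is the qualitative behaviour the cited channel models build on.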
1204.4253
|
1967221240
|
We consider molecular communication networks consisting of transmitters and receivers distributed in a fluidic medium. In such networks, a transmitter sends one or more signaling molecules, which are diffused over the medium, to the receiver to realize the communication. In order to be able to engineer synthetic molecular communication networks, mathematical models for these networks are required. This paper proposes a new stochastic model for molecular communication networks called reaction-diffusion master equation with exogenous input (RDMEX). The key idea behind RDMEX is to model the transmitters as time series of signaling molecule counts, while diffusion in the medium and chemical reactions at the receivers are modeled as Markov processes using master equation. An advantage of RDMEX is that it can readily be used to model molecular communication networks with multiple transmitters and receivers. For the case where the reaction kinetics at the receivers is linear, we show how RDMEX can be used to determine the mean and covariance of the receiver output signals, and derive closed-form expressions for the mean receiver output signal of the RDMEX model. These closed-form expressions reveal that the output signal of a receiver can be affected by the presence of other receivers. Numerical examples are provided to demonstrate the properties of the model.
|
For some time, the RDME was considered to be a phenomenological model because it diverges in certain cases @cite_11 . Fortunately, this problem has been resolved in @cite_7 and there is now a firm theoretical basis for the RDME. There are many examples of work that use the RDME to model natural molecular communication networks; see @cite_5 @cite_14 . However, these papers do not consider the transmitters. The use of the RDME in studying synthetic molecular communication networks appears to be novel. To the best of our knowledge, our RDMEX model, which is formed by coupling time sequences of the signalling molecule emission pattern with the RDME, has not been proposed before. The proposed RDMEX model is one of the novel contributions of this paper.
|
{
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_7",
"@cite_11"
],
"mid": [
"1981132987",
"1982903250",
"2022178479",
"2155021187"
],
"abstract": [
"A method is developed for incorporating diffusion of chemicals in complex geometries into stochastic chemical kinetics simulations. Systems are modeled using the reaction-diffusion master equation, with jump rates for diffusive motion between mesh cells calculated from the discretization weights of an embedded boundary method. Since diffusive jumps between cells are treated as first order reactions, individual realizations of the stochastic process can be created by the Gillespie method. Numerical convergence results for the underlying embedded boundary method, and for the stochastic reaction-diffusion method, are presented in two dimensions. A two-dimensional model of transcription, translation, and nuclear membrane transport in eukaryotic cells is presented to demonstrate the feasibility of the method in studying cell-wide biological processes.",
"Spontaneous separation of bi-stable biochemical systems into spatial domains of opposite phases",
"Quantitative analysis of biochemical networks often requires consideration of both spatial and stochastic aspects of chemical processes. Despite significant progress in the field, it is still computationally prohibitive to simulate systems involving many reactants or complex geometries using a microscopic framework that includes the finest length and time scales of diffusion-limited molecular interactions. For this reason, spatially or temporally discretized simulations schemes are commonly used when modeling intracellular reaction networks. The challenge in defining such coarse-grained models is to calculate the correct probabilities of reaction given the microscopic parameters and the uncertainty in the molecular positions introduced by the spatial or temporal discretization. In this paper we have solved this problem for the spatially discretized Reaction-Diffusion Master Equation; this enables a seamless and physically consistent transition from the microscopic to the macroscopic frameworks of reaction-diffusion kinetics. We exemplify the use of the methods by showing that a phosphorylation-dephosphorylation motif, commonly observed in eukaryotic signaling pathways, is predicted to display fluctuations that depend on the geometry of the system.",
"The reaction–diffusion master equation (RDME) is a model for chemical systems in which both noise in the chemical reaction process and the diffusion of molecules are important. It extends the chemical master equation for well-mixed chemical reactions by discretizing space into a collection of voxels. In this work, we show how the RDME may be rewritten as an equivalent 'particle tracking' model following the motion and interaction of individual molecules on a lattice. This new representation can be interpreted as a discrete version of the spatially-continuous 'probability distribution function' stochastic reaction–diffusion model studied by Doi. We show how this new representation can be mapped to a quantum field theory, complementing the existing work by Peliti mapping the RDME, and Doi mapping his spatially continuous model, to quantum field theories. The formal continuum limit, as the voxel size approaches zero, of the 'particle tracking' representation is studied to consider the question of whether the RDME approximates any spatially continuous model."
]
}
|
1204.4560
|
2949162286
|
The placement of wind turbines on a given area of land such that the wind farm produces a maximum amount of energy is a challenging optimization problem. In this article, we tackle this problem, taking into account wake effects that are produced by the different turbines on the wind farm. We significantly improve upon existing results for the minimization of wake effects by developing a new problem-specific local search algorithm. One key step in the speed-up of our algorithm is the reduction in computation time needed to assess a given wind farm layout compared to previous approaches. Our new method allows the optimization of large real-world scenarios within a single night on a standard computer, whereas weeks on specialized computing servers were required for previous approaches.
|
The optimal siting of wind turbines on a given area of land is a complex optimization problem which is hard to solve by exact methods. The decision space is non-linear with respect to how sited turbines interact when considering wake loss and energy capture. Several bio-inspired computation techniques, such as evolutionary algorithms @cite_15 @cite_12 and particle swarm optimization @cite_6 , have been used for the optimization.
|
{
"cite_N": [
"@cite_15",
"@cite_6",
"@cite_12"
],
"mid": [
"1512383952",
"",
"1639032689"
],
"abstract": [
"The overall structure of this new edition is three-tier: Part I presents the basics, Part II is concerned with methodological issues, and Part III discusses advanced topics. In the second edition the authors have reorganized the material to focus on problems, how to represent them, and then how to choose and design algorithms for different representations. They also added a chapter on problems, reflecting the overall book focus on problem-solvers, a chapter on parameter tuning, which they combined with the parameter control and \"how-to\" chapters into a methodological part, and finally a chapter on evolutionary robotics with an outlook on possible exciting developments in this field. The book is suitable for undergraduate and graduate courses in artificial intelligence and computational intelligence, and for self-study by practitioners and researchers engaged with all aspects of bioinspired design and optimization.",
"",
"From the Publisher: This book brings together - in an informal and tutorial fashion - the computer techniques, mathematical tools, and research results that will enable both students and practitioners to apply genetic algorithms to problems in many fields. Major concepts are illustrated with running examples, and major algorithms are illustrated by Pascal computer programs. No prior knowledge of GAs or genetics is assumed, and only a minimum of computer programming and mathematics background is required."
]
}
|
1204.4347
|
1791923622
|
We present abstraction techniques that transform a given non-linear dynamical system into a linear system or an algebraic system described by polynomials of bounded degree, such that, invariant properties of the resulting abstraction can be used to infer invariants for the original system. The abstraction techniques rely on a change-of-basis transformation that associates each state variable of the abstract system with a function involving the state variables of the original system. We present conditions under which a given change of basis transformation for a non-linear system can define an abstraction. Furthermore, the techniques developed here apply to continuous systems defined by Ordinary Differential Equations (ODEs), discrete systems defined by transition systems and hybrid systems that combine continuous as well as discrete subsystems. The techniques presented here allow us to discover, given a non-linear system, if a change of bases transformation involving degree-bounded polynomials yielding an algebraic abstraction exists. If so, our technique yields the resulting abstract system, as well. This approach is further extended to search for a change of bases transformation that abstracts a given non-linear system into a system of linear differential inclusions. Our techniques enable the use of analysis techniques for linear systems to infer invariants for non-linear systems. We present preliminary evidence of the practical feasibility of our ideas using a prototype implementation.
|
Many different types of abstractions have been studied for hybrid systems @cite_12 , including predicate abstraction @cite_30 and abstractions based on invariants @cite_17 . The use of counter-example guided iterative abstraction-refinement has also been investigated in the past (Cf. @cite_18 and @cite_9 , for example). In this paper, we consider continuous abstractions for continuous systems specified as ODEs, discrete systems and hybrid systems using a change of bases transformation. As noted above, not all transformations can be used for this purpose. Our abstractions for ODEs bear similarities to the notion of topological semi-conjugacy between flows of dynamical systems @cite_13 .
|
{
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_9",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2060369194",
"1793594870",
"2127574686",
"226431050",
"2117218135",
"2138239212"
],
"abstract": [
"We present a procedure for constructing sound finite-state discrete abstractions of hybrid systems. This procedure uses ideas from predicate abstraction to abstract the discrete dynamics and qualitative reasoning to abstract the continuous dynamics of the hybrid system. It relies on the ability to decide satisfiability of quantifier-free formulas in some theory rich enough to encode the hybrid system. We characterize the sets of predicates that can be used to create high quality abstractions and we present new approaches to discover such useful sets of predicates. Under certain assumptions, the abstraction procedure can be applied compositionally to abstract a hybrid system described as a composition of two hybrid automata. We show that the constructed abstractions are always sound, but are relatively complete only under certain assumptions.",
"Predicate abstraction has emerged to be a powerful technique for extracting finite-state models from infinite-state systems, and has been recently shown to enhance the effectiveness of the reachability computation techniques for hybrid systems. Given a hybrid system with linear dynamics and a set of linear predicates, the verifier performs an on-the-fly search of the finite discrete quotient whose states correspond to the truth assignments to the input predicates. The success of this approach depends on the choice of the predicates used for abstraction. In this paper, we focus on identifying these predicates automatically by analyzing spurious counter-examples generated by the search in the abstract state-space. We present the basic techniques for discovering new predicates that will rule out closely related spurious counter-examples, optimizations of these techniques, implementation of these in the verification tool, and case studies demonstrating the promise of the approach.",
"The state explosion problem remains a major hurdle in applying symbolic model checking to large hardware designs. State space abstraction, having been essential for verifying designs of industrial complexity, is typically a manual process, requiring considerable creativity and insight.In this article, we present an automatic iterative abstraction-refinement methodology that extends symbolic model checking. In our method, the initial abstract model is generated by an automatic analysis of the control structures in the program to be verified. Abstract models may admit erroneous (or \"spurious\") counterexamples. We devise new symbolic techniques that analyze such counterexamples and refine the abstract model correspondingly. We describe aSMV, a prototype implementation of our methodology in NuSMV. Practical experiments including a large Fujitsu IP core design with about 500 latches and 10000 lines of SMV code confirm the effectiveness of our approach.",
"Preface List of figures List of tables 1. Introduction 2. Linear systems 3. Existence and uniqueness 4. Dynamical systems 5. Invariant manifolds 6. The phase plane 7. Chaotic dynamics 8. Bifurcation theory 9. Hamiltonian dynamics A. Mathematical software Bibliography Index.",
"A hybrid system is a dynamical system with both discrete and continuous state changes. For analysis purposes, it is often useful to abstract a system in a way that preserves the properties being analysed while hiding the details that are of no interest. We show that interesting classes of hybrid systems can be abstracted to purely discrete systems while preserving all properties that are definable in temporal logic. The classes that permit discrete abstractions fall into two categories. Either the continuous dynamics must be restricted, as is the case for timed and rectangular hybrid systems, or the discrete dynamics must be restricted, as is the case for o-minimal hybrid systems. In this paper, we survey and unify results from both areas.",
"Hybrid systems combine discrete state dynamics which model mode switching, with continuous state dynamics which model physical processes. Hybrid systems can be controlled by affecting both their discrete mode logic and continuous dynamics: in many systems, such as commercial aircraft, these can be controlled both automatically and using manual control. A human interacting with a hybrid system is often presented, through information displays, with a simplified representation of the underlying system. This user interface should not overwhelm the human with unnecessary information, and thus usually contains only a subset of information about the true system model, yet, if properly designed, represents an abstraction of the true system which the human is able to use to safely interact with the system. In safety-critical systems, correct and succinct interfaces are paramount: interfaces must provide adequate information and must not confuse the user. We present an invariance-preserving abstraction which generates a discrete event system that can be used to analyze, verify, or design user-interfaces for hybrid human-automation systems. This abstraction is based on hybrid system reachability analysis, in which, through the use of a recently developed computational tool, we find controlled invariant regions satisfying a priori safety constraints for each mode, and the controller that must be applied on the boundaries of the computed sets to render the sets invariant. By assigning a discrete state to each computed invariant set, we create a discrete event system representation which reflects the safety properties of the hybrid system. This abstraction, along with the formulation of an interface model as a discrete event system, allows the use of discrete techniques for interface analysis, including existing interface verification and design methods. 
We apply the abstraction method to two examples: a car traveling through a yellow light at an intersection, and an aircraft autopilot in a landing go-around maneuver."
]
}
|
1204.4347
|
1791923622
|
We present abstraction techniques that transform a given non-linear dynamical system into a linear system or an algebraic system described by polynomials of bounded degree, such that, invariant properties of the resulting abstraction can be used to infer invariants for the original system. The abstraction techniques rely on a change-of-basis transformation that associates each state variable of the abstract system with a function involving the state variables of the original system. We present conditions under which a given change of basis transformation for a non-linear system can define an abstraction. Furthermore, the techniques developed here apply to continuous systems defined by Ordinary Differential Equations (ODEs), discrete systems defined by transition systems and hybrid systems that combine continuous as well as discrete subsystems. The techniques presented here allow us to discover, given a non-linear system, if a change of bases transformation involving degree-bounded polynomials yielding an algebraic abstraction exists. If so, our technique yields the resulting abstract system, as well. This approach is further extended to search for a change of bases transformation that abstracts a given non-linear system into a system of linear differential inclusions. Our techniques enable the use of analysis techniques for linear systems to infer invariants for non-linear systems. We present preliminary evidence of the practical feasibility of our ideas using a prototype implementation.
|
Fixed point techniques for deriving invariants of differential equations have been proposed by the author in previous papers @cite_37 @cite_41 . These techniques have addressed the derivation of polyhedral invariants for affine systems @cite_37 and algebraic invariants for systems with polynomial right-hand sides @cite_41 . In this work, we likewise employ the machinery of fixed points. Our primary goal, however, is not to derive invariants per se, but to search for abstractions of non-linear systems into linear systems.
|
{
"cite_N": [
"@cite_41",
"@cite_37"
],
"mid": [
"2137258051",
"2477967262"
],
"abstract": [
"We present computational techniques for automatically generating algebraic (polynomial equality) invariants for algebraic hybrid systems. Such systems involve ordinary differential equations with multivariate polynomial right-hand sides. Our approach casts the problem of generating invariants for differential equations as the greatest fixed point of a monotone operator over the lattice of ideals in a polynomial ring. We provide an algorithm to compute this monotone operator using basic ideas from commutative algebraic geometry. However, the resulting iteration sequence does not always converge to a fixed point, since the lattice of ideals over a polynomial ring does not satisfy the descending chain condition. We then present a bounded-degree relaxation based on the concept of \"pseudo ideals\", due to Colon, that restricts ideal membership using multipliers with bounded degrees. We show that the monotone operator on bounded degree pseudo ideals is convergent and generates fixed points that can be used to generate useful algebraic invariants for non-linear systems. The technique for continuous systems is then extended to consider hybrid systems with multiple modes and discrete transitions between modes. We have implemented the exact, non-convergent iteration over ideals in combination with the bounded degree iteration over pseudo ideals to guarantee convergence. This has been applied to automatically infer useful and interesting polynomial invariants for some benchmark non-linear systems.",
"We investigate techniques for automatically generating symbolic approximations to the time solution of a system of differential equations. This is an important primitive operation for the safety analysis of continuous and hybrid systems. In this paper we design a time elapse operator that computes a symbolic over-approximation of time solutions to a continuous system starting from a given initial region. Our approach is iterative over the cone of functions (drawn from a suitable universe) that are non negative over the initial region. At each stage, we iteratively remove functions from the cone whose Lie derivatives do not lie inside the current iterate. If the iteration converges, the set of states defined by the final iterate is shown to contain all the time successors of the initial region. The convergence of the iteration can be forced using abstract interpretation operations such as widening and narrowing. We instantiate our technique to linear hybrid systems with piecewise-affine dynamics to compute polyhedral approximations to the time successors. Using our prototype implementation TIMEPASS, we demonstrate the performance of our technique on benchmark examples."
]
}
|
1204.4347
|
1791923622
|
We present abstraction techniques that transform a given non-linear dynamical system into a linear system or an algebraic system described by polynomials of bounded degree, such that, invariant properties of the resulting abstraction can be used to infer invariants for the original system. The abstraction techniques rely on a change-of-basis transformation that associates each state variable of the abstract system with a function involving the state variables of the original system. We present conditions under which a given change of basis transformation for a non-linear system can define an abstraction. Furthermore, the techniques developed here apply to continuous systems defined by Ordinary Differential Equations (ODEs), discrete systems defined by transition systems and hybrid systems that combine continuous as well as discrete subsystems. The techniques presented here allow us to discover, given a non-linear system, if a change of bases transformation involving degree-bounded polynomials yielding an algebraic abstraction exists. If so, our technique yields the resulting abstract system, as well. This approach is further extended to search for a change of bases transformation that abstracts a given non-linear system into a system of linear differential inclusions. Our techniques enable the use of analysis techniques for linear systems to infer invariants for non-linear systems. We present preliminary evidence of the practical feasibility of our ideas using a prototype implementation.
|
Finally, our approach is closely related to the Carleman linearization technique that can be used to linearize a given differential equation with polynomial right-hand sides @cite_24 . The standard Carleman embedding technique creates an infinite dimensional linear system, wherein each dimension corresponds to a monomial or a basis polynomial. In practice, it is possible to create a linear approximation with known error bounds by truncating the monomial terms beyond a degree cutoff. Our approach for differential equation abstractions can be seen as a search for a finite "submatrix" inside the infinite matrix created by the Carleman linearization. The rows and columns of this submatrix correspond to monomials such that the derivative of each monomial in the submatrix is a linear combination of monomials that belong to the submatrix. Note, however, that while Carleman embedding is defined using some basis for polynomials (usually power-products), our approach can derive transformations that may involve polynomials as opposed to just power-products.
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"1520892678"
],
"abstract": [
"The Carleman linearization has become a new powerful tool in the study of nonlinear dynamical systems. Nevertheless, there is the general lack of familiarity with the Carleman embedding technique among those working in the field of nonlinear models. This book provides a systematic presentation of the Carleman linearization, its generalizations and applications. It also includes a review of existing alternative methods for linearization of nonlinear dynamical systems. There are probably no books covering such a wide spectrum of linearization algorithms. This book also gives a comprehensive introduction to the Kronecker product of matrices, whereas most books deal with it only superficially. The Kronecker product of matrices plays an important role in mathematics and in applications found in theoretical physics."
]
}
|
1204.4347
|
1791923622
|
We present abstraction techniques that transform a given non-linear dynamical system into a linear system or an algebraic system described by polynomials of bounded degree, such that, invariant properties of the resulting abstraction can be used to infer invariants for the original system. The abstraction techniques rely on a change-of-basis transformation that associates each state variable of the abstract system with a function involving the state variables of the original system. We present conditions under which a given change of basis transformation for a non-linear system can define an abstraction. Furthermore, the techniques developed here apply to continuous systems defined by Ordinary Differential Equations (ODEs), discrete systems defined by transition systems and hybrid systems that combine continuous as well as discrete subsystems. The techniques presented here allow us to discover, given a non-linear system, if a change of bases transformation involving degree-bounded polynomials yielding an algebraic abstraction exists. If so, our technique yields the resulting abstract system, as well. This approach is further extended to search for a change of bases transformation that abstracts a given non-linear system into a system of linear differential inclusions. Our techniques enable the use of analysis techniques for linear systems to infer invariants for non-linear systems. We present preliminary evidence of the practical feasibility of our ideas using a prototype implementation.
|
The rest of this paper first presents our approach for Ordinary Differential Equations. The ideas for discrete systems are then developed, first for simple loops and then for arbitrary discrete programs modeled by transition systems. The extensions to hybrid systems are presented briefly by suitably merging the techniques for discrete programs with those for ODEs. Finally, we present an evaluation of the ideas using our implementation, which combines an automatic search for CoB transformations with polyhedral invariant generation for continuous, discrete and hybrid systems @cite_32 @cite_5 @cite_37 .
|
{
"cite_N": [
"@cite_5",
"@cite_37",
"@cite_32"
],
"mid": [
"1502028089",
"2477967262",
""
],
"abstract": [
"Linear Relation Analysis [11] is an abstract interpretation devoted to the automatic discovery of invariant linear inequalities among numerical variables of a program. In this paper, we apply such an analysis to the verification of quantitative time properties of two kinds of systems: synchronous programs and linear hybrid systems.",
"We investigate techniques for automatically generating symbolic approximations to the time solution of a system of differential equations. This is an important primitive operation for the safety analysis of continuous and hybrid systems. In this paper we design a time elapse operator that computes a symbolic over-approximation of time solutions to a continuous system starting from a given initial region. Our approach is iterative over the cone of functions (drawn from a suitable universe) that are non negative over the initial region. At each stage, we iteratively remove functions from the cone whose Lie derivatives do not lie inside the current iterate. If the iteration converges, the set of states defined by the final iterate is shown to contain all the time successors of the initial region. The convergence of the iteration can be forced using abstract interpretation operations such as widening and narrowing. We instantiate our technique to linear hybrid systems with piecewise-affine dynamics to compute polyhedral approximations to the time successors. Using our prototype implementation TIMEPASS, we demonstrate the performance of our technique on benchmark examples.",
""
]
}
|
1204.3293
|
2155545529
|
Determining whether an unordered collection of overlapping substrings (called shingles) can be uniquely decoded into a consistent string is a problem that lies within the foundation of a broad assortment of disciplines ranging from networking and information theory through cryptography and even genetic engineering and linguistics. We present three perspectives on this problem: a graph theoretic framework due to Pevzner, an automata theoretic approach from our previous work, and a new insight that yields a time-optimal streaming algorithm for determining whether a string of @math characters over the alphabet @math can be uniquely decoded from its two-character shingles. Our algorithm achieves an overall time complexity @math and space complexity @math . As an application, we demonstrate how this algorithm can be extended to larger shingles for efficient string reconciliation.
|
It was shown in @cite_2 that the collection of strings having a unique reconstruction from the shingles representation is a regular language. Following up, Li and Xie @cite_18 gave an explicit construction of a deterministic finite-state automaton (DFA) recognizing this language. Our work in @cite_21 has demonstrated that there is no DFA of subexponential size for recognizing this language, and instead we have exhibited an equivalent NFA with @math states.
|
{
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_2"
],
"mid": [
"2041411237",
"",
"1977458360"
],
"abstract": [
"Symbolic sequences uniquely reconstructible from all their substrings of length k compose a regular factorial language. We thoroughly characterize this language by its minimal forbidden words, and explicitly build up a deterministic finite automaton that accepts it. This provides an efficient on-line algorithm for testing the unique reconstructibility of the sequences.",
"",
"We define the family of n-gram embeddings from strings over a finite alphabet into the semimodule NK. We classify all ξ ∈ NK that are valid images of strings under such embeddings, as well as all ξ whose inverse image consists of exactly 1 string (we call such ξ uniquely decodable). We prove that for a fixed alphabet, the set of all strings whose image is uniquely decodable is a regular language."
]
}
|
1204.3293
|
2155545529
|
Determining whether an unordered collection of overlapping substrings (called shingles) can be uniquely decoded into a consistent string is a problem that lies within the foundation of a broad assortment of disciplines ranging from networking and information theory through cryptography and even genetic engineering and linguistics. We present three perspectives on this problem: a graph theoretic framework due to Pevzner, an automata theoretic approach from our previous work, and a new insight that yields a time-optimal streaming algorithm for determining whether a string of @math characters over the alphabet @math can be uniquely decoded from its two-character shingles. Our algorithm achieves an overall time complexity @math and space complexity @math . As an application, we demonstrate how this algorithm can be extended to larger shingles for efficient string reconciliation.
|
The problem of determining the minimum number of edits (insertions or deletions) required to transform one string into another has a long history in the literature @cite_23 @cite_30 . Orlitsky @cite_5 shows that the amount of communication @math needed to reconcile two strings @math and @math (of lengths @math and @math respectively) that are known to be at most @math edits apart is at most @math for @math , although he leaves an efficient one-way protocol as an open question.
|
{
"cite_N": [
"@cite_30",
"@cite_5",
"@cite_23"
],
"mid": [
"1990061958",
"",
"2752885492"
],
"abstract": [
"Part I. Exact String Matching: The Fundamental String Problem: 1. Exact matching: fundamental preprocessing and first algorithms 2. Exact matching: classical comparison-based methods 3. Exact matching: a deeper look at classical methods 4. Semi-numerical string matching Part II. Suffix Trees and their Uses: 5. Introduction to suffix trees 6. Linear time construction of suffix trees 7. First applications of suffix trees 8. Constant time lowest common ancestor retrieval 9. More applications of suffix trees Part III. Inexact Matching, Sequence Alignment and Dynamic Programming: 10. The importance of (sub)sequence comparison in molecular biology 11. Core string edits, alignments and dynamic programming 12. Refining core string edits and alignments 13. Extending the core problems 14. Multiple string comparison: the Holy Grail 15. Sequence database and their uses: the motherlode Part IV. Currents, Cousins and Cameos: 16. Maps, mapping, sequencing and superstrings 17. Strings and evolutionary trees 18. Three short topics 19. Models of genome-level mutations.",
"",
"From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning."
]
}
|
1204.3293
|
2155545529
|
Determining whether an unordered collection of overlapping substrings (called shingles) can be uniquely decoded into a consistent string is a problem that lies within the foundation of a broad assortment of disciplines ranging from networking and information theory through cryptography and even genetic engineering and linguistics. We present three perspectives on this problem: a graph theoretic framework due to Pevzner, an automata theoretic approach from our previous work, and a new insight that yields a time-optimal streaming algorithm for determining whether a string of @math characters over the alphabet @math can be uniquely decoded from its two-character shingles. Our algorithm achieves an overall time complexity @math and space complexity @math . As an application, we demonstrate how this algorithm can be extended to larger shingles for efficient string reconciliation.
|
The problem of set reconciliation seeks to reconcile two remote sets @math and @math of @math -bit integers using minimum communication. The approach in @cite_0 involves translating the set elements into an equivalent polynomial representation, so that the problem of set reconciliation is reduced to an equivalent problem of rational function interpolation, much like in Reed-Solomon decoding @cite_15 .
|
{
"cite_N": [
"@cite_0",
"@cite_15"
],
"mid": [
"2142423295",
"1606480398"
],
"abstract": [
"We consider the problem of efficiently reconciling two similar sets held by different hosts while minimizing the communication complexity. This type of problem arises naturally from gossip protocols used for the distribution of information, but has other applications as well. We describe an approach to such reconciliation based on the encoding of sets as polynomials. The resulting protocols exhibit tractable computational complexity and nearly optimal communication complexity. Moreover, these protocols can be adapted to work over a broadcast channel, allowing many clients to reconcile with one host based on a single broadcast.",
"Linear Codes. Nonlinear Codes, Hadamard Matrices, Designs and the Golay Code. An Introduction to BCH Codes and Finite Fields. Finite Fields. Dual Codes and Their Weight Distribution. Codes, Designs and Perfect Codes. Cyclic Codes. Cyclic Codes: Idempotents and Mattson-Solomon Polynomials. BCH Codes. Reed-Solomon and Justesen Codes. MDS Codes. Alternant, Goppa and Other Generalized BCH Codes. Reed-Muller Codes. First-Order Reed-Muller Codes. Second-Order Reed-Muller, Kerdock and Preparata Codes. Quadratic-Residue Codes. Bounds on the Size of a Code. Methods for Combining Codes. Self-dual Codes and Invariant Theory. The Golay Codes. Association Schemes. Appendix A. Tables of the Best Codes Known. Appendix B. Finite Geometries. Bibliography. Index."
]
}
|
1204.3447
|
2952802027
|
Despite the central role of mobility in wireless networks, analytical study of its impact on network performance is notoriously difficult. This paper aims to address this gap by proposing a random waypoint (RWP) mobility model defined on the entire plane and applying it to analyze two key cellular network parameters: handover rate and sojourn time. We first analyze the stochastic properties of the proposed model and compare it to two other models: the classical RWP mobility model and a synthetic truncated Levy walk model constructed from real mobility trajectories. The comparison shows that the proposed RWP mobility model is more appropriate for mobility simulation in emerging cellular networks, which have ever-smaller cells. Then we apply the proposed model to cellular networks under both deterministic (hexagonal) and random (Poisson) base station (BS) models. We present analytic expressions for both handover rate and sojourn time, which have the expected property that the handover rate is proportional to the square root of the BS density. Compared to an actual BS distribution, we find that the Poisson-Voronoi model is about as accurate in terms of mobility evaluation as the hexagonal model, though it is more pessimistic in that it predicts a higher handover rate and a lower sojourn time.
|
The proposed RWP model is based on the one originally proposed in @cite_18 . Due to its simplicity in modeling the movement patterns of mobile nodes, the classical RWP mobility model has been extensively studied in the literature @cite_42 @cite_5 @cite_41 @cite_21 . These studies analyzed various stochastic mobility parameters, including transition length, transition time, direction switch rate, and spatial node distribution. When it comes to applying the mobility model to hexagonal cellular networks, simulations are often required to study the impact of mobility, since exact analysis is difficult @cite_36 @cite_37 . Nonetheless, the effect of the classical RWP mobility model on cellular networks has been briefly analyzed in @cite_41 , and a more detailed study can be found in @cite_13 . However, as remarked above, the classical RWP model may not be convenient in some cases. In contrast, we analyze and obtain insight about the impact of mobility under a hexagonal model by applying the relatively clean characterization of the proposed RWP model, as @cite_13 did with the classical RWP model.
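As a concrete illustration of one of these stochastic parameters, the sketch below (ours; it simulates the classical RWP model on the unit square, not the paper's planar variant) estimates the mean transition length. For uniform waypoints on the unit square this equals the mean distance between two independent uniform points, (2 + sqrt(2) + 5 asinh 1)/15 ≈ 0.5214.

```python
import math
import random

def rwp_transition_lengths(n, rng):
    # Classical RWP on the unit square: waypoints are i.i.d. uniform, and a
    # transition length is the distance between two successive waypoints.
    pts = [(rng.random(), rng.random()) for _ in range(n + 1)]
    return [math.dist(p, q) for p, q in zip(pts, pts[1:])]

rng = random.Random(0)
lengths = rwp_transition_lengths(200_000, rng)
mean_length = sum(lengths) / len(lengths)

# Closed form for the unit square: (2 + sqrt(2) + 5*asinh(1)) / 15 ~= 0.5214
exact = (2 + math.sqrt(2) + 5 * math.asinh(1)) / 15
```

The Monte Carlo estimate converges to the closed-form constant, matching the kind of transition-length analysis cited above.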
|
{
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_41",
"@cite_36",
"@cite_42",
"@cite_21",
"@cite_5",
"@cite_13"
],
"mid": [
"2157457404",
"",
"1966415263",
"2137463529",
"1968560005",
"",
"",
"2044008490"
],
"abstract": [
"An ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any established infrastructure or centralized administration. In such an environment, it may be necessary for one mobile host to enlist the aid of other hosts in forwarding a packet to its destination, due to the limited range of each mobile host’s wireless transmissions. This paper presents a protocol for routing in ad hoc networks that uses dynamic source routing. The protocol adapts quickly to routing changes when host movement is frequent, yet requires little or no overhead during periods in which hosts move less frequently. Based on results from a packet-level simulation of mobile hosts operating in an ad hoc network, the protocol performs well over a variety of environmental conditions such as host density and movement rates. For all but the highest rates of host movement simulated, the overhead of the protocol is quite low, falling to just 1 of total data packets transmitted for moderate movement rates in a network of 24 mobile hosts. In all cases, the difference in length between the routes used and the optimal route lengths is negligible, and in most cases, route lengths are on average within a factor of 1.01 of optimal.",
"",
"The random waypoint model is a commonly used mobility model in the simulation of ad hoc networks. It is known that the spatial distribution of network nodes moving according to this model is, in general, nonuniform. However, a closed-form expression of this distribution and an in-depth investigation is still missing. This fact impairs the accuracy of the current simulation methodology of ad hoc networks and makes it impossible to relate simulation-based performance results to corresponding analytical results. To overcome these problems, we present a detailed analytical study of the spatial node distribution generated by random waypoint mobility. More specifically, we consider a generalization of the model in which the pause time of the mobile nodes is chosen arbitrarily in each waypoint and a fraction of nodes may remain static for the entire simulation time. We show that the structure of the resulting distribution is the weighted sum of three independent components: the static, pause, and mobility component. This division enables us to understand how the model's parameters influence the distribution. We derive an exact equation of the asymptotically stationary distribution for movement on a line segment and an accurate approximation for a square area. The good quality of this approximation is validated through simulations using various settings of the mobility parameters. In summary, this article gives a fundamental understanding of the behavior of the random waypoint model.",
"This paper describes current and proposed protocols for mobility management for public land mobile network (PLMN)-based networks, mobile Internet protocol (IP) wireless asynchronous transfer mode (ATM) and satellite networks. The integration of these networks will be discussed in the context of the next evolutionary step of wireless communication networks. First, a review is provided of location management algorithms for personal communication systems (PCS) implemented over a PLMN network. The latest protocol changes for location registration and handoff are investigated for mobile IP followed by a discussion of proposed protocols for wireless ATM and satellite networks. Finally, an outline of open problems to be addressed by the next generation of wireless network service is discussed.",
"The random waypoint model is a commonly used mobility model for simulations of wireless communication networks. By giving a formal description of this model in terms of a discrete-time stochastic process, we investigate some of its fundamental stochastic properties with respect to: (a) the transition length and time of a mobile node between two waypoints, (b) the spatial distribution of nodes, (c) the direction angle at the beginning of a movement transition, and (d) the cell change rate if the model is used in a cellular-structured system area. The results of this paper are of practical value for performance analysis of mobile networks and give a deeper understanding of the behavior of this mobility model. Such understanding is necessary to avoid misinterpretation of simulation results. The movement duration and the cell change rate enable us to make a statement about the \"degree of mobility\" of a certain simulation scenario. Knowledge of the spatial node distribution is essential for all investigations in which the relative location of the mobile nodes is important. Finally, the direction distribution explains in an analytical manner the effect that nodes tend to move back to the middle of the system area.",
"",
"",
"In this paper we study the so-called random waypoint (RWP) mobility model in the context of cellular networks. In the RWP model the nodes, i.e., mobile users, move along a zigzag path consisting of straight legs from one waypoint to the next. Each waypoint is assumed to be drawn from the uniform distribution over the given convex domain. In this paper we characterise the key performance measures, mean handover rate and mean sojourn time from the point of view of an arbitrary cell, as well as the mean handover rate in the network. To this end, we present an exact analytical formula for the mean arrival rate across an arbitrary curve. This result together with the pdf of the node location, allows us to compute all other interesting measures. The results are illustrated by several numerical examples. For instance, as a straightforward application of these results one can easily adjust the model parameters in a simulation so that the scenario matches well with, e.g., the measured sojourn times in a cell."
]
}
|
1204.2581
|
2152877576
|
In this paper we address the problem of modeling relational data, which appear in many applications such as social network analysis, recommender systems and bioinformatics. Previous studies either consider latent feature based models while disregarding local structure in the network, or focus exclusively on capturing local structure of objects based on latent blockmodels without coupling them with latent characteristics of objects. To combine the benefits of the previous work, we propose a novel model that can simultaneously incorporate the effect of latent features and covariates if any, as well as the effect of latent structure that may exist in the data. To achieve this, we model the relation graph as a function of both latent feature factors and latent cluster memberships of objects to collectively discover globally predictive intrinsic properties of objects and capture latent block structure in the network to improve prediction performance. We also develop an optimization transfer algorithm based on the generalized EM-style strategy to learn the latent factors. We demonstrate the efficacy of our proposed model on the link prediction and cluster analysis tasks, and extensive experiments on synthetic data and several real-world datasets suggest that our proposed LFBM model outperforms the other state-of-the-art approaches on the evaluated tasks.
|
: Latent feature models are based on matrix or tensor factorization; they learn a distributed representation for each object and each relation, and then make predictions by taking appropriate inner products. Their strength lies in the relative ease of their continuous optimization and in their excellent predictive performance. Representative models are the Multiplicative Latent Factor Model (MLFM) @cite_1 and the Generalized Latent Factor Model (GLFM) @cite_5 . For example, MLFM includes both the latent class model and the latent distance model as special cases, and can to some extent capture both homophily and stochastic equivalence in networks. However, these kinds of latent factor models are often hard to interpret, and it is difficult to analyze the learned latent structure. A log-linear model with latent features has also been proposed for dyadic prediction in relational data @cite_16 .
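The inner-product idea behind these models can be sketched in a few lines (a plain symmetric low-rank factorization, not MLFM or GLFM themselves): learn one latent feature vector per node so that inner products approximate the adjacency matrix, then score candidate links by their inner product.

```python
import numpy as np

# Toy symmetric adjacency matrix: a triangle {0,1,2} with pendant node 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
U = 0.1 * rng.standard_normal((4, 2))   # one 2-d latent feature vector per node

# Gradient descent on ||U U^T - A||_F^2 (gradient up to a constant factor).
for _ in range(3000):
    U -= 0.05 * (U @ U.T - A) @ U

scores = U @ U.T                        # inner products = predicted link strengths
err = np.linalg.norm(scores - A)
```

After fitting, observed pairs such as (0, 1) receive higher inner-product scores than non-edges such as (0, 3), which is exactly how latent feature models rank candidate links.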
|
{
"cite_N": [
"@cite_5",
"@cite_16",
"@cite_1"
],
"mid": [
"1348457",
"2106502572",
"1996919757"
],
"abstract": [
"Homophily and stochastic equivalence are two primary features of interest in social networks. Recently, the multiplicative latent factor model (MLFM) is proposed to model social networks with directed links. Although MLFM can capture stochastic equivalence, it cannot model well homophily in networks. However, many real-world networks exhibit homophily or both homophily and stochastic equivalence, and hence the network structure of these networks cannot be modeled well by MLFM. In this paper, we propose a novel model, called generalized latent factor model (GLFM), for social network analysis by enhancing homophily modeling in MLFM. We devise a minorization-maximization (MM) algorithm with linear-time complexity and convergence guarantee to learn the model parameters. Extensive experiments on some real-world networks show that GLFM can effectively model homophily to dramatically outperform state-of-the-art methods.",
"In dyadic prediction, labels must be predicted for pairs (dyads) whose members possess unique identifiers and, sometimes, additional features called side-information. Special cases of this problem include collaborative filtering and link prediction. We present a new log-linear model for dyadic prediction that is the first to satisfy several important desiderata: (i) labels may be ordinal or nominal, (ii) side-information can be easily exploited if present, (iii) with or without side-information, latent features are inferred for dyad members, (iv) the model is resistant to sample-selection bias, (v) it can learn well-calibrated probabilities, and (vi) it can scale to large datasets. To our knowledge, no existing method satisfies all the above criteria. In particular, many methods assume that the labels are binary or numerical, and cannot use side-information. Experimental results show that the new method is competitive with previous specialized methods for collaborative filtering and link prediction. Other experimental results demonstrate that the new method succeeds for dyadic prediction tasks where previous methods cannot be used. In particular, the new method predicts nominal labels accurately, and by using side-information it solves the cold-start problem in collaborative filtering.",
"We discuss a statistical model of social network data derived from matrix representations and symmetry considerations. The model can include known predictor information in the form of a regression term, and can represent additional structure via sender-specific and receiver-specific latent factors. This approach allows for the graphical description of a social network via the latent factors of the nodes, and provides a framework for the prediction of missing links in network data."
]
}
|
1204.2581
|
2152877576
|
In this paper we address the problem of modeling relational data, which appear in many applications such as social network analysis, recommender systems and bioinformatics. Previous studies either consider latent feature based models while disregarding local structure in the network, or focus exclusively on capturing local structure of objects based on latent blockmodels without coupling them with latent characteristics of objects. To combine the benefits of the previous work, we propose a novel model that can simultaneously incorporate the effect of latent features and covariates if any, as well as the effect of latent structure that may exist in the data. To achieve this, we model the relation graph as a function of both latent feature factors and latent cluster memberships of objects to collectively discover globally predictive intrinsic properties of objects and capture latent block structure in the network to improve prediction performance. We also develop an optimization transfer algorithm based on the generalized EM-style strategy to learn the latent factors. We demonstrate the efficacy of our proposed model on the link prediction and cluster analysis tasks, and extensive experiments on synthetic data and several real-world datasets suggest that our proposed LFBM model outperforms the other state-of-the-art approaches on the evaluated tasks.
|
: Latent structure based models provide latent building blocks for complex networks and allow us to understand and predict unknown interactions between network nodes. For example, stochastic blockmodels @cite_8 adopt mixture models for relational data; in this model, each node is sampled from a cluster based on a multinomial distribution. To allow a node to belong to multiple groups, @cite_4 developed mixed membership stochastic blockmodels, which use a latent Dirichlet allocation prior to model latent membership variables. The authors of @cite_15 proposed the Predictive Discrete Latent Factor (PDLF) model to predict large-scale dyadic response variables; the model simultaneously incorporates the effect of covariates and estimates the local structure induced by interactions among the dyads through a discrete latent factor model. Similar work appears in @cite_12 . Another related line of research is relational clustering: @cite_6 proposed a general model based on symmetric convex coding, and @cite_13 developed a probabilistic framework that unifies several relational clustering tasks.
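The generative assumption behind stochastic blockmodels is simple enough to sketch directly (a plain two-block sampler, ours; not the mixed-membership variant): draw a cluster label per node, then let the edge probability depend only on the pair of labels.

```python
import itertools
import random

rng = random.Random(1)
n = 200
z = [rng.randrange(2) for _ in range(n)]   # cluster label per node (multinomial draw)
p_in, p_out = 0.3, 0.05                    # block-dependent edge probabilities

edges = set()
for i, j in itertools.combinations(range(n), 2):
    if rng.random() < (p_in if z[i] == z[j] else p_out):
        edges.add((i, j))

same = [p for p in itertools.combinations(range(n), 2) if z[p[0]] == z[p[1]]]
diff = [p for p in itertools.combinations(range(n), 2) if z[p[0]] != z[p[1]]]
within_density = sum(p in edges for p in same) / len(same)
between_density = sum(p in edges for p in diff) / len(diff)
```

Inference in blockmodels runs this logic in reverse: given only the edges, recover cluster labels and block probabilities that make the observed within/between densities likely.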
|
{
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_6",
"@cite_15",
"@cite_13",
"@cite_12"
],
"mid": [
"120654303",
"",
"2106652568",
"2056932183",
"2127137551",
"2123228027"
],
"abstract": [
"We consider the statistical analysis of a collection of unipartite graphs, i.e., multiple matrices of relations among objects of a single type. Such data arise, for example, in biological settings, collections of author-recipient email, and social networks. In many applications, clustering the objects of study or situating them in a low dimensional space (e.g., a simplex) is only one of the goals of the analysis. Being able to estimate relational structures among the clusters themselves is oftentimes as important. For example, in biological applications we are interested in estimating how stable protein complexes (i.e., clusters of proteins) interact. To support such integrated data analyses, we develop the family of “stochastic block models of mixed membership”. Our models combine features of mixed-membership models (Erosheva & Fienberg, 2005) and block models for relational data (, 1983) in a hierarchical Bayesian framework. We develop a novel “nested” variational inference scheme, which is necessary to successfully perform fast approximate posterior inference in our models of relational data. We present evidence to support our claims, using both synthetic data and a biological case study.",
"",
"Relational data appear frequently in many machine learning applications. Relational data consist of the pairwise relations (similarities or dissimilarities) between each pair of implicit objects, and are usually stored in relation matrices and typically no other knowledge is available. Although relational clustering can be formulated as graph partitioning in some applications, this formulation is not adequate for general relational data. In this paper, we propose a general model for relational clustering based on symmetric convex coding. The model is applicable to all types of relational data and unifies the existing graph partitioning formulation. Under this model, we derive two alternative bound optimization algorithms to solve the symmetric convex coding under two popular distance functions, Euclidean distance and generalized I-divergence. Experimental evaluation and theoretical analysis show the effectiveness and great potential of the proposed model and algorithms.",
"We propose a novel statistical method to predict large scale dyadic response variables in the presence of covariate information. Our approach simultaneously incorporates the effect of covariates and estimates local structure that is induced by interactions among the dyads through a discrete latent factor model. The discovered latent factors provide a predictive model that is both accurate and interpretable. We illustrate our method by working in a framework of generalized linear models, which include commonly used regression techniques like linear regression, logistic regression and Poisson regression as special cases. We also provide scalable generalized EM-based algorithms for model fitting using both \"hard\" and \"soft\" cluster assignments. We demonstrate the generality and efficacy of our approach through large scale simulation studies and analysis of datasets obtained from certain real-world movie recommendation and internet advertising applications.",
"Relational clustering has attracted more and more attention due to its phenomenal impact in various important applications which involve multi-type interrelated data objects, such as Web mining, search marketing, bioinformatics, citation analysis, and epidemiology. In this paper, we propose a probabilistic model for relational clustering, which also provides a principled framework to unify various important clustering tasks including traditional attributes-based clustering, semi-supervised clustering, co-clustering and graph clustering. The proposed model seeks to identify cluster structures for each type of data objects and interaction patterns between different types of objects. Under this model, we propose parametric hard and soft relational clustering algorithms under a large number of exponential family distributions. The algorithms are applicable to relational data of various structures and at the same time unify a number of state-of-the-art clustering algorithms: co-clustering algorithms, the k-partite graph clustering, Bregman k-means, and semi-supervised clustering based on hidden Markov random fields.",
"We consider the problem of learning probabilistic models for complex relational structures between various types of objects. A model can help us \"understand\" a dataset of relational facts in at least two ways, by finding interpretable structure in the data, and by supporting predictions, or inferences about whether particular unobserved relations are likely to be true. Often there is a tradeoff between these two aims: cluster-based models yield more easily interpretable representations, while factorization-based approaches have given better predictive performance on large data sets. We introduce the Bayesian Clustered Tensor Factorization (BCTF) model, which embeds a factorized representation of relations in a nonparametric Bayesian clustering framework. Inference is fully Bayesian but scales well to large data sets. The model simultaneously discovers interpretable clusters and yields predictive performance that matches or beats previous probabilistic models for relational data."
]
}
|
1204.2967
|
2137167328
|
We generalize the Second Oversampling Theorem for wavelet frames and dual wavelet frames from the setting of integer dilations to real dilations. We also study the relationship between dilation matrix oversampling of semi-orthogonal Parseval wavelet frames and the additional shift invariance gain of the core subspace. Oversampling of wavelet frames has been a subject of extensive study by several authors dating back to the early 1990s. The first oversampling results are due to Chui and Shi [16, 17], who proved that oversampling by odd factors preserves tightness of dyadic affine frames. This is now the central result of the subject known as the Second Oversampling Theorem. Its higher dimensional generalizations to integer matrix dilations were studied by Chui and Shi [18], Johnson [27], Laugesen [30], and Ron and Shen [31]. In particular, these authors introduced (in several equivalent forms) the class of oversampling matrices ‘relatively prime’ to a fixed dilation A and they established several oversampling results for (not necessarily tight) affine frames. Dutkay and Jorgensen [23] shed a new light on these results by showing that oversampling of orthonormal (or frame) wavelets by such matrices leads to orthonormal (or frame) superwavelets, respectively. Chui and Sun [22] have completed the understanding of the case of integer dilations by showing that the class of ‘relatively prime’ matrices is optimal for the Second Oversampling Theorem; that is, if an oversampling matrix falls out of this class, then the oversampling does not preserve a tight frame property in general. However, it is possible to give a characterization of oversampling matrices preserving tightness once affine frame generators are chosen. These
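For orientation, the dyadic case due to Chui and Shi that the abstract refers to can be stated as follows (standard formulation in our notation, not quoted from any of the cited works):

```latex
\textbf{Second Oversampling Theorem (dyadic case).}
If $\{2^{j/2}\psi(2^{j}x-k)\}_{j,k\in\mathbb{Z}}$ is a tight frame for
$L^{2}(\mathbb{R})$ with frame bound $B$, and $r$ is an odd integer, then the
$r$-times oversampled system
$\{r^{-1/2}\,2^{j/2}\psi(2^{j}x-k/r)\}_{j,k\in\mathbb{Z}}$
is again a tight frame with the same bound $B$.
```

The matrix-dilation generalizations discussed above replace the odd factor $r$ by an oversampling matrix "relatively prime" to the dilation.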
|
In light of the results of Chui and Sun @cite_23 , it is not surprising that, for integer dilations, all of the previously studied conditions on tightness preserving lattices @math are equivalent to our newly introduced condition . We state these conditions in the proposition below.
|
{
"cite_N": [
"@cite_23"
],
"mid": [
"2008731398"
],
"abstract": [
"Abstract Let M be a dilation matrix, Ψ a finite family of L 2 -functions, and P the collection of all nonsingular matrices P such that M, P, and P M P −1 have integer entries. The objective of this paper is two-fold. First, for each P in P , we characterize all tight affine frames X ( Ψ , M ) generated by Ψ such that the over-sampled affine systems X P ( Ψ , M ) relative to the “over-sampling rate” P remain to be tight frames. Second, we characterize all over-sampling rates P ∈ P , such that the over-sampled affine systems X P ( Ψ , M ) are tight frames whenever the affine system X ( Ψ , M ) is a tight frame. Our second result therefore provides a general and precise formulation of the second over-sampling theorem for tight affine frames."
]
}
|
1204.2967
|
2137167328
|
We generalize the Second Oversampling Theorem for wavelet frames and dual wavelet frames from the setting of integer dilations to real dilations. We also study the relationship between dilation matrix oversampling of semi-orthogonal Parseval wavelet frames and the additional shift invariance gain of the core subspace. Oversampling of wavelet frames has been a subject of extensive study by several authors dating back to the early 1990s. The first oversampling results are due to Chui and Shi [16, 17], who proved that oversampling by odd factors preserves tightness of dyadic affine frames. This is now the central result of the subject known as the Second Oversampling Theorem. Its higher dimensional generalizations to integer matrix dilations were studied by Chui and Shi [18], Johnson [27], Laugesen [30], and Ron and Shen [31]. In particular, these authors introduced (in several equivalent forms) the class of oversampling matrices ‘relatively prime’ to a fixed dilation A and they established several oversampling results for (not necessarily tight) affine frames. Dutkay and Jorgensen [23] shed a new light on these results by showing that oversampling of orthonormal (or frame) wavelets by such matrices leads to orthonormal (or frame) superwavelets, respectively. Chui and Sun [22] have completed the understanding of the case of integer dilations by showing that the class of ‘relatively prime’ matrices is optimal for the Second Oversampling Theorem; that is, if an oversampling matrix falls out of this class, then the oversampling does not preserve a tight frame property in general. However, it is possible to give a characterization of oversampling matrices preserving tightness once affine frame generators are chosen. These
|
Laugesen has proved the Second Oversampling Theorem [Theorem 8.3 in @cite_31] for dual frames for (expansive or amplifying) dilations @math under the assumption and, as usual, @math . Within the setting of expansive dilations, Theorem 8.3 in @cite_31 is therefore a special case of Theorem .
|
{
"cite_N": [
"@cite_31"
],
"mid": [
"2149844560"
],
"abstract": [
"The single underlying method of averaging the wavelet functional over translates yields first a new completeness criterion for orthonormal wavelet systems, and then a unified treatment of known results on characterization of wavelets on the Fourier transform side, on preservation of frame bounds by oversampling, and on equivalence of affine and quasiaffine frames. The method applies to multiwavelet systems in all dimensions, to dilation matrices that are in some cases not expanding, and to dual frame pairs."
]
}
|
1204.2967
|
2137167328
|
We generalize the Second Oversampling Theorem for wavelet frames and dual wavelet frames from the setting of integer dilations to real dilations. We also study the relationship between dilation matrix oversampling of semi-orthogonal Parseval wavelet frames and the additional shift invariance gain of the core subspace. Oversampling of wavelet frames has been a subject of extensive study by several authors dating back to the early 1990s. The first oversampling results are due to Chui and Shi [16, 17], who proved that oversampling by odd factors preserves tightness of dyadic affine frames. This is now the central result of the subject known as the Second Oversampling Theorem. Its higher dimensional generalizations to integer matrix dilations were studied by Chui and Shi [18], Johnson [27], Laugesen [30], and Ron and Shen [31]. In particular, these authors introduced (in several equivalent forms) the class of oversampling matrices ‘relatively prime’ to a fixed dilation A and they established several oversampling results for (not necessarily tight) affine frames. Dutkay and Jorgensen [23] shed a new light on these results by showing that oversampling of orthonormal (or frame) wavelets by such matrices leads to orthonormal (or frame) superwavelets, respectively. Chui and Sun [22] have completed the understanding of the case of integer dilations by showing that the class of ‘relatively prime’ matrices is optimal for the Second Oversampling Theorem; that is, if an oversampling matrix falls out of this class, then the oversampling does not preserve a tight frame property in general. However, it is possible to give a characterization of oversampling matrices preserving tightness once affine frame generators are chosen. These
|
Now, by , we have @math , and thus @math . Combining this with and implies that @math . Using the fact that there exists @math such that @math , which follows from formula (2.3) in @cite_9 with @math , we can also deduce that @math . Thus, conditions -- imply that @math , @math , @math , and @math . By the equivalence (iv) @math (v) in Proposition applied to the dilation @math , we deduce that holds for @math . If @math , then @math , and thus holds for all @math .
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2014169471"
],
"abstract": [
"In this paper we extend the investigation of quasi-affine systems, which were originally introduced by Ron and Shen [J. Funct. Anal. 148 (1997), 408-447] for integer, expansive dilations, to the class of rational, expansive dilations. We show that an affine system is a frame if, and only if, the corresponding family of quasi-affine systems are frames with uniform frame bounds. We also prove a similar equivalence result between pairs of dual affine frames and dual quasi-affine frames. Finally, we uncover some fundamental differences between the integer and rational settings by exhibiting an example of a quasi-affine frame such that its affine counterpart is not a frame."
]
}
|
1204.1851
|
2110478464
|
We present a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities (STA) detected on video frames. The output is a set of recognised long-term activities (LTA), which are pre-defined temporal combinations of STA. The constraints on the STA that, if satisfied, lead to the recognition of an LTA, have been expressed using a dialect of the Event Calculus. In order to handle the uncertainty that naturally occurs in human activity recognition, we adapted this dialect to a state-of-the-art probabilistic logic programming framework. We present a detailed evaluation and comparison of the crisp and probabilistic approaches through experimentation on a benchmark dataset of human surveillance videos.
|
Numerous recognition systems have been proposed in the literature @cite_6 . In this section we focus on long-term activity (LTA) recognition systems that, similar to our approach, exhibit a formal, declarative semantics.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"1992614040"
],
"abstract": [
"An increasing number of distributed applications requires processing continuously flowing data from geographically distributed sources at unpredictable rate to obtain timely responses to complex queries. Examples of such applications come from the most disparate fields: from wireless sensor networks to financial tickers, from traffic management to click stream inspection. These requirements led to the development of a number of systems specifically designed to process information as a flow according to a set of pre-deployed processing rules. We collectively call them Information Flow Processing (IFP) Systems. Despite having a common goal, IFP systems differ in a wide range of aspects, including architectures, data models, rule languages, and processing mechanisms. In this tutorial we draw a general framework to analyze and compare the results achieved so far in the area of IFP systems. This allows us to offer a systematic overview of the topic, favoring the communication between different communities, and highlighting a number of open issue that still need to be addressed in research."
]
}
|
1204.1851
|
2110478464
|
We present a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities (STA) detected on video frames. The output is a set of recognised long-term activities (LTA), which are pre-defined temporal combinations of STA. The constraints on the STA that, if satisfied, lead to the recognition of a LTA, have been expressed using a dialect of the Event Calculus. In order to handle the uncertainty that naturally occurs in human activity recognition, we adapted this dialect to a state-of-the-art probabilistic logic programming framework. We present a detailed evaluation and comparison of the crisp and probabilistic approaches through experimentation on a benchmark dataset of human surveillance videos.
|
A fair number of recognition systems are logic-based. Notable approaches include the Chronicle Recognition System @cite_13 and the hierarchical event representation of @cite_4 . A recent review of logic-based recognition systems may be found in (Artikis, artikisKER). These systems have in common that they employ logic-based methods for representation and inference, but are unable to handle noise.
|
{
"cite_N": [
"@cite_13",
"@cite_4"
],
"mid": [
"2167254662",
"2028172653"
],
"abstract": [
"This article addresses the problem of the symbolic monitoring of real-time complex systems or of video interpretation systems. Among the various techniques used for on-line monitoring, we are interested here in temporal scenario recognition. In order to reduce the complexity of the recognition and, consequently, to improve its performance, we explore two methods: the first is the focus on particular events (in practice, uncommon ones) and the second is the factorization of common temporal scenarios in order to perform hierarchical recognition. In this article, we present both concepts and merge them to propose a focused hierarchical recognition. This approach merges and generalizes the two main approaches in symbolic recognition of temporal scenarios: the Store Totally Recognized Scenarios (STRS) approach and the Store Partially Recognized Scenarios (SPRS) approach.",
"In this paper, we model multi-agent events in terms of a temporally varying sequence of sub-events, and propose a novel approach for learning, detecting and representing events in videos. The proposed approach has three main steps. First, in order to learn the event structure from training videos, we automatically encode the sub-event dependency graph, which is the learnt event model that depicts the conditional dependency between sub-events. Second, we pose the problem of event detection in novel videos as clustering the maximally correlated sub-events using normalized cuts. The principal assumption made in this work is that the events are composed of a highly correlated chain of sub-events that have high weights (association) within the cluster and relatively low weights (disassociation) between the clusters. The event detection does not require prior knowledge of the number of agents involved in an event and does not make any assumptions about the length of an event. Third, we recognize the fact that any abstract event model should extend to representations related to human understanding of events. Therefore, we propose an extension of CASE representation of natural languages that allows a plausible means of interface between users and the computer. We show results of learning, detection, and representation of events for videos in the meeting, surveillance, and railroad monitoring domains."
]
}
|
1204.1851
|
2110478464
|
We present a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities (STA) detected on video frames. The output is a set of recognised long-term activities (LTA), which are pre-defined temporal combinations of STA. The constraints on the STA that, if satisfied, lead to the recognition of a LTA, have been expressed using a dialect of the Event Calculus. In order to handle the uncertainty that naturally occurs in human activity recognition, we adapted this dialect to a state-of-the-art probabilistic logic programming framework. We present a detailed evaluation and comparison of the crisp and probabilistic approaches through experimentation on a benchmark dataset of human surveillance videos.
|
A ProbLog-based method for robotic action recognition is proposed in @cite_25 . The method employs a relational extension of the affordance model @cite_35 in order to represent multi-object interactions in a scene. Affordances can model the relations between objects, actions (that is, pre-programmed robotic arm movements) and the effects of actions. In contrast to a standard propositional Bayesian Network implementation of an affordance model, the method can scale to multiple object interactions in a scene without the need for retraining. However, the proposed method does not include temporal representation and reasoning.
|
{
"cite_N": [
"@cite_35",
"@cite_25"
],
"mid": [
"1933657216",
"2028798328"
],
"abstract": [
"Contents: Preface. Introduction. Part I: The Environment To Be Perceived.The Animal And The Environment. Medium, Substances, Surfaces. The Meaningful Environment. Part II: The Information For Visual Perception.The Relationship Between Stimulation And Stimulus Information. The Ambient Optic Array. Events And The Information For Perceiving Events. The Optical Information For Self-Perception. The Theory Of Affordances. Part III: Visual Perception.Experimental Evidence For Direct Perception: Persisting Layout. Experiments On The Perception Of Motion In The World And Movement Of The Self. The Discovery Of The Occluding Edge And Its Implications For Perception. Looking With The Head And Eyes. Locomotion And Manipulation. The Theory Of Information Pickup And Its Consequences. Part IV: Depiction.Pictures And Visual Awareness. Motion Pictures And Visual Awareness. Conclusion. Appendixes: The Principal Terms Used in Ecological Optics. The Concept of Invariants in Ecological Optics.",
"Affordances define the action possibilities on an object in the environment and in robotics they play a role in basic cognitive capabilities. Previous works have focused on affordance models for just one object even though in many scenarios they are defined by configurations of multiple objects that interact with each other. We employ recent advances in statistical relational learning to learn affordance models in such cases. Our models generalize over objects and can deal effectively with uncertainty. Two-object interaction models are learned from robotic interaction with the objects in the world and employed in situations with arbitrary numbers of objects. We illustrate these ideas with experimental results of an action recognition task where a robot manipulates objects on a shelf."
]
}
|
1204.1851
|
2110478464
|
We present a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities (STA) detected on video frames. The output is a set of recognised long-term activities (LTA), which are pre-defined temporal combinations of STA. The constraints on the STA that, if satisfied, lead to the recognition of a LTA, have been expressed using a dialect of the Event Calculus. In order to handle the uncertainty that naturally occurs in human activity recognition, we adapted this dialect to a state-of-the-art probabilistic logic programming framework. We present a detailed evaluation and comparison of the crisp and probabilistic approaches through experimentation on a benchmark dataset of human surveillance videos.
|
Probabilistic graphical models have been applied to a variety of activity recognition applications where uncertainty exists. Activity recognition requires processing streams of time-stamped STA and, therefore, numerous activity recognition methods are based on sequential variants of probabilistic graphical models, such as Hidden Markov Models @cite_10 , Dynamic Bayesian Networks @cite_37 and Conditional Random Fields @cite_23 . Compared to logic-based methods, graphical models can naturally handle uncertainty, but their propositional structure provides limited representation capabilities. To model LTA that involve a large number of relations among STA, such as interactions between multiple persons and/or objects, the structure of the model may become prohibitively large and complex. To overcome such limitations, these models have been extended in order to support more complex relations. Examples of such extensions include representing interactions involving multiple domain objects @cite_17 @cite_15 @cite_36 @cite_24 , capturing long-term dependencies between states @cite_29 , as well as describing a hierarchical composition of activities @cite_1 @cite_28 . However, the lack of a formal representation language makes the definition of complex LTA complicated and the integration of domain background knowledge very hard.
|
{
"cite_N": [
"@cite_37",
"@cite_36",
"@cite_28",
"@cite_29",
"@cite_1",
"@cite_24",
"@cite_23",
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"2110575115",
"",
"1521452179",
"2058886892",
"173802978",
"",
"2147880316",
"2135024229",
"2105594594",
"2152239535"
],
"abstract": [
"",
"",
"Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. We show how to extract a person’s activities and significant places from traces of GPS data. Our system uses hierarchically structured conditional random fields to generate a consistent model of a person’s activities and places. In contrast to existing techniques, our approach takes high-level context into account in order to detect the significant locations of a person. Our experiments show significant improvements over existing techniques. Furthermore, they indicate that our system is able to robustly estimate a person’s activities using a model that is trained from data collected by other persons.",
"We present a new approach to recognizing events in videos. We first detect and track moving objects in the scene. Based on the shape and motion properties of these objects, we infer probabilities of primitive events frame-by-frame by using Bayesian networks. Composite events, consisting of multiple primitive events, over extended periods of time are analyzed by using a hidden, semi-Markov finite state model. This results in more reliable event segmentation compared to the use of standard HMMs in noisy video sequences at the cost of some increase in computational complexity. We describe our approach to reducing this complexity. We demonstrate the effectiveness of our algorithm using both real-world and perturbed data.",
"Many interesting human actions involve multiple interacting agents and also have typical durations. Further, there is an inherent hierarchical organization of these activities. In order to model these we introduce a new family of hidden Markov models (HMMs) that provide compositional state representations in both space and time and also a recursive hierarchical structure for inference at higher levels of abstraction. In particular, we focus on two possible 2-layer structures - the Hierarchical-Semi Parallel Hidden Markov Model (HSPaHMM) and the Hierarchical Parallel Hidden Semi-Markov Model (HPaHSMM). The lower layer of HSPaHMM consists of multiple HMMs for each agent while the top layer consists of a single HSMM. HPaHSMM on the other hand has multiple HSMMs at the lower layer and a Markov chain at the top layer. We present efficient learning and decoding algorithms for these models and then demonstrate them first on synthetic time series data and then in an application for sign language recognition.",
"",
"We present conditional random fields , a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.",
"Dynamic Probabilistic Networks (DPNs) are exploited for modeling the temporal relationships among a set of different object temporal events in the scene for a coherent and robust scene-level behaviour interpretation. In particular, we develop a Dynamically Multi-Linked Hidden Markov Model (DML-HMM) to interpret group activities involving multiple objects captured in an outdoor scene. The model is based on the discovery of salient dynamic interlinks among multiple temporal events using DPNs. Object temporal events are detected and labeled using Gaussian Mixture Models with automatic model order selection. A DML-HMM is built using Schwarz's Bayesian Information Criterion based factorisation resulting in its topology being intrinsically determined by the underlying causality and temporal order among different object events. Our experiments demonstrate that its performance on modelling group activities in a noisy outdoor scene is superior compared to that of a Multi-Observation Hidden Markov Model (MOHMM), a Parallel Hidden Markov Model (PaHMM) and a Coupled Hidden Markov Model (CHMM).",
"The basic theory of Markov chains has been known to mathematicians and engineers for close to 80 years, but it is only in the past decade that it has been applied explicitly to problems in speech processing. One of the major reasons why speech models, based on Markov chains, have not been developed until recently was the lack of a method for optimizing the parameters of the Markov model to match observed signal patterns. Such a method was proposed in the late 1960's and was immediately applied to speech processing in several research institutions. Continued refinements in the theory and implementation of Markov modelling techniques have greatly enhanced the method, leading to a wide range of applications of these models. It is the purpose of this tutorial paper to give an introduction to the theory of Markov models, and to illustrate how they have been applied to problems in speech recognition.",
"We present algorithms for coupling and training hidden Markov models (HMMs) to model interacting processes, and demonstrate their superiority to conventional HMMs in a vision task classifying two-handed actions. HMMs are perhaps the most successful framework in perceptual computing for modeling and classifying dynamic behaviors, popular because they offer dynamic time warping, a training algorithm and a clear Bayesian semantics. However the Markovian framework makes strong restrictive assumptions about the system generating the signal-that it is a single process having a small number of states and an extremely limited state memory. The single-process model is often inappropriate for vision (and speech) applications, resulting in low ceilings on model performance. Coupled HMMs provide an efficient way to resolve many of these problems, and offer superior training speeds, model likelihoods, and robustness to initial conditions."
]
}
|
1204.1851
|
2110478464
|
We present a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities (STA) detected on video frames. The output is a set of recognised long-term activities (LTA), which are pre-defined temporal combinations of STA. The constraints on the STA that, if satisfied, lead to the recognition of a LTA, have been expressed using a dialect of the Event Calculus. In order to handle the uncertainty that naturally occurs in human activity recognition, we adapted this dialect to a state-of-the-art probabilistic logic programming framework. We present a detailed evaluation and comparison of the crisp and probabilistic approaches through experimentation on a benchmark dataset of human surveillance videos.
|
A method that uses interval-based temporal relations is proposed in @cite_34 . The aim of the method is to determine the most consistent sequence of LTA based on the observations of low-level classifiers. Similar to @cite_5 @cite_32 , the method uses MLN to express LTA. In contrast to @cite_5 @cite_32 , it employs temporal relations based on Allen's Interval Algebra (IA) @cite_16 . In order to avoid the combinatorial explosion of possible intervals that IA may produce, a bottom-up process eliminates the unlikely LTA hypotheses. In @cite_12 @cite_21 a probabilistic extension of Event Logic @cite_22 is proposed in order to perform interval-based activity recognition. Similar to MLN, the method defines a probabilistic model from a set of domain-specific weighted LTA. However, the Event Logic representation avoids the enumeration of all possible interval relations.
|
{
"cite_N": [
"@cite_22",
"@cite_21",
"@cite_32",
"@cite_16",
"@cite_5",
"@cite_34",
"@cite_12"
],
"mid": [
"1771552368",
"1984070907",
"1563737156",
"2161484642",
"1815494026",
"2611113411",
"2137444342"
],
"abstract": [
"This paper presents an implemented system for recognizing the occurrence of events described by simple spatial-motion verbs in short image sequences. The semantics of these verbs is specified with event-logic expressions that describe changes in the state of force-dynamic relations between the participants of the event. An efficient finite representation is introduced for the infinite sets of intervals that occur when describing liquid and semi-liquid events. Additionally, an efficient procedure using this representation is presented for inferring occurrences of compound events, described with event-logic expressions, from occurrences of primitive events. Using force dynamics and event logic to specify the lexical semantics of events allows the system to be more robust than prior systems based on motion profile.",
"This is a theoretical paper that proves that probabilistic event logic (PEL) is MAP-equivalent to its conjunctive normal form (PEL-CNF). This allows us to address the NP-hard MAP inference for PEL in a principled manner. We first map the confidence-weighted formulas from a PEL knowledge base to PEL-CNF, and then conduct MAP inference for PEL-CNF using stochastic local search. Our MAP inference leverages the spanning-interval data structure for compactly representing and manipulating entire sets of time intervals without enumerating them. For experimental evaluation, we use the specific domain of volleyball videos. Our experiments demonstrate that the MAP inference for PEL-CNF successfully detects and localizes volleyball events in the face of different types of synthetic noise introduced in the ground-truth video annotations.",
"We develop a video understanding system for scene elements, such as bus stops, crosswalks, and intersections, that are characterized more by qualitative activities and geometry than by intrinsic appearance. The domain models for scene elements are not learned from a corpus of video, but instead, naturally elicited by humans, and represented as probabilistic logic rules within a Markov Logic Network framework. Human elicited models, however, represent object interactions as they occur in the 3D world rather than describing their appearance projection in some specific 2D image plane. We bridge this gap by recovering qualitative scene geometry to analyze object interactions in the 3D world and then reasoning about scene geometry, occlusions and common sense domain knowledge using a set of meta-rules. The effectiveness of this approach is demonstrated on a set of videos of public spaces.",
"An interval-based temporal logic is introduced, together with a computationally effective reasoning algorithm based on constraint propagation. This system is notable in offering a delicate balance between",
"We address the problem of visual event recognition in surveillance where noise and missing observations are serious problems. Common sense domain knowledge is exploited to overcome them. The knowledge is represented as first-order logic production rules with associated weights to indicate their confidence. These rules are used in combination with a relaxed deduction algorithm to construct a network of grounded atoms, the Markov Logic Network. The network is used to perform probabilistic inference for input queries about events of interest. The system's performance is demonstrated on a number of videos from a parking lot domain that contains complex interactions of people and vehicles.",
"",
"This paper is about detecting and segmenting interrelated events which occur in challenging videos with motion blur, occlusions, dynamic backgrounds, and missing observations. We argue that holistic reasoning about time intervals of events, and their temporal constraints is critical in such domains to overcome the noise inherent to low-level video representations. For this purpose, our first contribution is the formulation of probabilistic event logic (PEL) for representing temporal constraints among events. A PEL knowledge base consists of confidence-weighted formulas from a temporal event logic, and specifies a joint distribution over the occurrence time intervals of all events. Our second contribution is a MAP inference algorithm for PEL that addresses the scalability issue of reasoning about an enormous number of time intervals and their constraints in a typical video. Specifically, our algorithm leverages the spanning-interval data structure for compactly representing and manipulating entire sets of time intervals without enumerating them. Our experiments on interpreting basketball videos show that PEL inference is able to jointly detect events and identify their time intervals, based on noisy input from primitive-event detectors."
]
}
|
1204.1851
|
2110478464
|
We present a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities (STA) detected on video frames. The output is a set of recognised long-term activities (LTA), which are pre-defined temporal combinations of STA. The constraints on the STA that, if satisfied, lead to the recognition of a LTA, have been expressed using a dialect of the Event Calculus. In order to handle the uncertainty that naturally occurs in human activity recognition, we adapted this dialect to a state-of-the-art probabilistic logic programming framework. We present a detailed evaluation and comparison of the crisp and probabilistic approaches through experimentation on a benchmark dataset of human surveillance videos.
|
A MLN-based approach that is complementary to our work is that of @cite_2 , which introduces a probabilistic EC dialect based on MLN. This dialect and Prob-EC tackle the problem of probabilistic inference from different viewpoints. Prob-EC handles noise in the input stream, represented as detection probabilities of the STA. The MLN-based EC dialect, on the other hand, emphasises uncertainty in activity definitions in the form of rule weights.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"1790460212"
],
"abstract": [
"In this paper, we address the issue of uncertainty in event recognition by extending the Event Calculus with probabilistic reasoning. Markov Logic Networks are a natural candidate for our logic-based formalism. However, the temporal semantics of Event Calculus introduce a number of challenges for the proposed model. We show how and under what assumptions we can overcome these problems. Additionally, we demonstrate the advantages of the probabilistic Event Calculus through examples and experiments in the domain of activity recognition, using a publicly available dataset of video surveillance."
]
}
|
1204.1851
|
2110478464
|
We present a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities (STA) detected on video frames. The output is a set of recognised long-term activities (LTA), which are pre-defined temporal combinations of STA. The constraints on the STA that, if satisfied, lead to the recognition of a LTA, have been expressed using a dialect of the Event Calculus. In order to handle the uncertainty that naturally occurs in human activity recognition, we adapted this dialect to a state-of-the-art probabilistic logic programming framework. We present a detailed evaluation and comparison of the crisp and probabilistic approaches through experimentation on a benchmark dataset of human surveillance videos.
|
ProbLog and MLN are closely related. A notable difference between them is that MLN, as an extension of first-order logic, are not bound by the closed-world assumption. There exists a body of work that investigates the connection between the two frameworks. @cite_0 , for example, developed an extension of ProbLog which is able to handle first-order formulas with weighted constraints. Fierens et al. (weightedcnfs) converted probabilistic logic programs to ground MLN and then used state-of-the-art MLN inference algorithms to perform inference on the transformed programs.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"1554446114"
],
"abstract": [
"We introduce First Order ProbLog, an extension of first order logic with soft constraints where formulas are guarded by probabilistic facts. The paper defines a semantics for FOProbLog, develops a translation into ProbLog, a system that allows a user to compute the probability of a query in a similar setting restricted to Horn clauses, and reports on initial experience with inference."
]
}
|
1204.2255
|
1879536506
|
Inference of new biological knowledge, e.g., prediction of protein function, from protein-protein interaction (PPI) networks has received attention in the post-genomic era. A popular strategy has been to cluster the network into functionally coherent groups of proteins and predict protein function from the clusters. Traditionally, network research has focused on clustering of nodes. However, why favor nodes over edges, when clustering of edges may be preferred? For example, nodes belong to multiple functional groups, but clustering of nodes typically cannot capture the group overlap, while clustering of edges can. Clustering of adjacent edges that share many neighbors was proposed recently, outperforming different node clustering methods. However, since some biological processes can have characteristic "signatures" throughout the network, not just locally, it may be of interest to consider edges that are not necessarily adjacent. Hence, we design a sensitive measure of the "topological similarity" of edges that can deal with edges that are not necessarily adjacent. We cluster edges that are similar according to our measure in different baker's yeast PPI networks, outperforming existing node and edge clustering approaches.
|
We compare our method to three popular node clustering methods: clique percolation, greedy modularity optimization, and Infomap. Also, we compare it to the existing edge clustering algorithm, edge-SN. Briefly, clique percolation is the most prominent overlapping node clustering algorithm, greedy modularity optimization is the most popular modularity-based technique, and Infomap is often considered the most accurate method available. Edge-SN hierarchically groups adjacent edges whose non-common end-nodes share many neighbors (see below). We did not run these algorithms on the yeast networks ourselves. Instead, we use the results reported by @cite_0 , who ran the algorithms on the same networks. For details on how the methods were implemented, see @cite_0 . We do explain how edge-SN was implemented, as we implement our method in the same way (except that we use a different distance metric).
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2110620844"
],
"abstract": [
"Network theory has become pervasive in all sectors of biology, from biochemical signalling to human societies, but identification of relevant functional communities has been impaired by many nodes belonging to several overlapping groups at once, and by hierarchical structures. These authors offer a radically different viewpoint, focusing on links rather than nodes, which allows them to demonstrate that overlapping communities and network hierarchies are two faces of the same issue."
]
}
|
1204.0939
|
2952713079
|
We consider a task graph to be executed on a set of processors. We assume that the mapping is given, say by an ordered list of tasks to execute on each processor, and we aim at optimizing the energy consumption while enforcing a prescribed bound on the execution time. While it is not possible to change the allocation of a task, it is possible to change its speed. Rather than using a local approach such as backfilling, we consider the problem as a whole and study the impact of several speed variation models on its complexity. For continuous speeds, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem for general directed acyclic graphs. We show that the classical dynamic voltage and frequency scaling (DVFS) model with discrete modes leads to a NP-complete problem, even if the modes are regularly distributed (an important particular case in practice, which we analyze as the incremental model). On the contrary, the VDD-hopping model leads to a polynomial solution. Finally, we provide an approximation algorithm for the incremental model, which we extend for the general DVFS model.
|
Reducing the energy consumption of computational platforms is an important research topic, and many techniques at the process, circuit design, and micro-architectural levels have been proposed @cite_11 @cite_14 @cite_31 . The dynamic voltage and frequency scaling (DVFS) technique has been extensively studied, since it may lead to efficient energy/performance trade-offs @cite_8 @cite_13 @cite_27 @cite_9 @cite_28 @cite_0 @cite_26 . Current microprocessors (for instance, from AMD @cite_18 and Intel @cite_29 ) allow the speed to be set dynamically. Indeed, by lowering the supply voltage, and hence the processor clock frequency, it is possible to achieve important reductions in power consumption without necessarily increasing the execution time. We first discuss different optimization problems that arise in this context. Then we review energy models.
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_26",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_29",
"@cite_0",
"@cite_27",
"@cite_31",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"2107138419",
"2124146147",
"2002377960",
"2109558263",
"",
"",
"2169826316",
"2138779116",
"",
"2124853018",
"2132114163"
],
"abstract": [
"",
"As an increasing number of electronic systems are powered by batteries, battery life becomes a primary design consideration. Maximizing battery life requires system designers to develop an understanding of the capabilities and limitations of the batteries that power such systems, and to incorporate battery considerations into the system design process. Recent research has shown that the amount of energy that can be supplied by a given battery varies significantly, depending on how the energy is drawn. Consequently, researchers are attempting to develop new battery-driven approaches to system design, which deliver battery life improvements over and beyond what can be achieved through conventional low-power design techniques. This paper presents an introduction to this emerging area, surveys promising technologies that have been developed for battery modeling and battery-efficient system design, and outlines emerging industry standards for smart battery systems.",
"Reducing energy consumption for high-end computing can bring various benefits, such as reduced operating costs, increased system reliability, and respect for the environment. This paper aims to develop scheduling heuristics and to present application experience for reducing power consumption of parallel tasks in a cluster with the Dynamic Voltage Frequency Scaling (DVFS) technique. In this paper, formal models are presented for precedence-constrained parallel tasks, DVFS-enabled clusters, and energy consumption. This paper studies the slack time of non-critical jobs, extends their execution time and reduces the energy consumption without increasing the task’s execution time as a whole. Additionally, a Green Service Level Agreement is also considered in this paper. By increasing task execution time within an affordable limit, this paper develops scheduling heuristics to reduce the energy consumption of a task's execution and discusses the relationship between energy consumption and task execution time. Models and scheduling heuristics are examined with a simulation study. Test results justify the design and implementation of the proposed energy-aware scheduling heuristics in the paper.",
"A five-fold increase in leakage current is predicted with each technology generation. While Dynamic Voltage Scaling (DVS) is known to reduce dynamic power consumption, it also causes increased leakage energy drain by lengthening the interval over which a computation is carried out. Therefore, for minimization of the total energy, one needs to determine an operating point, called the critical speed. We compute processor slowdown factors based on the critical speed for energy minimization. Procrastination scheduling attempts to maximize the duration of idle intervals by keeping the processor in a sleep shutdown state even if there are pending tasks, within the constraints imposed by performance requirements. Our simulation experiments show that the critical speed slowdown results in up to 5% energy gains over a leakage-oblivious dynamic voltage scaling. The procrastination scheduling scheme extends the sleep intervals up to 5 times, resulting in up to an additional 18% energy gains, while meeting all timing requirements.",
"The power-aware scheduling problem has been a recent issue in cluster systems, not only for the operational cost due to electricity, but also for system reliability. As recent commodity processors support multiple operating points under various supply voltage levels, Dynamic Voltage Scaling (DVS) scheduling algorithms can reduce power consumption by controlling appropriate voltage levels. In this paper, we provide power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints on DVS-enabled cluster systems in order to minimize power consumption as well as to meet the deadlines specified by application users. A bag-of-tasks application should finish all the sub-tasks before the deadline, so the DVS scheduling scheme should consider the deadline as well. We provide DVS scheduling algorithms for both time-shared and space-shared resource sharing policies. The simulation results show that the proposed algorithms substantially reduce power consumption compared to static voltage schemes.",
"",
"",
"With the growing prevalence of mobile devices, the demand for efficient power consumption has become one of the major issues in designing embedded systems. Dynamic Voltage Scaling (DVS) is a technique that can reduce energy consumption by changing the processor voltage levels dynamically. Fixed Priority with Preemption Threshold (FPPT) scheduling is a scheduling policy that includes both preemptive and non-preemptive aspects of scheduling. In this paper, an efficient universal fixed-priority DVS algorithm (FPPT-DVS) is presented. This algorithm has the advantages of both Fixed Priority Preemptive (FPP) DVS scheduling and Fixed Priority Non-Preemptive (FPNP) DVS scheduling. The FPPT-DVS algorithm also combines on-line DVS and off-line DVS. Experimental results show that the proposed FPPT-DVS algorithm can save up to 20% energy over existing DVS algorithms.",
"Speed scaling is a power management technique that involves dynamically changing the speed of a processor. We study policies for setting the speed of the processor for both of the goals of minimizing the energy used and the maximum temperature attained. The theoretical study of speed scaling policies to manage energy was initiated in a seminal paper by Yao et al. [1995], and we adopt their setting. We assume that the power required to run at speed s is P(s) = s^α for some constant α > 1. We assume a collection of tasks, each with a release time, a deadline, and an arbitrary amount of work that must be done between the release time and the deadline. Yao et al. [1995] gave an offline greedy algorithm YDS to compute the minimum energy schedule. They further proposed two online algorithms Average Rate (AVR) and Optimal Available (OA), and showed that AVR is 2^(α−1) α^α-competitive with respect to energy. We provide a tight α^α bound on the competitive ratio of OA with respect to energy. We initiate the study of speed scaling to manage temperature. We assume that the environment has a fixed ambient temperature and that the device cools according to Newton's law of cooling. We observe that the maximum temperature can be approximated within a factor of two by the maximum energy used over any interval of length 1/b, where b is the cooling parameter of the device. We define a speed scaling policy to be cooling-oblivious if it is simultaneously constant-competitive with respect to temperature for all cooling parameters. We then observe that cooling-oblivious algorithms are also constant-competitive with respect to energy, maximum speed and maximum power. We show that YDS is a cooling-oblivious algorithm. In contrast, we show that the online algorithms OA and AVR are not cooling-oblivious. We then propose a new online algorithm that we call BKP. We show that BKP is cooling-oblivious. We further show that BKP is e-competitive with respect to the maximum speed, and that no deterministic online algorithm can have a better competitive ratio. BKP also has a lower competitive ratio for energy than OA for α ≥ 5. Finally, we show that the optimal temperature schedule can be computed offline in polynomial-time using the Ellipsoid algorithm.",
"",
"Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power-hungry components will lead to intolerable operating costs and failure rates. High-performance, power-aware distributed computing reduces the power and energy consumption of distributed applications and systems without sacrificing performance. Generally, we use DVS (Dynamic Voltage Scaling) technology now available in high-performance microprocessors to reduce power consumption during parallel application runs when peak CPU performance is not necessary due to load imbalance, communication delays, etc. We propose distributed performance-directed DVS scheduling strategies for use in scalable power-aware HPC clusters. By varying scheduling granularity we can obtain significant energy savings without increasing execution time (36% for FT from NAS PB). We created a software framework to implement and evaluate our various techniques and show performance-directed scheduling consistently saves more energy (nearly 25% for several codes) than comparable approaches with less impact on execution time (< 5%). Additionally, we illustrate the use of energy-delay products to automatically select distributed DVS schedules that meet users' needs.",
"This paper presents a novel run-time dynamic voltage scaling scheme for low-power real-time systems. It employs software feedback control of the supply voltage, which is applicable to off-the-shelf processors. It avoids interface problems from variable clock frequency. It provides efficient power reduction by fully exploiting slack time arising from workload variation. Using a software analysis environment, the proposed scheme is shown to achieve 80–94% power reduction for typical real-time multimedia applications."
]
}
|
1204.0939
|
2952713079
|
We consider a task graph to be executed on a set of processors. We assume that the mapping is given, say by an ordered list of tasks to execute on each processor, and we aim at optimizing the energy consumption while enforcing a prescribed bound on the execution time. While it is not possible to change the allocation of a task, it is possible to change its speed. Rather than using a local approach such as backfilling, we consider the problem as a whole and study the impact of several speed variation models on its complexity. For continuous speeds, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem for general directed acyclic graphs. We show that the classical dynamic voltage and frequency scaling (DVFS) model with discrete modes leads to a NP-complete problem, even if the modes are regularly distributed (an important particular case in practice, which we analyze as the incremental model). On the contrary, the VDD-hopping model leads to a polynomial solution. Finally, we provide an approximation algorithm for the incremental model, which we extend for the general DVFS model.
|
The authors of @cite_21 demonstrate that voltage scaling is far more effective than the shutdown approach, which simply stops the power supply when the system is inactive. Their target processor employs just a few discretely variable voltages. De Langen and Juurlink @cite_30 discuss leakage-aware scheduling heuristics that investigate both DVS and processor shutdown, since static power consumption due to leakage current is expected to increase significantly. @cite_1 consider parallel sparse applications, and they show that when scheduling applications modeled by a directed acyclic graph with a well-identified critical path, it is possible to lower the voltage during the non-critical execution of tasks, with no impact on the execution time. Similarly, @cite_26 study the slack time of non-critical jobs: they extend their execution time and thus reduce the energy consumption without increasing the total execution time. @cite_28 provide power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints, based on dynamic voltage scaling. Their goal is to minimize power consumption while meeting the deadlines specified by application users.
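The critical-path idea behind these approaches can be sketched in a few lines. This is a generic illustration with assumed task durations, not the cited papers' actual algorithms: compute each task's slack with respect to the critical-path makespan, and only tasks with positive slack may be slowed down without stretching the schedule.

```python
# Hypothetical sketch: per-task slack in a DAG of precedence-constrained
# tasks. Tasks with slack == 0 are on the critical path and must not be
# slowed; a task with positive slack can run slower by the factor
# duration / (duration + slack) without increasing the makespan.

def slacks(durations, edges):
    """durations: {task: time}; edges: list of (u, v) precedence pairs.
    Returns {task: slack} w.r.t. the critical-path makespan."""
    # earliest start times: relax edges n times (fixpoint; fine for a DAG)
    est = {t: 0.0 for t in durations}
    for _ in durations:
        for u, v in edges:
            est[v] = max(est[v], est[u] + durations[u])
    makespan = max(est[t] + durations[t] for t in durations)
    # latest finish times: backward relaxation from the makespan
    lft = {t: makespan for t in durations}
    for _ in durations:
        for u, v in edges:
            lft[u] = min(lft[u], lft[v] - durations[v])
    return {t: lft[t] - est[t] - durations[t] for t in durations}
```

For instance, with tasks A (2), B (1), C (3) and precedences A→C, B→C, tasks A and C form the critical path (slack 0), while B has one unit of slack and could run at 1/2 speed.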
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_28",
"@cite_21",
"@cite_1"
],
"mid": [
"2133039052",
"2124146147",
"2109558263",
"2106589022",
"2117449401"
],
"abstract": [
"When peak performance is unnecessary, Dynamic Voltage Scaling (DVS) can be used to reduce the dynamic power consumption of embedded multiprocessors. In future technologies, however, static power consumption due to leakage current is expected to increase significantly. Then it will be more effective to limit the number of processors employed (i.e., turn some of them off), or to use a combination of DVS and processor shutdown. In this paper, leakage-aware scheduling heuristics are presented that determine the best trade-off between these three techniques: DVS, processor shutdown, and finding the optimal number of processors. Experimental results obtained using a public benchmark set of task graphs and real parallel applications show that our approach reduces the total energy consumption by up to 46% for tight deadlines (1.5× the critical path length) and by up to 73% for loose deadlines (8× the critical path length) compared to an approach that only employs DVS. We also compare the energy consumed by our scheduling algorithms to two absolute lower bounds, one for the case where all processors continuously run at the same frequency, and one for the case where the processors can run at different frequencies and these frequencies may change over time. The results show that the energy reduction achieved by our best approach is close to these theoretical limits.",
"Reducing energy consumption for high-end computing can bring various benefits, such as reduced operating costs, increased system reliability, and respect for the environment. This paper aims to develop scheduling heuristics and to present application experience for reducing power consumption of parallel tasks in a cluster with the Dynamic Voltage Frequency Scaling (DVFS) technique. In this paper, formal models are presented for precedence-constrained parallel tasks, DVFS-enabled clusters, and energy consumption. This paper studies the slack time of non-critical jobs, extends their execution time and reduces the energy consumption without increasing the task’s execution time as a whole. Additionally, a Green Service Level Agreement is also considered in this paper. By increasing task execution time within an affordable limit, this paper develops scheduling heuristics to reduce the energy consumption of a task's execution and discusses the relationship between energy consumption and task execution time. Models and scheduling heuristics are examined with a simulation study. Test results justify the design and implementation of the proposed energy-aware scheduling heuristics in the paper.",
"The power-aware scheduling problem has been a recent issue in cluster systems, not only for the operational cost due to electricity, but also for system reliability. As recent commodity processors support multiple operating points under various supply voltage levels, Dynamic Voltage Scaling (DVS) scheduling algorithms can reduce power consumption by controlling appropriate voltage levels. In this paper, we provide power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints on DVS-enabled cluster systems in order to minimize power consumption as well as to meet the deadlines specified by application users. A bag-of-tasks application should finish all the sub-tasks before the deadline, so the DVS scheduling scheme should consider the deadline as well. We provide DVS scheduling algorithms for both time-shared and space-shared resource sharing policies. The simulation results show that the proposed algorithms substantially reduce power consumption compared to static voltage schemes.",
"A processor consumes far less energy running tasks requiring a low supply voltage than it does executing high-performance tasks. Effective voltage-scheduling techniques take advantage of this situation by using software to dynamically vary supply voltages, thereby minimizing energy consumption and accommodating timing constraints.",
"Sparse and irregular computations constitute a large fraction of applications in the data-intensive scientific domain. While every effort is made to balance the computational workload in such computations across parallel processors, achieving sustained near machine-peak performance with close-to-ideal load-balanced computation-to-processor mapping is inherently difficult. As a result, most of the time, the loads assigned to parallel processors can exhibit significant variations. While there have been numerous past efforts that study this imbalance from the performance viewpoint, to our knowledge, no prior study has considered exploiting the imbalance for reducing power consumption during execution. Power consumption in large-scale clusters of workstations is becoming a critical issue, as noted by several recent research papers from both industry and academia. Focusing on sparse matrix computations in which the underlying parallel computations and data dependencies can be represented by trees, this paper proposes schemes that save power through voltage/frequency scaling. Our goal is to reduce overall energy consumption by scaling the voltages/frequencies of those processors that are not in the critical path; i.e., our approach is oriented towards saving power without incurring performance penalties."
]
}
|
1204.0939
|
2952713079
|
We consider a task graph to be executed on a set of processors. We assume that the mapping is given, say by an ordered list of tasks to execute on each processor, and we aim at optimizing the energy consumption while enforcing a prescribed bound on the execution time. While it is not possible to change the allocation of a task, it is possible to change its speed. Rather than using a local approach such as backfilling, we consider the problem as a whole and study the impact of several speed variation models on its complexity. For continuous speeds, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem for general directed acyclic graphs. We show that the classical dynamic voltage and frequency scaling (DVFS) model with discrete modes leads to a NP-complete problem, even if the modes are regularly distributed (an important particular case in practice, which we analyze as the incremental model). On the contrary, the VDD-hopping model leads to a polynomial solution. Finally, we provide an approximation algorithm for the incremental model, which we extend for the general DVFS model.
|
For real-time embedded systems, slack reclamation techniques are used. Lee and Sakurai @cite_11 show how to exploit slack time arising from workload variation, thanks to a software feedback control of the supply voltage. Prathipati @cite_16 discusses techniques to take advantage of run-time variations in the execution time of tasks: the approach determines the minimum voltage under which each task can be executed while guaranteeing its deadline. Experiments are then conducted on the Intel StrongArm SA-1100 processor, which has eleven different frequencies, and on the Intel PXA250 XScale embedded processor, which has four. In @cite_25 , the goal is to schedule a set of independent tasks, given a worst-case execution cycle (WCEC) for each task and a global deadline, while accounting for time and energy penalties when the processor frequency changes. The frequency of the processor can be lowered when some slack is obtained dynamically, typically when a task runs faster than its WCEC. Yang and Lin @cite_0 discuss algorithms with preemption, using DVS techniques; substantial energy can be saved by these algorithms, which succeed in reclaiming static and dynamic slack time with little overhead.
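The core slack-reclamation step admits a minimal sketch. The frequency list below is hypothetical, not the actual SA-1100 or PXA250 operating points: when a task completes early, the scheduler picks the lowest discrete frequency at which the remaining work still meets the deadline.

```python
# Hypothetical sketch of dynamic slack reclamation with discrete frequencies
# (illustrative values, not a real processor's operating points).

def lowest_feasible_frequency(remaining_cycles, time_left, frequencies):
    """Return the smallest frequency f such that remaining_cycles / f
    fits within time_left, or None if even the fastest setting misses
    the deadline."""
    for f in sorted(frequencies):
        if remaining_cycles / f <= time_left:
            return f
    return None
```

For example, with assumed settings [100 MHz, 200 MHz, 400 MHz], 10^8 remaining cycles, and 1 s to the deadline, 100 MHz already suffices; with 3×10^8 remaining cycles, only 400 MHz meets the deadline.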
|
{
"cite_N": [
"@cite_0",
"@cite_16",
"@cite_25",
"@cite_11"
],
"mid": [
"2169826316",
"",
"2039088753",
"2132114163"
],
"abstract": [
"With the growing prevalence of mobile devices, the demand for efficient power consumption has become one of the major issues in designing embedded systems. Dynamic Voltage Scaling (DVS) is a technique that can reduce energy consumption by changing the processor voltage levels dynamically. Fixed Priority with Preemption Threshold (FPPT) scheduling is a scheduling policy that includes both preemptive and non-preemptive aspects of scheduling. In this paper, an efficient universal fixed-priority DVS algorithm (FPPT-DVS) is presented. This algorithm has the advantages of both Fixed Priority Preemptive (FPP) DVS scheduling and Fixed Priority Non-Preemptive (FPNP) DVS scheduling. The FPPT-DVS algorithm also combines on-line DVS and off-line DVS. Experimental results show that the proposed FPPT-DVS algorithm can save up to 20% energy over existing DVS algorithms.",
"",
"Many real-time systems, such as battery-operated embedded devices, are energy constrained. A common problem for these systems is how to reduce energy consumption in the system as much as possible while still meeting the deadlines; a commonly used power management mechanism by these systems is dynamic voltage scaling (DVS). Usually, the workloads executed by these systems are variable and, more often than not, unpredictable. Because of the unpredictability of the workloads, one cannot guarantee to minimize the energy consumption in the system. However, if the variability of the workloads can be captured by the probability distribution of the computational requirement of each task in the system, it is possible to achieve the goal of minimizing the expected energy consumption in the system. In this paper, we investigate DVS schemes that aim at minimizing expected energy consumption for frame-based hard real-time systems. Our investigation considers various DVS strategies (i.e., intra-task DVS, inter-task DVS, and hybrid DVS) and both an ideal system model (i.e., assuming unrestricted continuous frequency, well-defined power-frequency relation, and no speed change overhead) and a realistic system model (i.e., the processor provides a set of discrete speeds, no assumption is made on power-frequency relation, and speed change overhead is considered). The highlights of the investigation are two practical DVS schemes: Practical PACE (PPACE) for a single task and Practical Inter-Task DVS (PITDVS2) for general frame-based systems. Evaluation results show that our proposed schemes outperform existing schemes and achieve significant energy savings."
"This paper presents a novel run-time dynamic voltage scaling scheme for low-power real-time systems. It employs software feedback control of the supply voltage, which is applicable to off-the-shelf processors. It avoids interface problems from variable clock frequency. It provides efficient power reduction by fully exploiting slack time arising from workload variation. Using a software analysis environment, the proposed scheme is shown to achieve 80–94% power reduction for typical real-time multimedia applications."
]
}
|
1204.0939
|
2952713079
|
We consider a task graph to be executed on a set of processors. We assume that the mapping is given, say by an ordered list of tasks to execute on each processor, and we aim at optimizing the energy consumption while enforcing a prescribed bound on the execution time. While it is not possible to change the allocation of a task, it is possible to change its speed. Rather than using a local approach such as backfilling, we consider the problem as a whole and study the impact of several speed variation models on its complexity. For continuous speeds, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem for general directed acyclic graphs. We show that the classical dynamic voltage and frequency scaling (DVFS) model with discrete modes leads to a NP-complete problem, even if the modes are regularly distributed (an important particular case in practice, which we analyze as the incremental model). On the contrary, the VDD-hopping model leads to a polynomial solution. Finally, we provide an approximation algorithm for the incremental model, which we extend for the general DVFS model.
|
Since an increasing number of systems are powered by batteries, maximizing battery life is also an important optimization problem. Battery-efficient systems can be obtained with similar dynamic voltage and frequency scaling techniques, as described in @cite_14 . Another optimization criterion is the energy-delay product, which accounts for a trade-off between performance and energy consumption, as discussed for instance by Gonzalez and Horowitz in @cite_17 . We do not discuss these latter optimization problems further, since our goal is to minimize the energy consumption under a fixed deadline.
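Under the same assumed power model P(s) = s^α, the energy-delay product has a closed form: energy w·s^(α−1) times delay w/s gives EDP = w²·s^(α−2), which shows how the metric couples the two objectives (the default α = 3 below is an illustrative assumption, not a value from the cited papers).

```python
def edp(work, speed, alpha=3):
    """Energy-delay product under the assumed model P(s) = s**alpha."""
    energy = work * speed ** (alpha - 1)   # w * s^(alpha-1)
    delay = work / speed                   # w / s
    return energy * delay                  # = w**2 * s^(alpha-2)
```

Note that for α = 3, EDP = w²·s: slowing down always lowers the metric, with no bound on the delay, which is precisely why a fixed deadline (as in this paper) and EDP minimization lead to different problems.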
|
{
"cite_N": [
"@cite_14",
"@cite_17"
],
"mid": [
"2107138419",
"2155470578"
],
"abstract": [
"As an increasing number of electronic systems are powered by batteries, battery life becomes a primary design consideration. Maximizing battery life requires system designers to develop an understanding of the capabilities and limitations of the batteries that power such systems, and to incorporate battery considerations into the system design process. Recent research has shown that the amount of energy that can be supplied by a given battery varies significantly, depending on how the energy is drawn. Consequently, researchers are attempting to develop new battery-driven approaches to system design, which deliver battery life improvements over and beyond what can be achieved through conventional low-power design techniques. This paper presents an introduction to this emerging area, surveys promising technologies that have been developed for battery modeling and battery-efficient system design, and outlines emerging industry standards for smart battery systems.",
"In this paper we investigate possible ways to improve the energy efficiency of a general-purpose microprocessor. We show that the energy of a processor depends on its performance, so we chose the energy-delay product to compare different processors. To improve the energy-delay product we explore methods of reducing energy consumption that do not lead to performance loss (i.e. wasted energy), and explore methods to reduce delay by exploiting instruction-level parallelism. We found that careful design reduced the energy dissipation by almost 25%. Pipelining can give approximately a 2× improvement in the energy-delay product. Superscalar issue, however, does not improve the energy-delay product any further, since the overhead required offsets the gains in performance. Further improvements will be hard to come by, since a large fraction of the energy (50–80%) is dissipated in the clock network and the on-chip memories. Thus, the efficiency of processors will depend more on the technology being used and the algorithm chosen by the programmer than on the micro-architecture."
]
}
|
1204.0939
|
2952713079
|
We consider a task graph to be executed on a set of processors. We assume that the mapping is given, say by an ordered list of tasks to execute on each processor, and we aim at optimizing the energy consumption while enforcing a prescribed bound on the execution time. While it is not possible to change the allocation of a task, it is possible to change its speed. Rather than using a local approach such as backfilling, we consider the problem as a whole and study the impact of several speed variation models on its complexity. For continuous speeds, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem for general directed acyclic graphs. We show that the classical dynamic voltage and frequency scaling (DVFS) model with discrete modes leads to a NP-complete problem, even if the modes are regularly distributed (an important particular case in practice, which we analyze as the incremental model). On the contrary, the VDD-hopping model leads to a polynomial solution. Finally, we provide an approximation algorithm for the incremental model, which we extend for the general DVFS model.
|
In this paper, the application is a task graph (directed acyclic graph), and we assume that the mapping, i.e., an ordered list of tasks to execute on each processor, is given. Hence, our problem is closely related to slack reclamation techniques, but instead of focusing on non-critical tasks as for instance in @cite_26 , we consider the problem as a whole. Our contribution is an exhaustive complexity study for different energy models. In the next paragraph, we discuss related work on each energy model.
|
{
"cite_N": [
"@cite_26"
],
"mid": [
"2124146147"
],
"abstract": [
"Reducing energy consumption for high end computing can bring various benefits such as, reduce operating costs, increase system reliability, and environment respect. This paper aims to develop scheduling heuristics and to present application experience for reducing power consumption of parallel tasks in a cluster with the Dynamic Voltage Frequency Scaling (DVFS) technique. In this paper, formal models are presented for precedence-constrained parallel tasks, DVFS enabled clusters, and energy consumption. This paper studies the slack time for non-critical jobs, extends their execution time and reduces the energy consumption without increasing the task’s execution time as a whole. Additionally, Green Service Level Agreement is also considered in this paper. By increasing task execution time within an affordable limit, this paper develops scheduling heuristics to reduce energy consumption of a tasks execution and discusses the relationship between energy consumption and task execution time. Models and scheduling heuristics are examined with a simulation study. Test results justify the design and implementation of proposed energy aware scheduling heuristics in the paper."
]
}
|
1204.0939
|
2952713079
|
We consider a task graph to be executed on a set of processors. We assume that the mapping is given, say by an ordered list of tasks to execute on each processor, and we aim at optimizing the energy consumption while enforcing a prescribed bound on the execution time. While it is not possible to change the allocation of a task, it is possible to change its speed. Rather than using a local approach such as backfilling, we consider the problem as a whole and study the impact of several speed variation models on its complexity. For continuous speeds, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem for general directed acyclic graphs. We show that the classical dynamic voltage and frequency scaling (DVFS) model with discrete modes leads to a NP-complete problem, even if the modes are regularly distributed (an important particular case in practice, which we analyze as the incremental model). On the contrary, the VDD-hopping model leads to a polynomial solution. Finally, we provide an approximation algorithm for the incremental model, which we extend for the general DVFS model.
|
The continuous model is used mainly for theoretical studies. For instance, @cite_34 , followed by @cite_27 , aim at scheduling a collection of tasks (each with a release time, a deadline, and an amount of work); the solution specifies both the time at which each task is scheduled and the speed at which it is executed. In these papers, the speed can take any value, hence following the continuous model.
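The Average Rate (AVR) heuristic from these papers admits a very short sketch in the continuous model: each job i with release r_i, deadline d_i, and work w_i contributes its density w_i/(d_i − r_i) to the processor speed throughout [r_i, d_i). This is a textbook restatement, not code from the cited papers.

```python
def avr_speed(t, jobs):
    """AVR speed at time t; jobs is a list of (release, deadline, work)
    triples. Each active job contributes its density work / (deadline -
    release); inactive jobs contribute nothing."""
    return sum(w / (d - r) for (r, d, w) in jobs if r <= t < d)
```

For example, with jobs (0, 10, 5) and (2, 4, 4), the speed is 0.5 before t = 2, jumps to 2.5 on [2, 4) while both jobs are active, and drops back to 0.5 afterwards.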
|
{
"cite_N": [
"@cite_27",
"@cite_34"
],
"mid": [
"2138779116",
"2099961254"
],
"abstract": [
"Speed scaling is a power management technique that involves dynamically changing the speed of a processor. We study policies for setting the speed of the processor for both of the goals of minimizing the energy used and the maximum temperature attained. The theoretical study of speed scaling policies to manage energy was initiated in a seminal paper by Yao et al. [1995], and we adopt their setting. We assume that the power required to run at speed s is P(s) = s^α for some constant α > 1. We assume a collection of tasks, each with a release time, a deadline, and an arbitrary amount of work that must be done between the release time and the deadline. Yao et al. [1995] gave an offline greedy algorithm YDS to compute the minimum energy schedule. They further proposed two online algorithms Average Rate (AVR) and Optimal Available (OA), and showed that AVR is 2^(α−1) α^α-competitive with respect to energy. We provide a tight α^α bound on the competitive ratio of OA with respect to energy. We initiate the study of speed scaling to manage temperature. We assume that the environment has a fixed ambient temperature and that the device cools according to Newton's law of cooling. We observe that the maximum temperature can be approximated within a factor of two by the maximum energy used over any interval of length 1/b, where b is the cooling parameter of the device. We define a speed scaling policy to be cooling-oblivious if it is simultaneously constant-competitive with respect to temperature for all cooling parameters. We then observe that cooling-oblivious algorithms are also constant-competitive with respect to energy, maximum speed and maximum power. We show that YDS is a cooling-oblivious algorithm. In contrast, we show that the online algorithms OA and AVR are not cooling-oblivious. We then propose a new online algorithm that we call BKP. We show that BKP is cooling-oblivious. We further show that BKP is e-competitive with respect to the maximum speed, and that no deterministic online algorithm can have a better competitive ratio. BKP also has a lower competitive ratio for energy than OA for α ≥ 5. Finally, we show that the optimal temperature schedule can be computed offline in polynomial-time using the Ellipsoid algorithm.",
"The energy usage of computer systems is becoming an important consideration, especially for battery-operated systems. Various methods for reducing energy consumption have been investigated, both at the circuit level and at the operating systems level. In this paper, we propose a simple model of job scheduling aimed at capturing some key aspects of energy minimization. In this model, each job is to be executed between its arrival time and deadline by a single processor with variable speed, under the assumption that energy usage per unit time, P, is a convex function of the processor speed s. We give an off-line algorithm that computes, for any set of jobs, a minimum-energy schedule. We then consider some on-line algorithms and their competitive performance for the power function P(s) = s^p where p ≥ 2. It is shown that one natural heuristic, called the Average Rate heuristic, uses at most a constant times the minimum energy required. The analysis involves bounding the largest eigenvalue in matrices of a special type."
]
}
|
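The Average Rate (AVR) heuristic discussed in the abstracts above can be sketched in a few lines: at any moment the processor runs at the sum of the densities w/(d−r) of all active jobs, and energy follows from the power function P(s) = s^α. This is a minimal sketch; the job encoding and the discretization step `dt` are illustrative choices, not from the cited papers.

```python
def avr_schedule(jobs, alpha=3.0, dt=0.01):
    """Average Rate (AVR) heuristic: run at the sum of the densities
    w/(d-r) of all jobs active at time t. Returns the total energy
    consumed, approximating the integral of P(s(t)) = s(t)**alpha.
    jobs: list of (release, deadline, work) triples."""
    t_end = max(d for _, d, _ in jobs)
    t, energy = 0.0, 0.0
    while t < t_end:
        # density of each active job; speed is their sum
        speed = sum(w / (d - r) for r, d, w in jobs if r <= t < d)
        energy += (speed ** alpha) * dt
        t += dt
    return energy
```

For a single job of work 1 over [0, 1] the speed is constantly 1 and the energy is 1; adding a second, tighter job raises the speed (and, convexly, the energy) on the overlap.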
1204.0939
|
2952713079
|
We consider a task graph to be executed on a set of processors. We assume that the mapping is given, say by an ordered list of tasks to execute on each processor, and we aim at optimizing the energy consumption while enforcing a prescribed bound on the execution time. While it is not possible to change the allocation of a task, it is possible to change its speed. Rather than using a local approach such as backfilling, we consider the problem as a whole and study the impact of several speed variation models on its complexity. For continuous speeds, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem for general directed acyclic graphs. We show that the classical dynamic voltage and frequency scaling (DVFS) model with discrete modes leads to a NP-complete problem, even if the modes are regularly distributed (an important particular case in practice, which we analyze as the incremental model). On the contrary, the VDD-hopping model leads to a polynomial solution. Finally, we provide an approximation algorithm for the incremental model, which we extend for the general DVFS model.
|
Recently, a new local dynamic voltage scaling architecture has been developed, based on the model @cite_20 @cite_2 @cite_23 . It was shown in @cite_11 that significant power can be saved by using two distinct voltages, and architectures using this principle have been developed (see for instance @cite_6 ). Compared to traditional power converters, a new design with no needs for large passives or costly technological options has been validated in a STMicroelectronics CMOS 65nm low-power technology @cite_20 .
|
{
"cite_N": [
"@cite_6",
"@cite_23",
"@cite_2",
"@cite_20",
"@cite_11"
],
"mid": [
"2097659403",
"2048361846",
"2111607776",
"",
"2132114163"
],
"abstract": [
"An LSI is fabricated and measured to demonstrate the feasibility of the V_DD-hopping scheme in a system level. In the scheme, the supply voltage, V_DD, is dynamically controlled through software depending on workload. The V_DD-hopping scheme is shown to reduce the power to less than 1/4 compared with the conventional fixed-V_DD scheme. The power saving is achieved without degrading the real-time feature of an MPEG4 system.",
"In complex embedded applications, optimisation and adaptation of both dynamic and leakage power have become an issue at SoC grain. We propose in this paper a complete dynamic voltage and frequency scaling architecture for IP units within a GALS NoC. A network-on-chip architecture combined with a globally asynchronous locally synchronous paradigm is a natural enabler for DVFS mechanisms. A GALS NoC provides scalable communications and a natural split between timing domains. The proposed low power architecture is based on the association of a local clock generator and a local power control mixing VDD-hopping and super cut-off techniques. No fine control software is required during voltage and frequency scaling. A minimal latency cost is observed together with an efficient local power control.",
"A fully power aware globally asynchronous locally synchronous network-on-chip circuit is presented in this paper. The circuit is arranged around an asynchronous network-on-chip providing a 17 Gbits s throughput and automatically reducing its power consumption by activity detection. Both dynamic and static power consumptions are globally reduced using adaptive design techniques applied locally for each NoC units. The dynamic power consumption can be reduced up to a factor of 8 while the static power consumption is reduced by 2 decades in stand-by mode.",
"",
"This paper presents a novel run-time dynamic voltage scaling scheme for low-power real-time systems. It employs software feedback control of supply voltage, which is applicable to off-the-shelf processors. It avoids interface problems from variable clock frequency. It provides efficient power reduction by fully exploiting slack time arising from workload variation. Using software analysis environment, the proposed scheme is shown to achieve 80 94 power reduction for typical real-time multimedia applications."
]
}
|
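The VDD-hopping principle above — approximating an ideal continuous speed by time-sharing between two discrete supply voltages — reduces to a small linear split of the deadline. A minimal sketch, assuming power P(s) = s^α and a hypothetical mode set; this illustrates the principle, not any specific cited architecture.

```python
def vdd_hopping(work, deadline, modes, alpha=3.0):
    """Emulate the ideal continuous speed s* = work/deadline by
    time-sharing the two discrete modes that bracket it.
    Returns ((s_lo, t_lo), (s_hi, t_hi), energy) with P(s) = s**alpha."""
    modes = sorted(modes)
    s_star = work / deadline               # ideal continuous speed
    assert modes[0] <= s_star <= modes[-1], "s* must lie within the mode range"
    s_lo = max(s for s in modes if s <= s_star)
    s_hi = min(s for s in modes if s >= s_star)
    if s_hi == s_lo:                       # s* is itself an available mode
        return (s_lo, deadline), (s_hi, 0.0), deadline * s_lo ** alpha
    # split the deadline: t_lo + t_hi = D and s_lo*t_lo + s_hi*t_hi = W
    t_hi = (work - s_lo * deadline) / (s_hi - s_lo)
    t_lo = deadline - t_hi
    energy = t_lo * s_lo ** alpha + t_hi * s_hi ** alpha
    return (s_lo, t_lo), (s_hi, t_hi), energy
```

For work 3 and deadline 2 with modes {1, 2}, the ideal speed 1.5 is emulated by one time unit at each mode, meeting the deadline exactly.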
1204.0939
|
2952713079
|
We consider a task graph to be executed on a set of processors. We assume that the mapping is given, say by an ordered list of tasks to execute on each processor, and we aim at optimizing the energy consumption while enforcing a prescribed bound on the execution time. While it is not possible to change the allocation of a task, it is possible to change its speed. Rather than using a local approach such as backfilling, we consider the problem as a whole and study the impact of several speed variation models on its complexity. For continuous speeds, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem for general directed acyclic graphs. We show that the classical dynamic voltage and frequency scaling (DVFS) model with discrete modes leads to a NP-complete problem, even if the modes are regularly distributed (an important particular case in practice, which we analyze as the incremental model). On the contrary, the VDD-hopping model leads to a polynomial solution. Finally, we provide an approximation algorithm for the incremental model, which we extend for the general DVFS model.
|
Our work is the first attempt to compare these different models: on the one hand, we assess the impact of the model on the problem complexity (polynomial vs NP-hard), and on the other hand, we provide approximation algorithms building upon these results. The closest work to ours is the paper by @cite_7 , in which the authors also consider the mapping of directed acyclic graphs, and compare the and the models. We go beyond their work in this paper, with an exhaustive complexity study, closed-form formulas for the continuous model, and the comparison with the and models.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2078308569"
],
"abstract": [
"In this paper, we present a two-phase framework that integrates task assignment, ordering and voltage selection (VS) together to minimize energy consumption of real-time dependent tasks executing on a given number of variable voltage processors. Task assignment and ordering in the first phase strive to maximize the opportunities that can be exploited for lowering voltage levels during the second phase, i.e., voltage selection. In the second phase, we formulate the VS problem as an Integer Programming (IP) problem and solve the IP efficiently. Experimental results demonstrate that our framework is very effective in executing tasks at lower voltage levels under different system configurations."
]
}
|
1204.1458
|
1645933739
|
Mobile phones have developed into complex platforms with large numbers of installed applications and a wide range of sensitive data. Application security policies limit the permissions of each installed application. As applications may interact, restricting single applications may create a false sense of security for end users, while data may still leave the mobile phone through other applications. Instead, the information flow needs to be policed for the composite system of applications in a transparent manner. In this paper, we propose to employ static analysis, based on the software architecture and focused on data-flow analysis, to detect information flows between components. Specifically, we aim to reveal transitivity-of-trust problems in multi-component mobile platforms. We demonstrate the feasibility of our approach with two Android applications.
|
In another approach, type-based security combines annotations with dependence-graph-based information flow control @cite_20 . Hammer's proposed analysis uses the Java bytecode and a succinct security policy specification that is inserted as annotations in code comments. Although both approaches aim to detect information flow violations of Java-based applications, they differ in the analysis techniques they use. We employ an analysis approach that uses the RFG to restrict the search space and thereafter carries out a more focused analysis at the detail level. Hammer uses the complete dependence graph to directly conduct the information flow analysis. In addition, Hammer's method requires code annotations for the security labeling, similar to JFlow. As a result, this approach can only be applied by the developer, not by the Android Market owner or the end user.
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"13082435"
],
"abstract": [
"Information flow control systems provide the guarantees that are required in today's security-relevant systems. While the literature has produced a wealth of techniques to ensure a given security policy, there is only a small number of implementations, and even these are mostly restricted to theoretical languages or a subset of an existing language. Previously, we presented the theoretical foundations and algorithms for dependence-graph-based information flow control (IFC). As a complement, this paper presents the implementation and evaluation of our new approach, the first implementation of a dependence-graph based analysis that accepts full Java bytecode. It shows that the security policy can be annotated in a succinct manner; and the evaluation shows that the increased runtime of our analysis—a result of being flow-, context-, and object-sensitive—is mitigated by better analysis results and elevated practicability. Finally, we show that the scalability of our analysis is not limited by the sheer size of either the security lattice or the dependence graph that represents the program."
]
}
|
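At its core, the dependence-graph-based information flow control discussed above asks a reachability question: can a secret source reach a public sink along dependence edges? A minimal sketch; the graph and node names are hypothetical, and a real analysis like Hammer's is flow-, context-, and object-sensitive rather than plain reachability.

```python
from collections import deque

def leaks(dep_graph, sources, sinks):
    """Report the public sinks reachable from any secret source.
    dep_graph: node -> iterable of successors; an edge means
    information may flow along a data/control dependence."""
    seen, frontier = set(sources), deque(sources)
    while frontier:
        node = frontier.popleft()
        for succ in dep_graph.get(node, ()):
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return seen & set(sinks)
```

With a toy graph where the device ID flows into an outgoing message, the sink is flagged; a benign UI component is not.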
1204.1458
|
1645933739
|
Mobile phones have developed into complex platforms with large numbers of installed applications and a wide range of sensitive data. Application security policies limit the permissions of each installed application. As applications may interact, restricting single applications may create a false sense of security for end users, while data may still leave the mobile phone through other applications. Instead, the information flow needs to be policed for the composite system of applications in a transparent manner. In this paper, we propose to employ static analysis, based on the software architecture and focused on data-flow analysis, to detect information flows between components. Specifically, we aim to reveal transitivity-of-trust problems in multi-component mobile platforms. We demonstrate the feasibility of our approach with two Android applications.
|
Another research approach is the SAINT architecture @cite_34 . It inserts enforcement hooks into Android's middleware layer to improve the currently limited Android security architecture. This work takes semantics such as location and time into account, but strictly focuses on the developer view of permissions and does not account for transitive data flows.
|
{
"cite_N": [
"@cite_34"
],
"mid": [
"2121221235"
],
"abstract": [
"Smartphones are now ubiquitous. However, the security requirements of these relatively new systems and the applications they support are still being understood. As a result, the security infrastructure available in current smartphone operating systems is largely underdeveloped. In this paper, we consider the security requirements of smartphone applications and augment the existing Android operating system with a framework to meet them. We present Secure Application INTeraction (Saint), a modified infrastructure that governs install-time permission assignment and their run-time use as dictated by application provider policy. An in-depth description of the semantics of application policy is presented. The architecture and technical detail of Saint is given, and areas for extension, optimization, and improvement explored. As we show through concrete example, Saint provides necessary utility for applications to assert and control the security decisions on the platform."
]
}
|
1204.1528
|
2951183003
|
With the increasing popularity of location-based social media applications and devices that automatically tag generated content with locations, large repositories of collaborative geo-referenced data are appearing on-line. Efficiently extracting user preferences from these data to determine what information to recommend is challenging because of the sheer volume of data as well as the frequency of updates. Traditional recommender systems focus on the interplay between users and items, but ignore contextual parameters such as location. In this paper we take a geospatial approach to determine locational preferences and similarities between users. We propose to capture the geographic context of user preferences for items using a relational graph, through which we are able to derive many new and state-of-the-art recommendation algorithms, including combinations of them, requiring changes only in the definition of the edge weights. Furthermore, we discuss several solutions for cold-start scenarios. Finally, we conduct experiments using two real-world datasets and provide empirical evidence that many of the proposed algorithms outperform existing location-aware recommender algorithms.
|
In @cite_14, a new similarity measure for computing location recommendations based on a non-overlapping hierarchical taxonomy of locations is presented. The key idea is that co-activity in locations can be better captured by zooming out to larger and larger locations. Similar to our work, they use a Panoramio data set to evaluate their algorithm. However, their model assumes that a cold-start user is making the query, i.e., a user who has no trace in the geographic context of the query. The reliance on a place taxonomy is also more restricting than our general model.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"1530121710"
],
"abstract": [
"Recommender systems solve an information filtering task. They suggest data objects that seem likely to be relevant to the user based upon previous choices that this user has made. A geographic recommender system recommends items from a library of georeferenced objects such as photographs of touristic sites. A widely-used approach to recommending consists in suggesting the most popular items within the user community. However, these approaches are not able to handle individual differences between users. We ask how to identify less popular geographic objects that are nevertheless of interest to a specific user. Our approach is based on user-based collaborative filtering in conjunction with a prototypical model of geographic places (heatmaps). We discuss four different measures of similarity between users that take into account the spatial semantics derived from the spatial behavior of a user community. We illustrate the method with a real-world use case: recommendations of georeferenced photographs from the public website Panoramio. The evaluation shows that our approach achieves a better recall and precision for the first ten items than recommendations based on the most popular geographic items."
]
}
|
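The zoom-out idea above — co-activity is easier to detect at coarser taxonomy levels — can be sketched as a shared-prefix similarity over taxonomy paths. The path encoding and the `decay` weighting are illustrative assumptions, not the measure of @cite_14.

```python
def taxonomy_similarity(visits_a, visits_b, decay=0.5):
    """Co-activity over a location taxonomy: each visit is a path from
    coarse to fine, e.g. ('europe', 'paris', 'louvre'). A match at a
    coarser level (shorter shared prefix) counts less, scaled by
    decay ** levels_zoomed_out. Returns the best pairwise match."""
    best = 0.0
    for a in visits_a:
        for b in visits_b:
            shared = 0
            for x, y in zip(a, b):   # length of the common prefix
                if x != y:
                    break
                shared += 1
            if shared:
                depth = max(len(a), len(b))
                best = max(best, decay ** (depth - shared))
    return best
```

Two visits to the same POI score 1.0; visits to different POIs in the same city score 0.5; visits on different continents score 0.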
1204.1528
|
2951183003
|
With the increasing popularity of location-based social media applications and devices that automatically tag generated content with locations, large repositories of collaborative geo-referenced data are appearing on-line. Efficiently extracting user preferences from these data to determine what information to recommend is challenging because of the sheer volume of data as well as the frequency of updates. Traditional recommender systems focus on the interplay between users and items, but ignore contextual parameters such as location. In this paper we take a geospatial approach to determine locational preferences and similarities between users. We propose to capture the geographic context of user preferences for items using a relational graph, through which we are able to derive many new and state-of-the-art recommendation algorithms, including combinations of them, requiring changes only in the definition of the edge weights. Furthermore, we discuss several solutions for cold-start scenarios. Finally, we conduct experiments using two real-world datasets and provide empirical evidence that many of the proposed algorithms outperform existing location-aware recommender algorithms.
|
A number of studies have looked at timestamped GPS traces to predict future locations within very restricted geographic regions @cite_3 @cite_20 @cite_22 . Markov models and tensor factorization models are fit to the data, producing non-personalized predictions of the most likely next location, or of the most likely activity given a location and time. These papers contain much novel work on automatically detecting geographic context, but the general approach cannot easily be replicated in our scenario: we do not have the same luxury of rich traces, as we mainly focus on implicit feedback as input. Relying on GPS traces furthermore raises privacy concerns and has scalability and power consumption implications. We produce more personalized recommendations based on user-user similarities, as opposed to just looking at the most popular or most frequent behavior. Furthermore, large Markov models and tensor factorization algorithms tend to be very costly to compute and would therefore need to run in an off-line setting, whereas we also target real-time recommendations.
|
{
"cite_N": [
"@cite_20",
"@cite_22",
"@cite_3"
],
"mid": [
"2073021764",
"1990815658",
""
],
"abstract": [
"With the increasing popularity of location-based services, such as tour guide and location-based social network, we now have accumulated many location data on the Web. In this paper, we show that, by using the location data based on GPS and users' comments at various locations, we can discover interesting locations and possible activities that can be performed there for recommendations. Our research is highlighted in the following location-related queries in our daily life: 1) if we want to do something such as sightseeing or food-hunting in a large city such as Beijing, where should we go? 2) If we have already visited some places such as the Bird's Nest building in Beijing's Olympic park, what else can we do there? By using our system, for the first question, we can recommend her to visit a list of interesting locations such as Tiananmen Square, Bird's Nest, etc. For the second question, if the user visits Bird's Nest, we can recommend her to not only do sightseeing but also to experience its outdoor exercise facilities or try some nice food nearby. To achieve this goal, we first model the users' location and activity histories that we take as input. We then mine knowledge, such as the location features and activity-activity correlations from the geographical databases and the Web, to gather additional inputs. Finally, we apply a collective matrix factorization method to mine interesting locations and activities, and use them to recommend to the users where they can visit if they want to perform some specific activities and what they can do if they visit some specific places. We empirically evaluated our system using a large GPS dataset collected by 162 users over a period of 2.5 years in the real-world. We extensively evaluated our system and showed that our system can outperform several state-of-the-art baselines.",
"We implemented the Prediction-by-Partial-Match data compression algorithm as a predictor of future locations. Positioning was done using IEEE 802.11 wireless access logs. Several experiments were run to determine how to divide the data for training and testing and how to best represent the data as a string of symbols. Our test data consisted of 198 datasets containing over 28,000 pairs, obtained from the UCSD Wireless Topology Discovery project. Tests of a first-order PPM model revealed a 90 success rate in predicting a user's location given the time. The third-order model, which is given the previous time and location and asked to predict the location at a given time, is correct 92 of the time.",
""
]
}
|
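The first-order Markov models mentioned above fit transition counts between consecutive locations and predict the most likely successor. A minimal, non-personalized sketch of that idea (the trace encoding is illustrative):

```python
from collections import Counter, defaultdict

def train_markov(traces):
    """First-order Markov model over location traces: count the
    transitions loc_t -> loc_{t+1}. traces: list of location sequences."""
    trans = defaultdict(Counter)
    for trace in traces:
        for cur, nxt in zip(trace, trace[1:]):
            trans[cur][nxt] += 1
    return trans

def predict_next(trans, current):
    """Most likely next location given the current one; None if the
    current location was never seen with a successor."""
    if current not in trans:
        return None
    return trans[current].most_common(1)[0][0]
```

Trained on two short traces, the model predicts 'work' after 'home' because that transition occurs most often.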
1204.1528
|
2951183003
|
With the increasing popularity of location-based social media applications and devices that automatically tag generated content with locations, large repositories of collaborative geo-referenced data are appearing on-line. Efficiently extracting user preferences from these data to determine what information to recommend is challenging because of the sheer volume of data as well as the frequency of updates. Traditional recommender systems focus on the interplay between users and items, but ignore contextual parameters such as location. In this paper we take a geospatial approach to determine locational preferences and similarities between users. We propose to capture the geographic context of user preferences for items using a relational graph, through which we are able to derive many new and state-of-the-art recommendation algorithms, including combinations of them, requiring changes only in the definition of the edge weights. Furthermore, we discuss several solutions for cold-start scenarios. Finally, we conduct experiments using two real-world datasets and provide empirical evidence that many of the proposed algorithms outperform existing location-aware recommender algorithms.
|
Bayesian networks have also been studied in @cite_0 to model and learn patterns in location, time and weather contexts for individual users. However, compared to our model, these models tend to be complex, require expert human knowledge to construct, and furthermore are not tractable. @cite_10 applies a center-of-mass model to detect and recommend locations and POIs. This work predates the check-in systems era, so today we could simply query check-in services for this information. In @cite_9, user-user based CF with location pre-filtering is employed in an explicit voting scenario. The cold-start problem is solved by generating random recommendations using pseudo users. We address the problem by incorporating out-of-geo-context similarities for in-context recommendations, which is less ad hoc.
|
{
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_10"
],
"mid": [
"177101540",
"2165046037",
"2123862364"
],
"abstract": [
"As wireless communication advances, research on location-based services using mobile devices has attracted interest, which provides information and services related to user's physical location. As increasing information and services, it becomes difficult to find a proper service that reflects the individual preference at proper time. Due to the small screen of mobile devices and insufficiency of resources, personalized services and convenient user interface might be useful. In this paper, we propose a map-based personalized recommendation system which reflects user's preference modeled by Bayesian Networks (BN). The structure of BN is built by an expert while the parameter is learned from the dataset. The proposed system collects context information, location, time, weather, and user request from the mobile device and infers the most preferred item to provide an appropriate service by displaying onto the mini map.",
"Internet-based recommender systems have traditionally employed collaborative filtering techniques to deliver relevant \"digital\" results to users. In the mobile Internet however, recommendations typically involve \"physical\" entities (e.g., restaurants), requiring additional user effort for fulfillment. Thus, in addition to the inherent requirements of high scalability and low latency, we must also take into account a \"convenience\" metric in making recommendations. In this paper, we propose an enhanced collaborative filtering solution that uses location as a key criterion for generating recommendations. We frame the discussion in the context of our \"restaurant recommender\" system, and describe preliminary results that indicate the utility of such an approach. We conclude with a look at open issues in this space, and motivate a future discussion on the business impact and implications of mining the data in such systems.",
"Mobile computing adds a mostly unexplored dimension to data mining: user's position is a relevant piece of information, and recommendation systems, selecting and ranking links of interest to the user, have the opportunity to take location into account. In this paper a mobility-aware recommendation system that considers the location of the user to filter recommended links is proposed. To avoid the potential problems and costs of insertion by hand, a new middleware layer, the location broker, maintains a historic database of locations and corresponding links used in the past and develops models relating resources to their spatial usage pattern. These models are used to calculate a preference metric when the current user is asking for resources of interest. Mobility scenarios are described and analyzed in terms of possible user requirements. The features of the PILGRIM mobile recommendation system are outlined together with a preliminary experimental evaluation of different metrics."
]
}
|
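User-user CF with location pre-filtering, as described above, first restricts the neighbourhood to users in the query region and then scores unseen items by similarity-weighted votes. A minimal sketch; cosine similarity and the top-k cutoff are illustrative choices, not necessarily those of @cite_9.

```python
import math

def recommend(ratings, locations, user, region, k=2):
    """User-user CF restricted to neighbours in the query region.
    ratings: user -> {item: rating}; locations: user -> region."""
    def cosine(u, v):
        common = set(u) & set(v)
        num = sum(u[i] * v[i] for i in common)
        den = math.sqrt(sum(x * x for x in u.values())) * \
              math.sqrt(sum(x * x for x in v.values()))
        return num / den if den else 0.0

    # location pre-filter: only same-region users vote
    neighbours = [(cosine(ratings[user], ratings[o]), o)
                  for o in ratings
                  if o != user and locations[o] == region]
    neighbours.sort(reverse=True)
    scores = {}
    for sim, o in neighbours[:k]:
        for item, r in ratings[o].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)
```

A user in region 'R' only receives votes from same-region neighbours, so an out-of-region user's items are never recommended.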
1204.1528
|
2951183003
|
With the increasing popularity of location-based social media applications and devices that automatically tag generated content with locations, large repositories of collaborative geo-referenced data are appearing on-line. Efficiently extracting user preferences from these data to determine what information to recommend is challenging because of the sheer volume of data as well as the frequency of updates. Traditional recommender systems focus on the interplay between users and items, but ignore contextual parameters such as location. In this paper we take a geospatial approach to determine locational preferences and similarities between users. We propose to capture the geographic context of user preferences for items using a relational graph, through which we are able to derive many new and state-of-the-art recommendation algorithms, including combinations of them, requiring changes only in the definition of the edge weights. Furthermore, we discuss several solutions for cold-start scenarios. Finally, we conduct experiments using two real-world datasets and provide empirical evidence that many of the proposed algorithms outperform existing location-aware recommender algorithms.
|
The GeoFolk system @cite_4 was designed to take both geographic context and text features into account for various information retrieval tasks such as tag recommendation, content classification and clustering. Experiments show that combining textual and geographic relevance leads to more accurate results than using either factor in isolation. Although our methods and target use cases are quite different from those of this work, the empirical evidence of the influence geographic context has on information retrieval is promising and serves as motivation for our work.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2066882130"
],
"abstract": [
"We describe an approach for multi-modal characterization of social media by combining text features (e.g. tags as a prominent example of short, unstructured text labels) with spatial knowledge (e.g. geotags and coordinates of images and videos). Our model-based framework GeoFolk combines these two aspects in order to construct better algorithms for content management, retrieval, and sharing. The approach is based on multi-modal Bayesian models which allow us to integrate spatial semantics of social media in a well-formed, probabilistic manner. We systematically evaluate the solution on a subset of Flickr data, in characteristic scenarios of tag recommendation, content classification, and clustering. Experimental results show that our method outperforms baseline techniques that are based on one of the aspects alone. The approach described in this contribution can also be used in other domains such as Geoweb retrieval."
]
}
|
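A much-simplified way to combine textual and geographic relevance, in the spirit of (but far simpler than) GeoFolk's joint probabilistic model, is a linear blend with a distance kernel. The `lam` and `bandwidth` parameters are illustrative assumptions.

```python
import math

def combined_score(text_sim, dist_km, lam=0.5, bandwidth=10.0):
    """Blend a textual similarity in [0, 1] with geographic proximity,
    the latter via a Gaussian kernel on distance in km. Note: GeoFolk
    itself fits a multi-modal Bayesian model; this is only the
    blending intuition."""
    geo_sim = math.exp(-(dist_km / bandwidth) ** 2)
    return lam * text_sim + (1.0 - lam) * geo_sim
```

An item that matches perfectly on text and sits at distance zero scores 1.0; the same textual match very far away keeps only the textual half of the score.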
1204.1528
|
2951183003
|
With the increasing popularity of location-based social media applications and devices that automatically tag generated content with locations, large repositories of collaborative geo-referenced data are appearing on-line. Efficiently extracting user preferences from these data to determine what information to recommend is challenging because of the sheer volume of data as well as the frequency of updates. Traditional recommender systems focus on the interplay between users and items, but ignore contextual parameters such as location. In this paper we take a geospatial approach to determine locational preferences and similarities between users. We propose to capture the geographic context of user preferences for items using a relational graph, through which we are able to derive many new and state-of-the-art recommendation algorithms, including combinations of them, requiring changes only in the definition of the edge weights. Furthermore, we discuss several solutions for cold-start scenarios. Finally, we conduct experiments using two real-world datasets and provide empirical evidence that many of the proposed algorithms outperform existing location-aware recommender algorithms.
|
In @cite_12, a fast context-aware recommendation algorithm is proposed that maintains the features of state-of-the-art multi-tensor matrix factorization while reducing the complexity of previously known algorithms from exponential to linear growth in problem size. The main idea is to solve the least-squares optimization problem for each model parameter separately. Our method, in contrast, achieves scalability by not utilizing any complex matrix factorization.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2002834872"
],
"abstract": [
"The situation in which a choice is made is an important information for recommender systems. Context-aware recommenders take this information into account to make predictions. So far, the best performing method for context-aware rating prediction in terms of predictive accuracy is Multiverse Recommendation based on the Tucker tensor factorization model. However this method has two drawbacks: (1) its model complexity is exponential in the number of context variables and polynomial in the size of the factorization and (2) it only works for categorical context variables. On the other hand there is a large variety of fast but specialized recommender methods which lack the generality of context-aware methods. We propose to apply Factorization Machines (FMs) to model contextual information and to provide context-aware rating predictions. This approach results in fast context-aware recommendations because the model equation of FMs can be computed in linear time both in the number of context variables and the factorization size. For learning FMs, we develop an iterative optimization method that analytically finds the least-square solution for one parameter given the other ones. Finally, we show empirically that our approach outperforms Multiverse Recommendation in prediction quality and runtime."
]
}
|
1204.0897
|
2950803469
|
We propose a new approach to competitive analysis in online scheduling by introducing the novel concept of competitive-ratio approximation schemes. Such a scheme algorithmically constructs an online algorithm with a competitive ratio arbitrarily close to the best possible competitive ratio for any online algorithm. We study the problem of scheduling jobs online to minimize the weighted sum of completion times on parallel, related, and unrelated machines, and we derive both deterministic and randomized algorithms which are almost best possible among all online algorithms of the respective settings. We also generalize our techniques to arbitrary monomial cost functions and apply them to the makespan objective. Our method relies on an abstract characterization of online algorithms combined with various simplifications and transformations. We also contribute algorithmic means to compute the actual value of the best possi- ble competitive ratio up to an arbitrary accuracy. This strongly contrasts all previous manually obtained competitiveness results for algorithms and, most importantly, it reduces the search for the optimal com- petitive ratio to a question that a computer can answer. We believe that our concept can also be applied to many other problems and yields a new perspective on online algorithms in general.
|
The more restricted problem with a global cost function @math has been studied by Epstein et al. @cite_22 in the context of universal solutions. They gave an algorithm that produces, for any job instance, a single scheduling solution that is a @math -approximation for any cost function, even under unreliable machine behavior. Höhn and Jacobs @cite_23 studied the same problem without release dates. They analyzed the performance of Smith's Rule @cite_35 and gave tight approximation guarantees for all convex and all concave functions @math .
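Smith's Rule itself is simple enough to state in a few lines. The following sketch (illustrative, with made-up job data) sequences jobs in non-increasing order of weight-to-processing-time ratio, which is optimal for minimizing the total weighted completion time on a single machine:

```python
def smiths_rule(jobs):
    """Schedule jobs on one machine by Smith's Rule (WSPT): non-increasing
    weight/processing-time ratio. jobs: list of (weight, processing_time).
    Returns the sequence and its total weighted completion time."""
    order = sorted(jobs, key=lambda j: j[0] / j[1], reverse=True)
    cost, t = 0, 0
    for w, p in order:
        t += p          # completion time of this job
        cost += w * t   # its weighted contribution
    return order, cost

# (weight=3, p=1) has ratio 3 and runs first: cost = 3*1 + 1*(1+2) = 6.
order, cost = smiths_rule([(1, 2), (3, 1)])
```

For a nonlinear cost function the same ordering is applied unchanged, which is exactly why its approximation guarantee degrades in the way Höhn and Jacobs quantify.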
|
{
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_23"
],
"mid": [
"1988328380",
"1485083952",
"2044030708"
],
"abstract": [
"",
"We consider scheduling on an unreliable machine that may experience unexpected changes in processing speed or even full breakdowns. We aim for a universal solution that performs well without adaptation for any possible machine behavior. For the objective of minimizing the total weighted completion time, we design a polynomial time deterministic algorithm that finds a universal scheduling sequence with a solution value within 4 times the value of an optimal clairvoyant algorithm that knows the disruptions in advance. A randomized version of this algorithm attains in expectation a ratio of e. We also show that both results are best possible among all universal solutions. As a direct consequence of our results, we answer affirmatively the question of whether a constant approximation algorithm exists for the offline version of the problem when machine unavailability periods are known in advance. When jobs have individual release dates, the situation changes drastically. Even if all weights are equal, there are instances for which any universal solution is a factor of Ω(logn loglogn) worse than an optimal sequence. Motivated by this hardness, we study the special case when the processing time of each job is proportional to its weight. We present a non-trivial algorithm with a small constant performance guarantee.",
"We consider a single-machine scheduling problem. Given some continuous, nondecreasing cost function, we aim to compute a schedule minimizing the weighted total cost, where the cost of each job is determined by the cost function value at its completion time. This problem is closely related to scheduling a single machine with nonuniform processing speed. We show that for piecewise linear cost functions it is strongly NP-hard. The main contribution of this article is a tight analysis of the approximation guarantee of Smith’s rule under any convex or concave cost function. More specifically, for these wide classes of cost functions we reduce the task of determining a worst-case problem instance to a continuous optimization problem, which can be solved by standard algebraic or numerical methods. For polynomial cost functions with positive coefficients, it turns out that the tight approximation ratio can be calculated as the root of a univariate polynomial. We show that this approximation ratio is asymptotically equal to k(k − 1)s(k p 1), denoting by k the degree of the cost function. To overcome unrealistic worst-case instances, we also give tight bounds for the case of integral processing times that are parameterized by the maximum and total processing time."
]
}
|
1204.0897
|
2950803469
|
We propose a new approach to competitive analysis in online scheduling by introducing the novel concept of competitive-ratio approximation schemes. Such a scheme algorithmically constructs an online algorithm with a competitive ratio arbitrarily close to the best possible competitive ratio for any online algorithm. We study the problem of scheduling jobs online to minimize the weighted sum of completion times on parallel, related, and unrelated machines, and we derive both deterministic and randomized algorithms which are almost best possible among all online algorithms of the respective settings. We also generalize our techniques to arbitrary monomial cost functions and apply them to the makespan objective. Our method relies on an abstract characterization of online algorithms combined with various simplifications and transformations. We also contribute algorithmic means to compute the actual value of the best possi- ble competitive ratio up to an arbitrary accuracy. This strongly contrasts all previous manually obtained competitiveness results for algorithms and, most importantly, it reduces the search for the optimal com- petitive ratio to a question that a computer can answer. We believe that our concept can also be applied to many other problems and yields a new perspective on online algorithms in general.
|
The online makespan minimization problem has been extensively studied in a different online paradigm where jobs arrive one by one (see @cite_16 @cite_46 and references therein). Our model, in which jobs arrive online over time, is much less studied. In the identical parallel machine environment, Chen and Vestjens @cite_41 give nearly tight bounds on the optimal competitive ratio, @math , using a natural online variant of the well-known largest processing time first algorithm.
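The online LPT variant described above is easy to simulate. Here is a sketch under the usual model assumptions (jobs given as release-date/processing-time pairs, m identical machines, no preemption); names and data are illustrative:

```python
import heapq

def online_lpt_makespan(jobs, m):
    """Whenever a machine becomes idle, start the available unscheduled job
    with the largest processing time (the online LPT rule analyzed in
    @cite_41). jobs: list of (release_date, processing_time) pairs.
    Returns the makespan of the resulting schedule."""
    jobs = sorted(jobs)            # by release date
    idle = [0] * m                 # min-heap of machine idle times
    heapq.heapify(idle)
    avail = []                     # max-heap (negated) of available jobs
    i, makespan = 0, 0
    while i < len(jobs) or avail:
        t = heapq.heappop(idle)    # earliest idle machine
        if not avail and i < len(jobs) and jobs[i][0] > t:
            t = jobs[i][0]         # nothing available: wait for next release
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(avail, -jobs[i][1])
            i += 1
        p = -heapq.heappop(avail)
        makespan = max(makespan, t + p)
        heapq.heappush(idle, t + p)
    return makespan
```

For example, on one machine a job released at time 5 forces idle time: `online_lpt_makespan([(0, 1), (5, 2)], 1)` finishes at time 7, and no online algorithm can avoid such waiting.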
|
{
"cite_N": [
"@cite_41",
"@cite_46",
"@cite_16"
],
"mid": [
"2038303146",
"1966527086",
""
],
"abstract": [
"We consider a parallel machine scheduling problem where jobs arrive over time. A set of independent jobs has to be scheduled on m identical machines, where preemption is not allowed and the number of jobs is unknown in advance. Each job becomes available at its release date, which is not known in advance, and its processing time becomes known at its arrival. We deal with the problem of minimizing the makespan, which is the time by which all jobs have been finished. We propose and analyze the following on-line LPT algorithm: At any time a machine becomes available for processing, schedule an available job with the largest processing time. We prove that this algorithm has a performance guarantee of 3/2, and that this bound is tight. Furthermore, we show that any on-line algorithm will have a performance bound of at least 1.3473. This bound is improved to (5 - √5)/2 ≈ 1.3820 for m = 2.",
"The problem considered here is the same as the one discussed in [G. Galambos and G. J. Woeginger, eds., SIAM J. Comput., 22 (1993), pp. 349--355]. It is an m-machine online scheduling problem in which we wish to minimize the competitive ratio for the makespan objective. In this paper, we show that @math is a lower bound on this competitive ratio for m=4. In particular, we show how to force a lower bound of @math for any positive @math . This reduces the gap between the performance of known algorithms [S. Albers, in Proceedings of the 29th Annual ACM Symposium on Theory of Computing, ACM, New York, 1997, pp. 130--139] and the lower bound. The method used introduces an approach to building the task master's strategy.",
""
]
}
|
1204.0897
|
2950803469
|
We propose a new approach to competitive analysis in online scheduling by introducing the novel concept of competitive-ratio approximation schemes. Such a scheme algorithmically constructs an online algorithm with a competitive ratio arbitrarily close to the best possible competitive ratio for any online algorithm. We study the problem of scheduling jobs online to minimize the weighted sum of completion times on parallel, related, and unrelated machines, and we derive both deterministic and randomized algorithms which are almost best possible among all online algorithms of the respective settings. We also generalize our techniques to arbitrary monomial cost functions and apply them to the makespan objective. Our method relies on an abstract characterization of online algorithms combined with various simplifications and transformations. We also contribute algorithmic means to compute the actual value of the best possi- ble competitive ratio up to an arbitrary accuracy. This strongly contrasts all previous manually obtained competitiveness results for algorithms and, most importantly, it reduces the search for the optimal com- petitive ratio to a question that a computer can answer. We believe that our concept can also be applied to many other problems and yields a new perspective on online algorithms in general.
|
In the offline setting, polynomial time approximation schemes are known for identical @cite_21 and uniform machines @cite_28 . For unrelated machines, the problem is NP-hard to approximate with a better ratio than @math and a @math -approximation is known @cite_36 . If the number of machines is bounded by a constant there is a PTAS @cite_36 .
|
{
"cite_N": [
"@cite_28",
"@cite_21",
"@cite_36"
],
"mid": [
"2014071862",
"2093979815",
"2013170874"
],
"abstract": [
"We present a polynomial approximation scheme for the minimum makespan problem on uniform parallel processors. More specifically, the problem is to find a schedule for a set of independent jobs on a collection of machines of different speeds so that the last job to finish is completed as quickly as possible. We give a family of polynomial-time algorithms @math such that @math delivers a solution that is within a relative error @math of the optimum. This is a dramatic improvement over previously known algorithms; the best performance guarantee previously proved for a polynomial-time algorithm ensured a relative error no more than 40 percent. The technique employed is the dual approximation approach, where infeasible but superoptimal solutions for a related (dual) problem are converted to the desired feasible but possibly suboptimal solution.",
"The problem of scheduling a set of n jobs on m identical machines so as to minimize the makespan time is perhaps the most well-studied problem in the theory of approximation algorithms for NP-hard optimization problems. In this paper the strongest possible type of result for this problem, a polynomial approximation scheme, is presented. More precisely, for each ε, an algorithm that runs in time O((n/ε)^(1/ε²)) and has relative error at most ε is given. In addition, more practical algorithms for ε = 1/5 + 2^(-k) and ε = 1/6 + 2^(-k), which have running times O(n(k + log n)) and O(n(km⁴ + log n)), are presented. The techniques of analysis used in proving these results are extremely simple, especially in comparison with the baroque weighting techniques used previously. The scheme is based on a new approach to constructing approximation algorithms, which is called dual approximation algorithms, where the aim is to find superoptimal, but infeasible, solutions, and the performance is measured by the degree of infeasibility allowed. This notion should find wide applicability in its own right and should be considered for any optimization problem where traditional approximation algorithms have been particularly elusive.",
"We present a new class of randomized approximation algorithms for unrelated parallel machine scheduling problems with the average weighted completion time objective. The key idea is to assign jobs randomly to machines with probabilities derived from an optimal solution to a linear programming (LP) relaxation in time-indexed variables. Our main results are a @math -approximation algorithm for the model with individual job release dates and a @math -approximation algorithm if all jobs are released simultaneously. We obtain corresponding bounds on the quality of the LP relaxation. It is an interesting implication for identical parallel machine scheduling that jobs are randomly assigned to machines, in which each machine is equally likely. In addition, in this case the algorithm has running time O(n log n) and performance guarantee 2. Moreover, the approximation result for identical parallel machine scheduling applies to the on-line setting in which jobs arrive over time as well, with no difference in performance guarantee."
]
}
|
1204.1185
|
2949977494
|
For complex data types such as multimedia, traditional data management methods are not suitable. Instead of attribute matching approaches, access methods based on object similarity are becoming popular. Recently, this resulted in an intensive research of indexing and searching methods for the similarity-based retrieval. Nowadays, many efficient methods are already available, but using them to build an actual search system still requires specialists that tune the methods and build the system manually. Several attempts have already been made to provide a more convenient high-level interface in a form of query languages for such systems, but these are limited to support only basic similarity queries. In this paper, we propose a new language that allows to formulate content-based queries in a flexible way, taking into account the functionality offered by a particular search engine in use. To ensure this, the language is based on a general data model with an abstract set of operations. Consequently, the language supports various advanced query operations such as similarity joins, reverse nearest neighbor queries, or distinct kNN queries, as well as multi-object and multi-modal queries. The language is primarily designed to be used with the MESSIF framework for content-based searching but can be employed by other retrieval systems as well.
|
The majority of the early proposals for practical query languages are based on SQL or its object-oriented alternative, OQL @cite_6 . Paper @cite_14 describes MOQL, a multimedia query language based on OQL which supports spatial, temporal and containment predicates for searching in images or video. However, similarity-based searching is not supported in MOQL. The authors of @cite_0 introduce new operators sim and match for object similarity and concept-object relevance, respectively. However, it is not possible to limit the similarity or to define the way it is evaluated. In @cite_2 , a more flexible similarity operator for near and nearest neighbors is provided, but it still does not allow the user to choose the similarity measure.
|
{
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_6",
"@cite_2"
],
"mid": [
"",
"166959470",
"2123975697",
"2135857164"
],
"abstract": [
"",
"We describe a general multimedia query language, called MOQL, based on ODMG's Object Query Language (OQL). In contrast to previous multimedia query languages that are either designed for one particular medium (e.g. images) or specialized for a particular application (e.g., medical imaging), MOQL is general in its treatment of multiple media and different applications. The language includes constructs to capture the temporal and spatial relationships in multimedia data as well as functions for query presentation. We illustrate the language features by query examples. The language is implemented for a multimedia database built on top of ObjectStore.",
"This book is the first of its kind and is produced as a result of the efforts by a consortium of database companies called the Object Database Management Group (ODMG). With this book, standards are defined for object management systems and this will be the foundational book for object-oriented database product.",
"Searching for similar objects (in terms of near and nearest neighbors) of a given query object from a large set is an essential task in many applications. Recent years have seen great progress towards efficient algorithms for this task. This paper takes a query language perspective, equipping SQL with the near and nearest search capability by adding a user-defined-predicate, called NN-UDP. The predicate indicates, among a set of objects, if an object is a near or nearest-neighbor of a given query object. The use of the NN-UDP makes the queries involving similarity searches intuitive to express. Unfortunately, traditional cost-based optimization methods that deal with traditional UDPs do not work well for such SQL queries. Better execution plans are possible with the introduction of a new operator, called NN-OP, which finds the near or nearest neighbors from a set of objects for a given query object. An optimization algorithm proposed in this paper can produce these plans that take advantage of the efficient search algorithms developed in recent years. To assess the proposed optimization algorithm, this paper focuses on applications that deal with streaming time series. Experimental results show that the optimization strategy is effective."
]
}
|
1204.1185
|
2949977494
|
For complex data types such as multimedia, traditional data management methods are not suitable. Instead of attribute matching approaches, access methods based on object similarity are becoming popular. Recently, this resulted in an intensive research of indexing and searching methods for the similarity-based retrieval. Nowadays, many efficient methods are already available, but using them to build an actual search system still requires specialists that tune the methods and build the system manually. Several attempts have already been made to provide a more convenient high-level interface in a form of query languages for such systems, but these are limited to support only basic similarity queries. In this paper, we propose a new language that allows to formulate content-based queries in a flexible way, taking into account the functionality offered by a particular search engine in use. To ensure this, the language is based on a general data model with an abstract set of operations. Consequently, the language supports various advanced query operations such as similarity joins, reverse nearest neighbor queries, or distinct kNN queries, as well as multi-object and multi-modal queries. The language is primarily designed to be used with the MESSIF framework for content-based searching but can be employed by other retrieval systems as well.
|
Much more mature extensions of relational DBMSs and SQL are presented in @cite_13 @cite_4 @cite_9 . The concept of @cite_13 @cite_4 makes it possible to integrate similarity queries into SQL, using new data types with associated similarity measures and an extended select command. The authors also describe the processing of such extended SQL and discuss optimization issues. Even though the proposed SQL extension is less flexible than we need, the presented concept is sound and elaborate. The study @cite_9 only deals with image retrieval but also presents an extension of the PostgreSQL database management system that enables users to define feature extractors, create access methods and query objects by similarity. This solution is less complex than the previous one but, on the other hand, it allows users to adjust the weights of individual features for the evaluation of similarity.
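The core of such a similarity predicate is a kNN evaluation over feature vectors. The following is a minimal sketch, not the cited systems' actual implementation: the table layout, function name, and default Euclidean metric are all assumptions made for illustration:

```python
import math

def knn_select(query, table, k, dist=None):
    """Evaluate a kNN similarity predicate the way an extended SQL engine
    might: rank rows of (id, feature_vector) by distance to the query vector
    and return the ids of the k nearest. The metric is pluggable, defaulting
    to Euclidean distance, mirroring the user-adjustable similarity measures
    discussed above."""
    if dist is None:
        dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(table, key=lambda row: dist(query, row[1]))
    return [row_id for row_id, _ in ranked[:k]]

rows = [("a", (0.0, 0.0)), ("b", (1.0, 1.0)), ("c", (5.0, 5.0))]
nearest = knn_select((0.2, 0.2), rows, 2)   # the two rows closest to the query
```

A production engine would replace the linear scan with a metric access method, which is precisely the optimization issue the cited works address.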
|
{
"cite_N": [
"@cite_9",
"@cite_13",
"@cite_4"
],
"mid": [
"1986842437",
"",
"2121346118"
],
"abstract": [
"The last decade witnessed a growing interest in research on content-based image retrieval (CBIR) and related areas. Several systems for managing and retrieving images have been proposed, each one tailored to a specific application. Functionalities commonly available in CBIR systems include: storage and management of complex data, development of feature extractors to support similarity queries, development of index structures to speed up image retrieval, and design and implementation of an intuitive graphical user interface tailored to each application. To facilitate the development of new CBIR systems, we propose an image-handling extension to the relational database management system (RDBMS) PostgreSQL. This extension, called PostgreSQL-IE, is independent of the application and provides the advantage of being open source and portable. The proposed system extends the functionalities of the structured query language SQL with new functions that are able to create new feature extraction procedures, new feature vectors as combinations of previously defined features, and new access methods, as well as to compose similarity queries. PostgreSQL-IE makes available a new image data type, which permits the association of various images with a given unique image attribute. This resource makes it possible to combine visual features of different images in the same feature vector. To validate the concepts and resources available in the proposed extended RDBMS, we propose a CBIR system applied to the analysis of mammograms using PostgreSQL-IE.",
"",
"Modern database applications are increasingly employing database management systems (DBMS) to store multimedia and other complex data. To adequately support the queries required to retrieve these kinds of data, the DBMS need to answer similarity queries. However, the standard structured query language (SQL) does not provide effective support for such queries. This paper proposes an extension to SQL that seamlessly integrates syntactical constructions to express similarity predicates to the existing SQL syntax and describes the implementation of a similarity retrieval engine that allows posing similarity queries using the language extension in a relational DBMS. The engine allows the evaluation of every aspect of the proposed extension, including the data definition language and data manipulation language statements, and employs metric access methods to accelerate the queries. Copyright © 2008 John Wiley & Sons, Ltd."
]
}
|
1204.1185
|
2949977494
|
For complex data types such as multimedia, traditional data management methods are not suitable. Instead of attribute matching approaches, access methods based on object similarity are becoming popular. Recently, this resulted in an intensive research of indexing and searching methods for the similarity-based retrieval. Nowadays, many efficient methods are already available, but using them to build an actual search system still requires specialists that tune the methods and build the system manually. Several attempts have already been made to provide a more convenient high-level interface in a form of query languages for such systems, but these are limited to support only basic similarity queries. In this paper, we propose a new language that allows to formulate content-based queries in a flexible way, taking into account the functionality offered by a particular search engine in use. To ensure this, the language is based on a general data model with an abstract set of operations. Consequently, the language supports various advanced query operations such as similarity joins, reverse nearest neighbor queries, or distinct kNN queries, as well as multi-object and multi-modal queries. The language is primarily designed to be used with the MESSIF framework for content-based searching but can be employed by other retrieval systems as well.
|
Recently, we could also witness interest in XML-based languages for similarity searching. In particular, the MPEG committee has initiated a call for proposals for the MPEG Query Format (MPQF). The objective is to enable easier and interoperable access to multimedia data across search engines and repositories. As described in @cite_7 , the MPQF consists of three fundamental parts -- input query type, output query type, and query management tools. The format supports various query types (by example, by keywords, etc.), spatial-temporal queries and queries based on user preferences. It also supports result formatting and foresees service discovery functionality. Among the various proposals we may highlight @cite_17 , which presents an MPEG-7 query language that also allows querying ontologies described in OWL syntax.
|
{
"cite_N": [
"@cite_7",
"@cite_17"
],
"mid": [
"2057640202",
"1973642028"
],
"abstract": [
"The growth of multimedia is increasing the need for standards for accessing and searching distributed repositories. The moving picture experts group (MPEG) is developing the MPEG query format (MPQF) to standardize this interface as part of MPEG-7. The objective is to make multimedia access and search easier and interoperable across search engines and repositories. This article describes the MPQF and highlights some of the ways it goes beyond today's query languages by providing capabilities for multimedia query-by-example and spatiotemporal queries.",
"We present in this paper the MPEG-7 Query Language (MP7QL), a powerful query language that we have developed for querying MPEG-7 descriptions, as well as its compatible Filtering and Search Preferences (FASP) model. The MP7QL has the MPEG-7 as data model and allows for querying every aspect of an MPEG-7 multimedia content description. It allows the users to express the conditions that should hold for the multimedia content returned to them regarding semantics, low-level visual and audio features and media-related aspects. The MP7QL queries may utilize the users' FASP and Usage History as context, thus allowing for personalized multimedia content retrieval. The FASP model supported is compatible with the MP7QL and has the model of the standard MPEG-7 FASPs as a special case. The proposed FASPs essentially are MP7QL queries. Both the MP7QL and its compatible FASP model allow for the exploitation of domain knowledge encoded using pure MPEG-7 constructs. In addition, they allow the explicit specification of boolean operators and or preference values in order to allow both the combination of the query conditions according to the user intentions and the expression of the importance of the individual conditions for the users. The MP7QL query results are represented as MPEG-7 documents, guaranteeing the closure of the results within the MPEG-7 space. The MP7QL and the FASP model have been expressed using both XML Schema and OWL syntax. An implementation of the MP7QL, on top of an XML Native Database is currently in progress. A real world-world evaluation study on the expressive power of the MP7QL shows that it covers both general purpose and domain specific requirements in multimedia content retrieval."
]
}
|
1204.1185
|
2949977494
|
For complex data types such as multimedia, traditional data management methods are not suitable. Instead of attribute matching approaches, access methods based on object similarity are becoming popular. Recently, this resulted in an intensive research of indexing and searching methods for the similarity-based retrieval. Nowadays, many efficient methods are already available, but using them to build an actual search system still requires specialists that tune the methods and build the system manually. Several attempts have already been made to provide a more convenient high-level interface in a form of query languages for such systems, but these are limited to support only basic similarity queries. In this paper, we propose a new language that allows to formulate content-based queries in a flexible way, taking into account the functionality offered by a particular search engine in use. To ensure this, the language is based on a general data model with an abstract set of operations. Consequently, the language supports various advanced query operations such as similarity joins, reverse nearest neighbor queries, or distinct kNN queries, as well as multi-object and multi-modal queries. The language is primarily designed to be used with the MESSIF framework for content-based searching but can be employed by other retrieval systems as well.
|
Last of all, let us mention several efforts to create easy-to-use query tools that are not based on either XML or SQL. The authors of @cite_21 propose to issue queries by filling in table skeletons and assigning weights to individual clauses, with complex queries being realized by specifying a (visual) condition tree. In @cite_15 , a simple language based on the Lucene query syntax is proposed. Finally, @cite_10 describes a rich ontological query language that works with structured English sentences but requires advanced image segmentation and domain knowledge.
|
{
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_10"
],
"mid": [
"2156691079",
"2127906531",
"2122027900"
],
"abstract": [
"One of the most important bits of every search engine is the query interface. Complex interfaces may cause users to struggle to learn how to use them. An example is the query language SQL. It is really powerful, but usually remains hidden from the common user. On the other hand the usage of current languages for Internet search engines is very simple and straightforward. Even beginners are able to find relevant documents. This paper presents a hybrid query language suitable for both image and text retrieval. It is very similar to those of a full text search engine but also includes some extensions required for content based image retrieval. The language is extensible to cover arbitrary feature vectors and handle fuzzy queries.",
"The visual database query language QBE (query by example) is a classical, declarative query language based on the relational domain calculus. However, due to insufficient support of vagueness QBE is not an appropriate query language for formulating similarity queries required in the context of multimedia databases. In this work we propose the query language WS-QBE which combines a schema to weight query terms as well as concepts from fuzzy logic and QBE into one language. WS-QBE enables a visual, declarative formulation of complex similarity queries. The semantics of WS-QBE is defined by a mapping of WS-QBE queries onto the similarity domain calculus SDC which is proposed here, too.",
"The paper discusses the design and implementation of the oquel query language for content based image retrieval. The retrieval process takes place entirely within the ontological domain defined by the syntax and semantics of the user query. Since the system does not rely on the pre-annotation of images with sentences in the language, the format of text queries is highly flexible. The language is also extensible to allow for the definition of higher level terms such as \"cars\", \"people\", \"buildings\", etc. on the basis of existing language constructs. Images are retrieved by deriving an abstract syntax tree form of a textual user query and probabilistically evaluating it by analysing the composition and perceptual properties of salient image regions in light of the query. The matching process utilises automatically extracted image segmentation and classification information and can incorporate any other feature extraction mechanisms or contextual knowledge available at processing time."
]
}
|
1204.1231
|
1644339822
|
In this paper, we propose a framework to study a general class of strategic behavior in voting, which we call vote operations. We prove the following theorem: if we fix the number of alternatives, generate @math votes i.i.d. according to a distribution @math , and let @math go to infinity, then for any @math , with probability at least @math , the minimum number of operations that are needed for the strategic individual to achieve her goal falls into one of the following four categories: (1) 0, (2) @math , (3) @math , and (4) @math . This theorem holds for any set of vote operations, any individual vote distribution @math , and any integer generalized scoring rule, which includes (but is not limited to) almost all commonly studied voting rules, e.g., approval voting, all positional scoring rules (including Borda, plurality, and veto), plurality with runoff, Bucklin, Copeland, maximin, STV, and ranked pairs. We also show that many well-studied types of strategic behavior fall under our framework, including (but not limited to) constructive destructive manipulation, bribery, and control by adding deleting votes, margin of victory, and minimum manipulation coalition size. Therefore, our main theorem naturally applies to these problems.
|
Three related papers. First, the dichotomy theorem in @cite_3 implies that, informally, when the votes are drawn i.i.d. from some distribution, with probability that goes to @math the solution to constructive and destructive UCO is either @math or approximately @math for some favored alternatives. However, this result only works for the UCO problem and for some distributions over the votes.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2100161145"
],
"abstract": [
"We introduce a class of voting rules called generalized scoring rules. Under such a rule, each vote generates a vector of k scores, and the outcome of the voting rule is based only on the sum of these vectors---more specifically, only on the order (in terms of score) of the sum's components. This class is extremely general: we do not know of any commonly studied rule that is not a generalized scoring rule. We then study the coalitional manipulation problem for generalized scoring rules. We prove that under certain natural assumptions, if the number of manipulators is O(np) (for any p 1 2) and o(n), then the probability that a random profile is manipulable (to any possible winner under the voting rule) is 1--O(e--Ω(n2p--1)). We also show that common voting rules satisfy these conditions (for the uniform distribution). These results generalize earlier results by Procaccia and Rosenschein as well as even earlier results on the probability of an election being tied."
]
}
|
1204.1231
|
1644339822
|
In this paper, we propose a framework to study a general class of strategic behavior in voting, which we call vote operations. We prove the following theorem: if we fix the number of alternatives, generate @math votes i.i.d. according to a distribution @math , and let @math go to infinity, then for any @math , with probability at least @math , the minimum number of operations that are needed for the strategic individual to achieve her goal falls into one of the following four categories: (1) 0, (2) @math , (3) @math , and (4) @math . This theorem holds for any set of vote operations, any individual vote distribution @math , and any integer generalized scoring rule, which includes (but is not limited to) almost all commonly studied voting rules, e.g., approval voting, all positional scoring rules (including Borda, plurality, and veto), plurality with runoff, Bucklin, Copeland, maximin, STV, and ranked pairs. We also show that many well-studied types of strategic behavior fall under our framework, including (but not limited to) constructive destructive manipulation, bribery, and control by adding deleting votes, margin of victory, and minimum manipulation coalition size. Therefore, our main theorem naturally applies to these problems.
|
Second, it was proved in @cite_51 that for any non-redundant generalized scoring rule that satisfies a continuity condition, when the votes are drawn i.i.d. and we let the number of voters @math go to infinity, either with probability that can be arbitrarily close to @math the margin of victory is @math , or with probability that can be arbitrarily close to @math the margin of victory is @math . It is easy to show that for non-redundant voting rules, the margin of victory is never @math or @math . Though it was shown in @cite_51 that many commonly studied voting rules are GSRs that satisfy such a continuity condition, in general it is not clear how restrictive the continuity condition is. More importantly, the result only works for the margin of victory problem.
|
{
"cite_N": [
"@cite_51"
],
"mid": [
"2065358633"
],
"abstract": [
"The margin of victory of an election, defined as the smallest number k such that k voters can change the winner by voting differently, is an important measurement for robustness of the election outcome. It also plays an important role in implementing efficient post-election audits, which has been widely used in the United States to detect errors or fraud caused by malfunctions of electronic voting machines. In this paper, we investigate the computational complexity and (in)approximability of computing the margin of victory for various voting rules, including approval voting, all positional scoring rules (which include Borda, plurality, and veto), plurality with runoff, Bucklin, Copeland, maximin, STV, and ranked pairs. We also prove a dichotomy theorem, which states that for all continuous generalized scoring rules, including all voting rules studied in this paper, either with high probability the margin of victory is Θ(√n), or with high probability the margin of victory is Θ(n), where n is the number of voters. Most of our results are quite positive, suggesting that the margin of victory can be efficiently computed. This sheds some light on designing efficient post-election audits for voting rules beyond the plurality rule."
]
}
|
1204.1231
|
1644339822
|
In this paper, we propose a framework to study a general class of strategic behavior in voting, which we call vote operations. We prove the following theorem: if we fix the number of alternatives, generate @math votes i.i.d. according to a distribution @math , and let @math go to infinity, then for any @math , with probability at least @math , the minimum number of operations that are needed for the strategic individual to achieve her goal falls into one of the following four categories: (1) 0, (2) @math , (3) @math , and (4) @math . This theorem holds for any set of vote operations, any individual vote distribution @math , and any integer generalized scoring rule, which includes (but is not limited to) almost all commonly studied voting rules, e.g., approval voting, all positional scoring rules (including Borda, plurality, and veto), plurality with runoff, Bucklin, Copeland, maximin, STV, and ranked pairs. We also show that many well-studied types of strategic behavior fall under our framework, including (but not limited to) constructive destructive manipulation, bribery, and control by adding deleting votes, margin of victory, and minimum manipulation coalition size. Therefore, our main theorem naturally applies to these problems.
|
Third, in @cite_28 , the authors investigated the distribution over the minimum manipulation coalition size for positional scoring rules when the votes are drawn i.i.d. from the uniform distribution. However, it is not clear how their techniques can be extended beyond the uniform distributions and positional scoring rules, which are a very special case of generalized scoring rules. Moreover, the paper only focused on the minimum manipulation coalition size problem.
|
{
"cite_N": [
"@cite_28"
],
"mid": [
"1969134666"
],
"abstract": [
"We consider the problem of manipulation of elections using positional voting rules under impartial culture voter behaviour. We consider both the logical possibility of coalitional manipulation, and the number of voters who must be recruited to form a manipulating coalition. It is shown that the manipulation problem may be well approximated by a very simple linear program in two variables. This permits a comparative analysis of the asymptotic (large-population) manipulability of the various rules. It is seen that the manipulation resistance of positional rules with 5 or 6 (or more) candidates is quite different from the more commonly analyzed three- and four-candidate cases."
]
}
|
1204.1231
|
1644339822
|
In this paper, we propose a framework to study a general class of strategic behavior in voting, which we call vote operations. We prove the following theorem: if we fix the number of alternatives, generate @math votes i.i.d. according to a distribution @math , and let @math go to infinity, then for any @math , with probability at least @math , the minimum number of operations that are needed for the strategic individual to achieve her goal falls into one of the following four categories: (1) 0, (2) @math , (3) @math , and (4) @math . This theorem holds for any set of vote operations, any individual vote distribution @math , and any integer generalized scoring rule, which includes (but is not limited to) almost all commonly studied voting rules, e.g., approval voting, all positional scoring rules (including Borda, plurality, and veto), plurality with runoff, Bucklin, Copeland, maximin, STV, and ranked pairs. We also show that many well-studied types of strategic behavior fall under our framework, including (but not limited to) constructive destructive manipulation, bribery, and control by adding deleting votes, margin of victory, and minimum manipulation coalition size. Therefore, our main theorem naturally applies to these problems.
|
Our results have both negative and positive implications. On the negative side, our results provide yet more evidence that computational complexity is not a strong barrier against strategic behavior, because the strategic individual now has some information about the number of operations that are needed, without spending any computational cost or even without looking at the input instance. Although the estimate given by our theorem may not be very precise (because we do not know which of the four cases a given instance belongs to), such an estimate may be exploited to design effective algorithms that facilitate strategic behavior. On the positive side, this easiness of computation is not always a bad thing: sometimes we want to do such computation in order to test how robust a given preference profile is. For example, computing the margin of victory is an important component in designing novel risk-limiting audit methods @cite_36 @cite_50 @cite_51 @cite_9 @cite_1 @cite_4 @cite_42 @cite_37 .
|
{
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_36",
"@cite_9",
"@cite_42",
"@cite_1",
"@cite_50",
"@cite_51"
],
"mid": [
"",
"",
"2404785830",
"2051895880",
"1575808655",
"2112349230",
"2293376539",
"2065358633"
],
"abstract": [
"",
"",
"Efficient post-election audits select the number of machines or precincts to audit based in part on the margin of victory (the number of ballots that must be changed in order to change the outcome); a close election needs more auditing than a landslide victory. For a simple \"first-pastthe-post\" election, the margin is easily computed based on the number of votes the first and second place candidates received. However, for instant runoff voting (IRV) elections, it is not immediately obvious how to compute the margin of victory. This paper presents algorithmic techniques for computing the margin of victory for IRV elections. We evaluate our method by attempting to compute the margin of victory for 25 IRV elections in the United States. The margin of victory computed can then be used to conduct post-election audits more effectively for IRV elections.",
"There are many sources of error in counting votes: the apparent winner might not be the rightful winner. Hand tallies of the votes in a random sample of precincts can be used to test the hypothesis that a full manual recount would find a different outcome. This paper develops a conservative sequential test based on the vote-counting errors found in a hand tally of a simple or stratified random sample of precincts. The procedure includes a natural escalation: If the hypothesis that the apparent outcome is incorrect is not rejected at stage s, more precincts are audited. Eventually, either the hypothesis is rejected-and the apparent outcome is confirmed-or all precincts have been audited and the true outcome is known. The test uses a priori bounds on the overstatement of the margin that could result from error in each precinct. Such bounds can be derived from the reported counts in each precinct and upper hounds on the number of votes cast in each precinct. The test allows errors in different precincts to be treated differently to reflect voting technology or precinct sizes. It is not optimal, but it is conservative: the chance of erroneously confirming the outcome of a contest if a full manual recount would show a different outcome is no larger than the nominal significance level. The approach also gives a conservative P-value for the hypothesis that a full manual recount would find a different outcome, given the errors found in a fixed size sample. This is illustrated with two contests from November, 2006: the U.S. Senate race in Minnesota and a school hoard race for the Sausalito Marin City School District in California, a small contest in which voters could vote for up to three candidates.",
"Risk-limiting post-election audits have a pre-specified minimum chance of requiring a full hand count if the outcome of the contest is not the outcome that a full hand count of the audit trail would show. The first risk-limiting audits were performed in 2008 in California. Three refinements to increase efficiency were tested in Marin and Yolo counties, California, in November 2009. The first refinement is to audit a collection of contests as a group by auditing a random sample of batches of ballots and combining observed discrepancies in the contests represented in those batches in a particular way: the maximum across-contest relative overstatements (MACRO). MACRO audits control the familywise error rate (the chance that one or more incorrect outcomes fails to be corrected by a full hand count) at a cost that can be lower than that of controlling the per-comparison error rate with independent audits. A risk-limiting audit for the entire collection of contests can be built on MACRO using a variety of probability sampling schemes and ways of combining MACRO across batches. The second refinement is to base the test on the Kaplan-Markov confidence bound, drawing batches with probability proportional to an error bound (PPEB) on the MACRO. The Kaplan-Markov bound is especially well suited to sequential testing: After each batch is audited, a simple calculation--a product of fractions--determines whether to audit another batch or to stop the audit and confirm the apparent outcomes. The third refinement is to audit individual ballots rather than larger batches of ballots, comparing the cast vote record (the machine interpretation of the voter's marks on the ballot) to a human interpretation of the voter's intent. Such single-ballot audits can greatly reduce workload: When the outcome is correct, the number of ballots that must be audited to attain a given risk limit is roughly proportional to the number of ballots per batch. All three of these refinements can be used together, resulting in extremely efficient risk-limiting audits.",
"Post-election audits use the discrepancy between machine counts and a hand tally of votes in a random sample of precincts to infer whether error affected the electoral outcome. The maximum relative overstatement of pairwise margins (MRO) quantifies that discrepancy. The electoral outcome a full hand tally shows must agree with the apparent outcome if the MRO is less than 1. This condition is sharper than previous ones when there are more than two candidates or when voters may vote for more than one candidate. For the 2006 U.S. Senate race in Minnesota, a test using MRO gives a P-value of 4.05% for the hypothesis that a full hand tally would find a different winner, less than half the value Stark [Ann. Appl. Statist. 2 (2008) 550-581] finds.",
"A general definition is proposed for the margin of victory of an election contest. That definition is applied to Instant Runoff Voting (IRV) and several estimates for the IRV margin of victory are described: two upper bounds and two lower bounds. Given round-by-round vote totals, the time complexity for calculating these bounds does not exceed O(C2 log C), where C is the number of candidates. It is also shown that calculating the larger and more useful of the two lower bounds can be viewed, in part, as solving a longest path problem on a weighted, directed, acyclic graph. Worst-case analysis shows that neither these estimates, nor any estimates based only on tabulation round-by-round vote totals, are guaranteed to be within a constant factor of the margin of victory. These estimates are calculated for IRV elections in Australia and California. Pseudo code for calculating these estimates is provided.",
"The margin of victory of an election, defined as the smallest number k such that k voters can change the winner by voting differently, is an important measurement for robustness of the election outcome. It also plays an important role in implementing efficient post-election audits, which has been widely used in the United States to detect errors or fraud caused by malfunctions of electronic voting machines. In this paper, we investigate the computational complexity and (in)approximability of computing the margin of victory for various voting rules, including approval voting, all positional scoring rules (which include Borda, plurality, and veto), plurality with runoff, Bucklin, Copeland, maximin, STV, and ranked pairs. We also prove a dichotomy theorem, which states that for all continuous generalized scoring rules, including all voting rules studied in this paper, either with high probability the margin of victory is Θ(√n), or with high probability the margin of victory is Θ(n), where n is the number of voters. Most of our results are quite positive, suggesting that the margin of victory can be efficiently computed. This sheds some light on designing efficient post-election audits for voting rules beyond the plurality rule."
]
}
|
1204.1113
|
2949297756
|
We present a deterministic 2^O(t)q^ (t-2)(t-1)+o(1) algorithm to decide whether a univariate polynomial f, with exactly t monomial terms and degree <q, has a root in F_q. A corollary of our method --- the first with complexity sub-linear in q when t is fixed --- is that the nonzero roots in F_q can be partitioned into at most 2 t-1 (q-1)^ (t-2)(t-1) cosets of two subgroups S_1,S_2 of F^*_q, with S_1 in S_2. Another corollary is the first deterministic sub-linear algorithm for detecting common degree one factors of k-tuples of t-nomials in F_q[x] when k and t are fixed. When t is not fixed we show that each of the following problems is NP-hard with respect to BPP-reductions, even when p is prime: (1) detecting roots in F_p for f, (2) deciding whether the square of a degree one polynomial in F_p[x] divides f, (3) deciding whether the discriminant of f vanishes, (4) deciding whether the gcd of two t-nomials in F_p[x] has positive degree. Finally, we prove that if the complexity of root detection is sub-linear (in a refined sense), relative to the straight-line program encoding, then NEXP is not in P Poly.
|
One should recall that @math @cite_9 . So the conditional assertion of our last theorem indeed implies a new separation of complexity classes. It may actually be the case that there is no algorithm for detecting roots in @math better than brute-force search. Such a result would be in line with the Exponential Time Hypothesis @cite_2 and the widely-held belief in the cryptographic community that the only way to break a well-designed block cipher is by exhaustive search.
|
{
"cite_N": [
"@cite_9",
"@cite_2"
],
"mid": [
"2050393190",
"2494611990"
],
"abstract": [
"In this note, we demonstrate that a certain class of naturally occurring problems involving an oracle are solvable in random polynomial time, but not in deterministic polynomial time. This class of problems is especially interesting because a very slight change in the parameters of the problem yields one that does have a polynomial solution.",
"The problem of k-SAT is to determine if the given k-CNF has a satisfying solution. It is a celebrated open question as to whether it requires exponential time to solve k-SAT for k >= 3. Define s_k (for k >= 3) to be the infimum of {delta : there exists an O(2^{delta n}) algorithm for solving k-SAT}. Define ETH (Exponential-Time Hypothesis) for k-SAT as follows: for k >= 3, s_k > 0. In other words, for k >= 3, k-SAT does not have a subexponential-time algorithm. In this paper, we show that s_k is an increasing sequence assuming ETH for k-SAT. Let s_infinity be the limit of s_k. We will in fact show that s_k <= (1 - d/(ek)) s_infinity for some constant d > 0."
]
}
|
1204.0171
|
1838161406
|
In this study, a new Stacked Generalization technique called Fuzzy Stacked Generalization (FSG) is proposed to minimize the difference between N -sample and large-sample classification error of the Nearest Neighbor classifier. The proposed FSG employs a new hierarchical distance learning strategy to minimize the error difference. For this purpose, we first construct an ensemble of base-layer fuzzy k- Nearest Neighbor (k-NN) classifiers, each of which receives a different feature set extracted from the same sample set. The fuzzy membership values computed at the decision space of each fuzzy k-NN classifier are concatenated to form the feature vectors of a fusion space. Finally, the feature vectors are fed to a meta-layer classifier to learn the degree of accuracy of the decisions of the base-layer classifiers for meta-layer classification. Rather than the power of the individual base layer-classifiers, diversity and cooperation of the classifiers become an important issue to improve the overall performance of the proposed FSG. A weak base-layer classifier may boost the overall performance more than a strong classifier, if it is capable of recognizing the samples, which are not recognized by the rest of the classifiers, in its own feature space. The experiments explore the type of the collaboration among the individual classifiers required for an improved performance of the suggested architecture. Experiments on multiple feature real-world datasets show that the proposed FSG performs better than the state of the art ensemble learning algorithms such as Adaboost, Random Subspace and Rotation Forest. On the other hand, compatible performances are observed in the experiments on single feature multi-attribute datasets.
|
Various Stacked Generalization architectures have been proposed in the literature @cite_0 @cite_2 @cite_4 @cite_30 @cite_42 @cite_33 @cite_48 @cite_39 @cite_26 @cite_13 @cite_28 @cite_9 @cite_19 . Most of them aggregate the decisions of the base-layer classifiers at the meta-layer using either vector concatenation @cite_0 @cite_2 @cite_4 @cite_30 @cite_42 @cite_33 @cite_48 @cite_39 @cite_26 @cite_13 @cite_9 @cite_19 @cite_11 @cite_47 @cite_10 or majority voting @cite_28 .
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_4",
"@cite_33",
"@cite_28",
"@cite_48",
"@cite_9",
"@cite_42",
"@cite_10",
"@cite_39",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_47",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"1487121041",
"2133935274",
"2110410904",
"2125035899",
"2158615050",
"2162378706",
"2165980810",
"2107430826",
"2099993931",
"",
"2023294425",
"2110840966",
"2094116030",
"2124209734",
"2098017479"
],
"abstract": [
"",
"Stacked Generalization (SG) is an ensemble learning technique, which aims to increase the performance of individual classifiers by combining them under a hierarchical architecture. In many applications, this technique performs better than the individual classifiers. However, in some applications, the performance of the technique goes astray, for the reasons that are not well-known. In this work, the performance of Stacked Generalization technique is analyzed with respect to the performance of the individual classifiers under the architecture. This work shows that the success of the SG highly depends on how the individual classifiers share to learn the training set, rather than the performance of the individual classifiers. The experiments explore the learning mechanisms of SG to achieve the high performance. The relationship between the performance of the individual classifiers and that of SG is also investigated.",
"The main principle of stacked generalization is using a second-level generalizer to combine the outputs of base classifiers in an ensemble. In this paper, after presenting a short survey of the literature on stacked generalization, we propose to use regularized empirical risk minimization (RERM) as a framework for learning the weights of the combiner which generalizes earlier proposals and enables improved learning methods. Our main contribution is using group sparsity for regularization to facilitate classifier selection. In addition, we propose and analyze using the hinge loss instead of the conventional least squares loss. We performed experiments on three different ensemble setups with differing diversities on 13 real-world datasets of various applications. Results show the power of group sparse regularization over the conventional l\"1 norm regularization. We are able to reduce the number of selected classifiers of the diverse ensemble without sacrificing accuracy. With the non-diverse ensembles, we even gain accuracy on average by using group sparse regularization. In addition, we show that the hinge loss outperforms the least squares loss which was used in previous studies of stacked generalization.",
"Most classical template-based frontal face recognition techniques assume that multiple images per person are available for training, while in many real-world applications only one training image per person is available and the test images may be partially occluded or may vary in expressions. This paper addresses those problems by extending a previous local probabilistic approach presented by Martinez, using the self-organizing map (SOM) instead of a mixture of Gaussians to learn the subspace that represented each individual. Based on the localization of the training images, two strategies of learning the SOM topological space are proposed, namely to train a single SOM map for all the samples and to train a separate SOM map for each class, respectively. A soft k nearest neighbor (soft k-NN) ensemble method, which can effectively exploit the outputs of the SOM topological space, is also proposed to identify the unlabeled subjects. Experiments show that the proposed method exhibits high robust performance against the partial occlusions and variant expressions.",
"This article investigates the effectiveness of voting and stacked generalization -also known as stacking- in the context of information extraction (IE). A new stacking framework is proposed that accommodates well-known approaches for IE. The key idea is to perform cross-validation on the base-level data set, which consists of text documents annotated with relevant information, in order to create a meta-level data set that consists of feature vectors. A classifier is then trained using the new vectors. Therefore, base-level IE systems are combined with a common classifier at the meta-level. Various voting schemes are presented for comparing against stacking in various IE domains. Well known IE systems are employed at the base-level, together with a variety of classifiers at the meta-level. Results show that both voting and stacking work better when relying on probabilistic estimates by the base-level systems. Voting proved to be effective in most domains in the experiments. Stacking, on the other hand, proved to be consistently effective over all domains, doing comparably or better than voting and always better than the best base-level systems. Particular emphasis is also given to explaining the results obtained by voting and stacking at the meta-level, with respect to the varying degree of similarity in the output of the base-level systems.",
"Generalization continues to be one of the most important topic in neural networks and other classifiers. In the last number of years, number of different methods have been developed to improve generalization accuracy. Any classifier that uses induction to find the class concept from the training patterns will have a hard time to achieve an acceptable level of generalization accuracy when the problem to be learned is a statistically neutral problem. A problem is statistically neutral if the probability of mapping an input onto an output is always the chance value of 0.5. We examine the generalization behaviour of multilayer neural networks on learning statistically neutral problems using single level learning models (e.g., conventional cross-validation scheme) as well as multiple level learning models (e.g., stacked generalization method). We show that for statistically neutral problems such as parity and majority function, the stacked generalization scheme improves classification performance and generalization accuracy over the single level cross-validation model.",
"In this study, we introduce a new image classification technique using decision fusion. The proposed technique, called Meta-Fuzzified Yield Value (Meta-FYV), is based on two-layer Stacked Generalization (SG) architecture [1]. At the base-layer, the system, receives a set of feature vectors of various dimensions and dynamical ranges and outputs hypotheses through fuzzy transformations. Then, the hypotheses created by the base layer transformations are concatenated for building a regression equation at meta-layer. Experimental evidence indicates that the Meta-FYV is superior compared to one of the most successful Fuzzy SG methods, introduced by Akbas [2].",
"Meta decision trees (MDTs) are a method for combining multiple classifiers. We present an integration of the algorithm MLC4.5 for learning MDTs in the Weka data mining suite. We compare classifier ensembles combined with MDTs to bagged and boosted decision trees, and to classifier ensembles combined with other methods: voting and stacking with three different meta-level classifiers (ordinary decision trees, naive Bayes, and multi-response linear regression, MLR).",
"Music information retrieval (MIR) is an emerging research area that receives growing attention from both the research community and music industry. It addresses the problem of querying and retrieving certain types of music from large music data set. Classification is a fundamental problem in MIR. Many tasks in MIR can be naturally cast in a classification setting, such as genre classification, mood classification, artist recognition, instrument recognition, etc. Music annotation, a new research area in MIR that has attracted much attention in recent years, is also a classification problem in the general sense. Due to the importance of music classification in MIR research, rapid development of new methods, and lack of review papers on recent progress of the field, we provide a comprehensive review on audio-based classification in this paper and systematically summarize the state-of-the-art techniques for music classification. Specifically, we have stressed the difference in the features and the types of classifiers used for different classification tasks. This survey emphasizes on recent development of the techniques and discusses several open issues for future research.",
"This paper first reviews extreme learning machine (ELM) in light of coverpsilas theorem and interpolation for a comparative study with radial-basis function (RBF) networks. To improve generalization performance, a novel method of combining a set of single ELM networks using stacked generalization is proposed. Comparisons and experiment results show that the proposed stacking ELM outperforms a single ELM network for both regression and classification problems.",
"",
"We empirically evaluate several state-of-the-art methods for constructing ensembles of heterogeneous classifiers with stacking and show that they perform (at best) comparably to selecting the best classifier from the ensemble by cross validation. Among state-of-the-art stacking methods, stacking with probability distributions and multi-response linear regression performs best. We propose two extensions of this method, one using an extended set of meta-level features and the other using multi-response model trees to learn at the meta-level. We show that the latter extension performs better than existing stacking approaches and better than selecting the best classifier by cross validation.",
"This paper presents a new method for linearly combining multiple neural network classifiers based on the statistical pattern recognition theory. In our approach, several neural networks are first selected based on which works best for each class in terms of minimizing classification errors. Then, they are linearly combined to form an ideal classifier that exploits the strengths of the individual classifiers. In this approach, the minimum classification error criterion is utilized to estimate the optimal linear weights. In this formulation, because the classification decision rule is incorporated into the cost function, a more suitable better combination of weights for the classification objective could be obtained. Experimental results using artificial and real data sets show that the proposed method can construct a better combined classifier that outperforms the best single classifier in terms of overall classification errors for test data.",
"Recognizing actions in videos is rapidly becoming a topic of much research. To facilitate the development of methods for action recognition, several video collections, along with benchmark protocols, have previously been proposed. In this paper, we present a novel video database, the “Action Similarity LAbeliNg” (ASLAN) database, along with benchmark protocols. The ASLAN set includes thousands of videos collected from the web, in over 400 complex action classes. Our benchmark protocols focus on action similarity (same/not-same), rather than action classification, and testing is performed on never-before-seen actions. We propose this data set and benchmark as a means for gaining a more principled understanding of what makes actions different or similar, rather than learning the properties of particular action classes. We present baseline results on our benchmark, and compare them to human performance. To promote further study of action similarity techniques, we make the ASLAN database, benchmarks, and descriptor encodings publicly available to the research community.",
"Automatic image annotation systems available in the literature concatenate color, texture and or shape features in a single feature vector to learn a set of high level semantic categories using a single learning machine. This approach is quite naive to map the visual features to high level semantic information concerning the categories. Concatenation of many features with different visual properties and wide dynamical ranges may result in curse of dimensionality and redundancy problems. Additionally, it usually requires normalization which may cause an undesirable distortion in the feature space. An elegant way of reducing the effects of these problems is to design a dedicated feature space for each image category, depending on its content, and learn a range of visual properties of the whole image from a variety of feature sets. For this purpose, a two-layer ensemble learning system, called Supervised Annotation by Descriptor Ensemble (SADE), is proposed. SADE, initially, extracts a variety of low-level visual descriptors from the image. Each descriptor is, then, fed to a separate learning machine in the first layer. Finally, the meta-layer classifier is trained on the output of the first layer classifiers and the images are annotated by using the decision of the meta-layer classifier. This approach not only avoids normalization, but also reduces the effects of dimensional curse and redundancy. The proposed system outperforms a state-of-the-art automatic image annotation system, in an equivalent experimental setup.",
"Computer vision systems have demonstrated considerable improvement in recognizing and verifying faces in digital images. Still, recognizing faces appearing in unconstrained, natural conditions remains a challenging task. In this paper, we present a face-image, pair-matching approach primarily developed and tested on the “Labeled Faces in the Wild” (LFW) benchmark that reflects the challenges of face recognition from unconstrained images. The approach we propose makes the following contributions. 1) We present a family of novel face-image descriptors designed to capture statistics of local patch similarities. 2) We demonstrate how unlabeled background samples may be used to better evaluate image similarities. To this end, we describe a number of novel, effective similarity measures. 3) We show how labeled background samples, when available, may further improve classification performance, by employing a unique pair-matching pipeline. We present state-of-the-art results on the LFW pair-matching benchmarks. In addition, we show our system to be well suited for multilabel face classification (recognition) problem, on both the LFW images and on images from the laboratory controlled multi-PIE database."
]
}
|
1204.0171
|
1838161406
|
In this study, a new Stacked Generalization technique called Fuzzy Stacked Generalization (FSG) is proposed to minimize the difference between N-sample and large-sample classification error of the Nearest Neighbor classifier. The proposed FSG employs a new hierarchical distance learning strategy to minimize the error difference. For this purpose, we first construct an ensemble of base-layer fuzzy k-Nearest Neighbor (k-NN) classifiers, each of which receives a different feature set extracted from the same sample set. The fuzzy membership values computed at the decision space of each fuzzy k-NN classifier are concatenated to form the feature vectors of a fusion space. Finally, the feature vectors are fed to a meta-layer classifier to learn the degree of accuracy of the decisions of the base-layer classifiers for meta-layer classification. Rather than the power of the individual base-layer classifiers, diversity and cooperation of the classifiers become an important issue to improve the overall performance of the proposed FSG. A weak base-layer classifier may boost the overall performance more than a strong classifier, if it is capable of recognizing the samples, which are not recognized by the rest of the classifiers, in its own feature space. The experiments explore the type of the collaboration among the individual classifiers required for an improved performance of the suggested architecture. Experiments on multiple-feature real-world datasets show that the proposed FSG performs better than state-of-the-art ensemble learning algorithms such as Adaboost, Random Subspace and Rotation Forest. On the other hand, comparable performances are observed in the experiments on single-feature multi-attribute datasets.
|
In most of the experimental results given in the aforementioned studies, linear decision combination or aggregation methods provide comparable or better performances than the other combination methods. However, performance evaluations of the stacked generalization methods reported in the literature are not consistent with each other. This fact is demonstrated by Dzeroski and Zenko in @cite_19 , where they employ heterogeneous base-layer classifiers in their stacked generalization architecture. They report that their results contradict the observations of previous studies on SG in the literature. The contradictory results can be attributed to many non-linear relations among the parameters of the SG, such as the number and the structure of base-layer and meta-layer classifiers, and their feature, decision and fusion spaces.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"2023294425"
],
"abstract": [
"We empirically evaluate several state-of-the-art methods for constructing ensembles of heterogeneous classifiers with stacking and show that they perform (at best) comparably to selecting the best classifier from the ensemble by cross validation. Among state-of-the-art stacking methods, stacking with probability distributions and multi-response linear regression performs best. We propose two extensions of this method, one using an extended set of meta-level features and the other using multi-response model trees to learn at the meta-level. We show that the latter extension performs better than existing stacking approaches and better than selecting the best classifier by cross validation."
]
}
|
1204.0171
|
1838161406
|
In this study, a new Stacked Generalization technique called Fuzzy Stacked Generalization (FSG) is proposed to minimize the difference between N-sample and large-sample classification error of the Nearest Neighbor classifier. The proposed FSG employs a new hierarchical distance learning strategy to minimize the error difference. For this purpose, we first construct an ensemble of base-layer fuzzy k-Nearest Neighbor (k-NN) classifiers, each of which receives a different feature set extracted from the same sample set. The fuzzy membership values computed at the decision space of each fuzzy k-NN classifier are concatenated to form the feature vectors of a fusion space. Finally, the feature vectors are fed to a meta-layer classifier to learn the degree of accuracy of the decisions of the base-layer classifiers for meta-layer classification. Rather than the power of the individual base-layer classifiers, diversity and cooperation of the classifiers become an important issue to improve the overall performance of the proposed FSG. A weak base-layer classifier may boost the overall performance more than a strong classifier, if it is capable of recognizing the samples, which are not recognized by the rest of the classifiers, in its own feature space. The experiments explore the type of the collaboration among the individual classifiers required for an improved performance of the suggested architecture. Experiments on multiple-feature real-world datasets show that the proposed FSG performs better than state-of-the-art ensemble learning algorithms such as Adaboost, Random Subspace and Rotation Forest. On the other hand, comparable performances are observed in the experiments on single-feature multi-attribute datasets.
|
The employment of fuzzy decisions in ensemble learning algorithms is analyzed in @cite_33 @cite_17 @cite_38 . The authors of @cite_33 use fuzzy @math -NN algorithms as base-layer classifiers, and employ a linearly weighted voting method to combine the fuzzy decisions for Face Recognition. Cho and Kim @cite_17 combine the decisions of Neural Networks implemented as the base-layer classifiers using a fuzzy combination rule called the fuzzy integral. Kuncheva @cite_38 experimentally compares various fuzzy and crisp combination methods, including the fuzzy integral and voting, to boost classifier performance in Adaboost. In their experimental results, the classification algorithms that implement fuzzy rules outperform the algorithms that implement crisp rules. However, the effect of employing fuzzy rules on the classification performance of SG is left as an open problem.
|
{
"cite_N": [
"@cite_38",
"@cite_33",
"@cite_17"
],
"mid": [
"2109726006",
"2110410904",
"2098351657"
],
"abstract": [
"Boosting is recognized as one of the most successful techniques for generating classifier ensembles. Typically, the classifier outputs are combined by the weighted majority vote. The purpose of this study is to demonstrate the advantages of some fuzzy combination methods for ensembles of classifiers designed by Boosting. We ran two-fold cross-validation experiments on six benchmark data sets to compare the fuzzy and nonfuzzy combination methods. On the \"fuzzy side\" we used the fuzzy integral and the decision templates with different similarity measures. On the \"nonfuzzy side\" we tried the weighted majority vote as well as simple combiners such as the majority vote, minimum, maximum, average, product, and the Naive-Bayes combination. In our experiments, the fuzzy combination methods performed consistently better than the nonfuzzy methods. The weighted majority vote showed a stable performance, though slightly inferior to the performance of the fuzzy combiners.",
"Most classical template-based frontal face recognition techniques assume that multiple images per person are available for training, while in many real-world applications only one training image per person is available and the test images may be partially occluded or may vary in expressions. This paper addresses those problems by extending a previous local probabilistic approach presented by Martinez, using the self-organizing map (SOM) instead of a mixture of Gaussians to learn the subspace that represented each individual. Based on the localization of the training images, two strategies of learning the SOM topological space are proposed, namely to train a single SOM map for all the samples and to train a separate SOM map for each class, respectively. A soft k nearest neighbor (soft k-NN) ensemble method, which can effectively exploit the outputs of the SOM topological space, is also proposed to identify the unlabeled subjects. Experiments show that the proposed method exhibits high robust performance against the partial occlusions and variant expressions.",
"Multilayer feedforward networks trained by minimizing the mean squared error and by using a one-of-c teaching function yield network outputs that estimate posterior class probabilities. This provides a sound basis for combining the results from multiple networks to get more accurate classification. This paper presents a method for combining multiple networks based on fuzzy logic, especially the fuzzy integral. This method non-linearly combines objective evidence, in the form of a network output, with subjective evaluation of the importance of the individual neural networks. The experimental results with the recognition problem of on-line handwriting characters show that the performance of individual networks could be improved significantly."
]
}
|
1204.0171
|
1838161406
|
In this study, a new Stacked Generalization technique called Fuzzy Stacked Generalization (FSG) is proposed to minimize the difference between N-sample and large-sample classification error of the Nearest Neighbor classifier. The proposed FSG employs a new hierarchical distance learning strategy to minimize the error difference. For this purpose, we first construct an ensemble of base-layer fuzzy k-Nearest Neighbor (k-NN) classifiers, each of which receives a different feature set extracted from the same sample set. The fuzzy membership values computed at the decision space of each fuzzy k-NN classifier are concatenated to form the feature vectors of a fusion space. Finally, the feature vectors are fed to a meta-layer classifier to learn the degree of accuracy of the decisions of the base-layer classifiers for meta-layer classification. Rather than the power of the individual base-layer classifiers, diversity and cooperation of the classifiers become an important issue to improve the overall performance of the proposed FSG. A weak base-layer classifier may boost the overall performance more than a strong classifier, if it is capable of recognizing the samples, which are not recognized by the rest of the classifiers, in its own feature space. The experiments explore the type of the collaboration among the individual classifiers required for an improved performance of the suggested architecture. Experiments on multiple-feature real-world datasets show that the proposed FSG performs better than state-of-the-art ensemble learning algorithms such as Adaboost, Random Subspace and Rotation Forest. On the other hand, comparable performances are observed in the experiments on single-feature multi-attribute datasets.
|
In this study, most of the above mentioned intractable problems are avoided by employing a homogeneous architecture which consists of the same type of base-layer and meta-layer classifiers in a new stacked generalization architecture called Fuzzy Stacked Generalization (FSG). This architecture allows us to concatenate the output decision spaces of the base-layer classifiers, which represent consistent information about the samples. Furthermore, we model the linear combination, or feature space aggregation, method as a feature space mapping from the base-layer output feature space (i.e. decision space) to the meta-layer input feature space (i.e. fusion space). In the proposed FSG, the classification rules of base-layer classifiers are considered as feature mappings from classifier input feature spaces to output decision spaces. In order to control these mappings for tracing the transformations of the feature vectors of samples through the layers of the architecture, homogeneous fuzzy @math -NN classifiers are used and the behavior of fuzzy decision rules is investigated in both the base-layer and the meta-layer. Moreover, the employment of fuzzy @math -NN classifiers enables us to obtain information about the uncertainty of the classifier decisions and the belongingness of the samples to classes @cite_18 @cite_7 .
|
{
"cite_N": [
"@cite_18",
"@cite_7"
],
"mid": [
"2095727900",
"2104869141"
],
"abstract": [
"A fuzzy neural network model based on the multilayer perceptron, using the backpropagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy uncertain patterns with appropriate weights being assigned to the backpropagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and other related models.",
"Uncertainty arises in classification problems when the input pattern is not perfect or measurement error is unavoidable. In many applications, it would be beneficial to obtain an estimate of the uncertainty associated with a new observation and its membership within a particular class. Although statistical classification techniques base decision boundaries according to the probability distributions of the patterns belonging to each class, they are poor at supplying uncertainty information for new observations. Previous research has documented a multiarchitecture, monotonic function neural network model for the representation of uncertainty associated with a new observation for two-class classification. This paper proposes a modification to the monotonic function model to estimate the uncertainty associated with a new observation for multiclass classification. The model, therefore, overcomes a limitation of traditional classifiers that base decisions on sharp classification boundaries. As such, it is believed that this method will have advantages for applications such as biometric recognition in which the estimation of classification uncertainty is an important issue. This approach is based on the transformation of the input pattern vector relative to each classification class. Separate, monotonic, single-output neural networks are then used to represent the \"degree-of-similarity\" between each input pattern vector and each class. An algorithm for the implementation of this approach is proposed and tested with publicly available face-recognition data sets. The results indicate that the suggested approach provides similar classification performance to conventional principle component analysis (PCA) and linear discriminant analysis (LDA) techniques for multiclass pattern recognition problems as well as providing uncertainty information caused by misclassification"
]
}
|
1204.0171
|
1838161406
|
In this study, a new Stacked Generalization technique called Fuzzy Stacked Generalization (FSG) is proposed to minimize the difference between N-sample and large-sample classification error of the Nearest Neighbor classifier. The proposed FSG employs a new hierarchical distance learning strategy to minimize the error difference. For this purpose, we first construct an ensemble of base-layer fuzzy k-Nearest Neighbor (k-NN) classifiers, each of which receives a different feature set extracted from the same sample set. The fuzzy membership values computed at the decision space of each fuzzy k-NN classifier are concatenated to form the feature vectors of a fusion space. Finally, the feature vectors are fed to a meta-layer classifier to learn the degree of accuracy of the decisions of the base-layer classifiers for meta-layer classification. Rather than the power of the individual base-layer classifiers, diversity and cooperation of the classifiers become an important issue to improve the overall performance of the proposed FSG. A weak base-layer classifier may boost the overall performance more than a strong classifier, if it is capable of recognizing the samples, which are not recognized by the rest of the classifiers, in its own feature space. The experiments explore the type of the collaboration among the individual classifiers required for an improved performance of the suggested architecture. Experiments on multiple-feature real-world datasets show that the proposed FSG performs better than state-of-the-art ensemble learning algorithms such as Adaboost, Random Subspace and Rotation Forest. On the other hand, comparable performances are observed in the experiments on single-feature multi-attribute datasets.
|
We analyze the classification error of a nearest neighbor classifier in two parts, namely @math ) @math -sample error, which is the error of a classifier employed on a training dataset of @math samples, and @math ) large-sample error, which is the error of a classifier employed on a training dataset with a large number of samples such that @math . A distance learning approach proposed by Short and Fukunaga @cite_15 is used in a hierarchical FSG architecture from a Decision Fusion perspective for the minimization of the error difference between @math -sample and large-sample error. In the literature, distance learning methods have been employed using prototype @cite_29 @cite_24 @cite_46 @cite_40 and feature selection @cite_37 or weighting @cite_31 methods by computing the weights associated with samples and feature vectors, respectively. The computed weights are used to linearly transform the feature spaces of classifiers into more discriminative feature spaces @cite_12 @cite_43 @cite_20 in order to decrease the large-sample classification error of the classifiers @cite_8 . A detailed literature review of prototype selection and distance learning methods for nearest neighbor classification is given in @cite_40 .
|
{
"cite_N": [
"@cite_37",
"@cite_31",
"@cite_8",
"@cite_46",
"@cite_29",
"@cite_24",
"@cite_43",
"@cite_40",
"@cite_15",
"@cite_20",
"@cite_12"
],
"mid": [
"2149992809",
"2004811928",
"2103705607",
"2099085654",
"2097757574",
"2142339769",
"2106053110",
"2151537585",
"2088779313",
"2003677307",
"2144935315"
],
"abstract": [
"The distance metric is the corner stone of nearest neighbor (NN)-based methods, and therefore, of nearest prototype (NP) algorithms. That is because they classify depending on the similarity of the data. When the data is characterized by a set of features which may contribute to the classification task in different levels, feature weighting or selection is required, sometimes in a local sense. However, local weighting is typically restricted to NN approaches. In this paper, we introduce local feature weighting (LFW) in NP classification. LFW provides each prototype its own weight vector, opposite to typical global weighting methods found in the NP literature, where all the prototypes share the same one. Providing each prototype its own weight vector has a novel effect in the borders of the Voronoi regions generated: They become nonlinear. We have integrated LFW with a previously developed evolutionary nearest prototype classifier (ENPC). The experiments performed both in artificial and real data sets demonstrate that the resulting algorithm that we call LFW in nearest prototype classification (LFW-NPC) avoids overfitting on training data in domains where the features may have different contribution to the classification task in different areas of the feature space. This generalization capability is also reflected in automatically obtaining an accurate and reduced set of prototypes.",
"Cooperative coevolution is a successful trend of evolutionary computation which allows us to define partitions of the domain of a given problem, or to integrate several related techniques into one, by the use of evolutionary algorithms. It is possible to apply it to the development of advanced classification methods, which integrate several machine learning techniques into a single proposal. A novel approach integrating instance selection, instance weighting, and feature weighting into the framework of a coevolutionary model is presented in this paper. We compare it with a wide range of evolutionary and nonevolutionary related methods, in order to show the benefits of the employment of coevolution to apply the techniques considered simultaneously. The results obtained, contrasted through nonparametric statistical tests, show that our proposal outperforms other methods in the comparison, thus becoming a suitable tool in the task of enhancing the nearest neighbor classifier.",
"In order to optimize the accuracy of the nearest-neighbor classification rule, a weighted distance is proposed, along with algorithms to automatically learn the corresponding weights. These weights may be specific for each class and feature, for each individual prototype, or for both. The learning algorithms are derived by (approximately) minimizing the leaving-one-out classification error of the given training set. The proposed approach is assessed through a series of experiments with UCI STATLOG corpora, as well as with a more specific task of text classification which entails very sparse data representation and huge dimensionality. In all these experiments, the proposed approach shows a uniformly good behavior, with results comparable to or better than state-of-the-art results published with the same data so far",
"Pattern selection methods have been traditionally developed with a dependency on a specific classifier. In contrast, this paper presents a method that selects critical patterns deemed to carry essential information applicable to train those types of classifiers which require spatial information of the training data set. Critical patterns include those edge patterns that define the boundary and those border patterns that separate classes. The proposed method selects patterns from a new perspective, primarily based on their location in input space. It determines class edge patterns with the assistance of the approximated tangent hyperplane of a class surface. It also identifies border patterns between classes using local probability. The proposed method is evaluated on benchmark problems using popular classifiers, including multilayer perceptrons, radial basis functions, support vector machines, and nearest neighbors. The proposed approach is also compared with four state-of-the-art approaches and it is shown to provide similar but more consistent accuracy from a reduced data set. Experimental results demonstrate that it selects patterns sufficient to represent class boundary and to preserve the decision surface.",
"This paper presents a relational framework for studying properties of labeled data points related to proximity and labeling information in order to improve the performance of the 1NN rule. Specifically, the class conditional nearest neighbor (ccnn) relation over pairs of points in a labeled training set is introduced. For a given class label c, this relation associates to each point a its nearest neighbor computed among only those points with class label c (excluded a). A characterization of ccnn in terms of two graphs is given. These graphs are used for defining a novel scoring function over instances by means of an information-theoretic divergence measure applied to the degree distributions of these graphs. The scoring function is employed to develop an effective large margin instance selection method, which is empirically demonstrated to improve storage and accuracy performance of the 1NN rule on artificial and real-life data sets.",
"In supervised learning, a training set consisting of labeled instances is used by a learning algorithm for generating a model (classifier) that is subsequently employed for deciding the class label of new instances (for generalization). Characteristics of the training set, such as presence of noisy instances and size, influence the learning algorithm and affect generalization performance. This paper introduces a new network-based representation of a training set, called hit miss network (HMN), which provides a compact description of the nearest neighbor relation over pairs of instances from each pair of classes. We show that structural properties of HMN's correspond to properties of training points related to the one nearest neighbor (1-NN) decision rule, such as being border or central point. This motivates us to use HMN's for improving the performance of a 1-NN classifier by removing instances from the training set (instance selection). We introduce three new HMN-based algorithms for instance selection: HMN-C, which removes instances without affecting accuracy of 1-NN on the original training set; HMN-E, based on a more aggressive storage reduction; and HMN-EI, which applies HMN-E iteratively. Their performance is assessed on 22 data sets with different characteristics, such as input dimension, cardinality, class balance, number of classes, noise content, and presence of redundant variables. Results of experiments on these data sets show that accuracy of 1-NN classifier increases significantly when HMN-EI is applied. Comparison with state-of-the-art editing algorithms for instance selection on these data sets indicates best generalization performance of HMN-EI and no significant difference in storage requirements. In general, these results indicate that HMN's provide a powerful graph-based representation of a training set, which can be successfully applied for performing noise and redundance reduction in instance-based learning.",
"The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.",
"The nearest neighbor classifier is one of the most used and well-known techniques for performing recognition tasks. It has also demonstrated itself to be one of the most useful algorithms in data mining in spite of its simplicity. However, the nearest neighbor classifier suffers from several drawbacks such as high storage requirements, low efficiency in classification response, and low noise tolerance. These weaknesses have been the subject of study for many researchers and many solutions have been proposed. Among them, one of the most promising solutions consists of reducing the data used for establishing a classification rule (training data) by means of selecting relevant prototypes. Many prototype selection methods exist in the literature and the research in this area is still advancing. Different properties could be observed in the definition of them, but no formal categorization has been established yet. This paper provides a survey of the prototype selection methods proposed in the literature from a theoretical and empirical point of view. Considering a theoretical point of view, we propose a taxonomy based on the main characteristics presented in prototype selection and we analyze their advantages and drawbacks. Empirically, we conduct an experimental study involving different sizes of data sets for measuring their performance in terms of accuracy, reduction capabilities, and runtime. The results obtained by all the methods studied have been verified by nonparametric statistical tests. Several remarks, guidelines, and recommendations are made for the use of prototype selection for nearest neighbor classification.",
"A local distance measure is shown to optimize the performance of the nearest neighbor two-class classifier for a finite number of samples. The difference between the finite sample error and the asymptotic error is used as the criterion of improvement. This new distance measure is compared to the well-known Euclidean distance. An algorithm for practical implementation is introduced. This algorithm is shown to be computationally competitive with the present nearest neighbor procedures and is illustrated experimentally. A closed form for the corresponding second-order moment of this criterion is found. Finally, the above results are extended to",
"We describe and analyze an online algorithm for supervised learning of pseudo-metrics. The algorithm receives pairs of instances and predicts their similarity according to a pseudo-metric. The pseudo-metrics we use are quadratic forms parameterized by positive semi-definite matrices. The core of the algorithm is an update rule that is based on successive projections onto the positive semi-definite cone and onto half-space constraints imposed by the examples. We describe an efficient procedure for performing these projections, derive a worst case mistake bound on the similarity predictions, and discuss a dual version of the algorithm in which it is simple to incorporate kernel operators. The online algorithm also serves as a building block for deriving a large-margin batch algorithm. We demonstrate the merits of the proposed approach by conducting experiments on MNIST dataset and on document filtering.",
"In this paper we propose a novel method for learning a Mahalanobis distance measure to be used in the KNN classification algorithm. The algorithm directly maximizes a stochastic variant of the leave-one-out KNN score on the training set. It can also learn a low-dimensional linear embedding of labeled data that can be used for data visualization and fast classification. Unlike other methods, our classification model is non-parametric, making no assumptions about the shape of the class distributions or the boundaries between them. The performance of the method is demonstrated on several data sets, both for metric learning and linear dimensionality reduction."
]
}
|
1204.0171
|
1838161406
|
In this study, a new Stacked Generalization technique called Fuzzy Stacked Generalization (FSG) is proposed to minimize the difference between N-sample and large-sample classification error of the Nearest Neighbor classifier. The proposed FSG employs a new hierarchical distance learning strategy to minimize the error difference. For this purpose, we first construct an ensemble of base-layer fuzzy k-Nearest Neighbor (k-NN) classifiers, each of which receives a different feature set extracted from the same sample set. The fuzzy membership values computed at the decision space of each fuzzy k-NN classifier are concatenated to form the feature vectors of a fusion space. Finally, the feature vectors are fed to a meta-layer classifier to learn the degree of accuracy of the decisions of the base-layer classifiers for meta-layer classification. Rather than the power of the individual base-layer classifiers, diversity and cooperation of the classifiers become an important issue to improve the overall performance of the proposed FSG. A weak base-layer classifier may boost the overall performance more than a strong classifier, if it is capable of recognizing the samples, which are not recognized by the rest of the classifiers, in its own feature space. The experiments explore the type of the collaboration among the individual classifiers required for an improved performance of the suggested architecture. Experiments on multiple-feature real-world datasets show that the proposed FSG performs better than state-of-the-art ensemble learning algorithms such as Adaboost, Random Subspace and Rotation Forest. On the other hand, comparable performances are observed in the experiments on single-feature multi-attribute datasets.
|
There are three main differences between our proposed hierarchical distance learning method and the methods introduced in the literature @cite_29 @cite_24 @cite_46 @cite_40 @cite_37 @cite_31 @cite_12 @cite_43 @cite_20 @cite_8 : The proposed method is used for the minimization of the error difference between @math -sample and large-sample error, while the aforementioned methods @cite_29 @cite_24 @cite_46 @cite_40 @cite_37 @cite_31 @cite_12 @cite_43 @cite_20 @cite_8 consider the minimization of the large-sample error alone. We employ a generative feature space mapping by computing the class posterior probabilities of the samples in the decision spaces and use the posterior probability vectors as feature vectors in the fusion spaces. On the other hand, the methods given in the literature @cite_29 @cite_24 @cite_46 @cite_40 @cite_37 @cite_31 @cite_12 @cite_43 @cite_20 @cite_8 use discriminative approaches that merely transform the input feature spaces into more discriminative feature spaces.
|
{
"cite_N": [
"@cite_37",
"@cite_31",
"@cite_8",
"@cite_29",
"@cite_24",
"@cite_43",
"@cite_40",
"@cite_46",
"@cite_20",
"@cite_12"
],
"mid": [
"2149992809",
"2004811928",
"2103705607",
"2097757574",
"2142339769",
"2106053110",
"2151537585",
"2099085654",
"2003677307",
"2144935315"
],
"abstract": [
"The distance metric is the corner stone of nearest neighbor (NN)-based methods, and therefore, of nearest prototype (NP) algorithms. That is because they classify depending on the similarity of the data. When the data is characterized by a set of features which may contribute to the classification task in different levels, feature weighting or selection is required, sometimes in a local sense. However, local weighting is typically restricted to NN approaches. In this paper, we introduce local feature weighting (LFW) in NP classification. LFW provides each prototype its own weight vector, opposite to typical global weighting methods found in the NP literature, where all the prototypes share the same one. Providing each prototype its own weight vector has a novel effect in the borders of the Voronoi regions generated: They become nonlinear. We have integrated LFW with a previously developed evolutionary nearest prototype classifier (ENPC). The experiments performed both in artificial and real data sets demonstrate that the resulting algorithm that we call LFW in nearest prototype classification (LFW-NPC) avoids overfitting on training data in domains where the features may have different contribution to the classification task in different areas of the feature space. This generalization capability is also reflected in automatically obtaining an accurate and reduced set of prototypes.",
"Cooperative coevolution is a successful trend of evolutionary computation which allows us to define partitions of the domain of a given problem, or to integrate several related techniques into one, by the use of evolutionary algorithms. It is possible to apply it to the development of advanced classification methods, which integrate several machine learning techniques into a single proposal. A novel approach integrating instance selection, instance weighting, and feature weighting into the framework of a coevolutionary model is presented in this paper. We compare it with a wide range of evolutionary and nonevolutionary related methods, in order to show the benefits of the employment of coevolution to apply the techniques considered simultaneously. The results obtained, contrasted through nonparametric statistical tests, show that our proposal outperforms other methods in the comparison, thus becoming a suitable tool in the task of enhancing the nearest neighbor classifier.",
"In order to optimize the accuracy of the nearest-neighbor classification rule, a weighted distance is proposed, along with algorithms to automatically learn the corresponding weights. These weights may be specific for each class and feature, for each individual prototype, or for both. The learning algorithms are derived by (approximately) minimizing the leaving-one-out classification error of the given training set. The proposed approach is assessed through a series of experiments with UCI STATLOG corpora, as well as with a more specific task of text classification which entails very sparse data representation and huge dimensionality. In all these experiments, the proposed approach shows a uniformly good behavior, with results comparable to or better than state-of-the-art results published with the same data so far",
"This paper presents a relational framework for studying properties of labeled data points related to proximity and labeling information in order to improve the performance of the 1NN rule. Specifically, the class conditional nearest neighbor (ccnn) relation over pairs of points in a labeled training set is introduced. For a given class label c, this relation associates to each point a its nearest neighbor computed among only those points with class label c (excluded a). A characterization of ccnn in terms of two graphs is given. These graphs are used for defining a novel scoring function over instances by means of an information-theoretic divergence measure applied to the degree distributions of these graphs. The scoring function is employed to develop an effective large margin instance selection method, which is empirically demonstrated to improve storage and accuracy performance of the 1NN rule on artificial and real-life data sets.",
"In supervised learning, a training set consisting of labeled instances is used by a learning algorithm for generating a model (classifier) that is subsequently employed for deciding the class label of new instances (for generalization). Characteristics of the training set, such as presence of noisy instances and size, influence the learning algorithm and affect generalization performance. This paper introduces a new network-based representation of a training set, called hit miss network (HMN), which provides a compact description of the nearest neighbor relation over pairs of instances from each pair of classes. We show that structural properties of HMN's correspond to properties of training points related to the one nearest neighbor (1-NN) decision rule, such as being border or central point. This motivates us to use HMN's for improving the performance of a 1-NN, classifier by removing instances from the training set (instance selection). We introduce three new HMN-based algorithms for instance selection. HMN-C, which removes instances without affecting accuracy of 1-NN on the original training set, HMN-E, based on a more aggressive storage reduction, and HMN-EI, which applies iteratively HMN-E. Their performance is assessed on 22 data sets with different characteristics, such as input dimension, cardinality, class balance, number of classes, noise content, and presence of redundant variables. Results of experiments on these data sets show that accuracy of 1-NN classifier increases significantly when HMN-EI is applied. Comparison with state-of-the-art editing algorithms for instance selection on these data sets indicates best generalization performance of HMN-EI and no significant difference in storage requirements. In general, these results indicate that HMN's provide a powerful graph-based representation of a training set, which can be successfully applied for performing noise and redundance reduction in instance-based learning.",
"The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.",
"The nearest neighbor classifier is one of the most used and well-known techniques for performing recognition tasks. It has also demonstrated itself to be one of the most useful algorithms in data mining in spite of its simplicity. However, the nearest neighbor classifier suffers from several drawbacks such as high storage requirements, low efficiency in classification response, and low noise tolerance. These weaknesses have been the subject of study for many researchers and many solutions have been proposed. Among them, one of the most promising solutions consists of reducing the data used for establishing a classification rule (training data) by means of selecting relevant prototypes. Many prototype selection methods exist in the literature and the research in this area is still advancing. Different properties could be observed in the definition of them, but no formal categorization has been established yet. This paper provides a survey of the prototype selection methods proposed in the literature from a theoretical and empirical point of view. Considering a theoretical point of view, we propose a taxonomy based on the main characteristics presented in prototype selection and we analyze their advantages and drawbacks. Empirically, we conduct an experimental study involving different sizes of data sets for measuring their performance in terms of accuracy, reduction capabilities, and runtime. The results obtained by all the methods studied have been verified by nonparametric statistical tests. Several remarks, guidelines, and recommendations are made for the use of prototype selection for nearest neighbor classification.",
"Pattern selection methods have been traditionally developed with a dependency on a specific classifier. In contrast, this paper presents a method that selects critical patterns deemed to carry essential information applicable to train those types of classifiers which require spatial information of the training data set. Critical patterns include those edge patterns that define the boundary and those border patterns that separate classes. The proposed method selects patterns from a new perspective, primarily based on their location in input space. It determines class edge patterns with the assistance of the approximated tangent hyperplane of a class surface. It also identifies border patterns between classes using local probability. The proposed method is evaluated on benchmark problems using popular classifiers, including multilayer perceptrons, radial basis functions, support vector machines, and nearest neighbors. The proposed approach is also compared with four state-of-the-art approaches and it is shown to provide similar but more consistent accuracy from a reduced data set. Experimental results demonstrate that it selects patterns sufficient to represent class boundary and to preserve the decision surface.",
"We describe and analyze an online algorithm for supervised learning of pseudo-metrics. The algorithm receives pairs of instances and predicts their similarity according to a pseudo-metric. The pseudo-metrics we use are quadratic forms parameterized by positive semi-definite matrices. The core of the algorithm is an update rule that is based on successive projections onto the positive semi-definite cone and onto half-space constraints imposed by the examples. We describe an efficient procedure for performing these projections, derive a worst case mistake bound on the similarity predictions, and discuss a dual version of the algorithm in which it is simple to incorporate kernel operators. The online algorithm also serves as a building block for deriving a large-margin batch algorithm. We demonstrate the merits of the proposed approach by conducting experiments on MNIST dataset and on document filtering.",
"In this paper we propose a novel method for learning a Mahalanobis distance measure to be used in the KNN classification algorithm. The algorithm directly maximizes a stochastic variant of the leave-one-out KNN score on the training set. It can also learn a low-dimensional linear embedding of labeled data that can be used for data visualization and fast classification. Unlike other methods, our classification model is non-parametric, making no assumptions about the shape of the class distributions or the boundaries between them. The performance of the method is demonstrated on several data sets, both for metric learning and linear dimensionality reduction."
]
}
|
1204.0171
|
1838161406
|
In this study, a new Stacked Generalization technique called Fuzzy Stacked Generalization (FSG) is proposed to minimize the difference between N-sample and large-sample classification error of the Nearest Neighbor classifier. The proposed FSG employs a new hierarchical distance learning strategy to minimize the error difference. For this purpose, we first construct an ensemble of base-layer fuzzy k-Nearest Neighbor (k-NN) classifiers, each of which receives a different feature set extracted from the same sample set. The fuzzy membership values computed at the decision space of each fuzzy k-NN classifier are concatenated to form the feature vectors of a fusion space. Finally, the feature vectors are fed to a meta-layer classifier to learn the degree of accuracy of the decisions of the base-layer classifiers for meta-layer classification. Rather than the power of the individual base-layer classifiers, diversity and cooperation of the classifiers become an important issue to improve the overall performance of the proposed FSG. A weak base-layer classifier may boost the overall performance more than a strong classifier, if it is capable of recognizing the samples, which are not recognized by the rest of the classifiers, in its own feature space. The experiments explore the type of the collaboration among the individual classifiers required for an improved performance of the suggested architecture. Experiments on multiple-feature real-world datasets show that the proposed FSG performs better than state-of-the-art ensemble learning algorithms such as Adaboost, Random Subspace and Rotation Forest. On the other hand, comparable performances are observed in the experiments on single-feature multi-attribute datasets.
|
The aforementioned methods, including the method of Short and Fukunaga @cite_15 , employ distance learning in a single classifier. On the other hand, we employ a hierarchical ensemble learning approach for distance learning. Therefore, different feature space mappings can be employed in different classifiers of the ensemble, which gives us more control over the feature space transformations than a single feature transformation in a single classifier.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2088779313"
],
"abstract": [
"A local distance measure is shown to optimize the performance of the nearest neighbor two-class classifier for a finite number of samples. The difference between the finite sample error and the asymptotic error is used as the criterion of improvement. This new distance measure is compared to the well-known Euclidean distance. An algorithm for practical implementation is introduced. This algorithm is shown to be computationally competitive with the present nearest neighbor procedures and is illustrated experimentally. A closed form for the corresponding second-order moment of this criterion is found. Finally, the above results are extended to"
]
}
|
1204.0171
|
1838161406
|
In this study, a new Stacked Generalization technique called Fuzzy Stacked Generalization (FSG) is proposed to minimize the difference between N-sample and large-sample classification error of the Nearest Neighbor classifier. The proposed FSG employs a new hierarchical distance learning strategy to minimize the error difference. For this purpose, we first construct an ensemble of base-layer fuzzy k-Nearest Neighbor (k-NN) classifiers, each of which receives a different feature set extracted from the same sample set. The fuzzy membership values computed at the decision space of each fuzzy k-NN classifier are concatenated to form the feature vectors of a fusion space. Finally, the feature vectors are fed to a meta-layer classifier to learn the degree of accuracy of the decisions of the base-layer classifiers for meta-layer classification. Rather than the power of the individual base-layer classifiers, diversity and cooperation of the classifiers become an important issue to improve the overall performance of the proposed FSG. A weak base-layer classifier may boost the overall performance more than a strong classifier, if it is capable of recognizing the samples, which are not recognized by the rest of the classifiers, in its own feature space. The experiments explore the type of the collaboration among the individual classifiers required for an improved performance of the suggested architecture. Experiments on multiple-feature real-world datasets show that the proposed FSG performs better than state-of-the-art ensemble learning algorithms such as Adaboost, Random Subspace and Rotation Forest. On the other hand, comparable performances are observed in the experiments on single-feature multi-attribute datasets.
|
In Section , we define the problem of minimizing the error difference between @math -sample and large-sample error in a single classifier. Then, we introduce the distance learning approach for an ensemble of classifiers, treating the distance learning problem as a decision space design problem, in Section . The employment of the proposed hierarchical distance learning approach in the FSG and its algorithmic description are given in Section . We discuss the expertise of base-layer classifiers, the dimensionality problems of the feature spaces in the FSG, and its computational complexity in Section . In order to compare the proposed FSG with state-of-the-art ensemble learning algorithms, we have implemented Adaboost, Random Subspace and Rotation Forest in the experimental analysis in Section . Moreover, we have used the same multi-attribute benchmark datasets with the same data splitting given in @cite_29 @cite_24 to compare the performance of the proposed hierarchical distance learning approach with that of the aforementioned distance learning methods. Since the classification performances of these distance learning methods are analyzed in detail in @cite_29 @cite_24 , we do not reproduce these results in Section and refer the reader to @cite_29 @cite_24 .
|
{
"cite_N": [
"@cite_24",
"@cite_29"
],
"mid": [
"2142339769",
"2097757574"
],
"abstract": [
"In supervised learning, a training set consisting of labeled instances is used by a learning algorithm for generating a model (classifier) that is subsequently employed for deciding the class label of new instances (for generalization). Characteristics of the training set, such as presence of noisy instances and size, influence the learning algorithm and affect generalization performance. This paper introduces a new network-based representation of a training set, called hit miss network (HMN), which provides a compact description of the nearest neighbor relation over pairs of instances from each pair of classes. We show that structural properties of HMN's correspond to properties of training points related to the one nearest neighbor (1-NN) decision rule, such as being border or central point. This motivates us to use HMN's for improving the performance of a 1-NN, classifier by removing instances from the training set (instance selection). We introduce three new HMN-based algorithms for instance selection. HMN-C, which removes instances without affecting accuracy of 1-NN on the original training set, HMN-E, based on a more aggressive storage reduction, and HMN-EI, which applies iteratively HMN-E. Their performance is assessed on 22 data sets with different characteristics, such as input dimension, cardinality, class balance, number of classes, noise content, and presence of redundant variables. Results of experiments on these data sets show that accuracy of 1-NN classifier increases significantly when HMN-EI is applied. Comparison with state-of-the-art editing algorithms for instance selection on these data sets indicates best generalization performance of HMN-EI and no significant difference in storage requirements. In general, these results indicate that HMN's provide a powerful graph-based representation of a training set, which can be successfully applied for performing noise and redundance reduction in instance-based learning.",
"This paper presents a relational framework for studying properties of labeled data points related to proximity and labeling information in order to improve the performance of the 1NN rule. Specifically, the class conditional nearest neighbor (ccnn) relation over pairs of points in a labeled training set is introduced. For a given class label c, this relation associates to each point a its nearest neighbor computed among only those points with class label c (excluded a). A characterization of ccnn in terms of two graphs is given. These graphs are used for defining a novel scoring function over instances by means of an information-theoretic divergence measure applied to the degree distributions of these graphs. The scoring function is employed to develop an effective large margin instance selection method, which is empirically demonstrated to improve storage and accuracy performance of the 1NN rule on artificial and real-life data sets."
]
}
|
1204.0156
|
2953315443
|
The increasing popularity of Twitter and other microblogs makes improved trustworthiness and relevance assessment of microblogs ever more important. We propose a method of ranking tweets considering trustworthiness and content-based popularity. The analysis of trustworthiness and popularity exploits the implicit relationships between the tweets. We model the microblog ecosystem as a three-layer graph consisting of: (i) users, (ii) tweets, and (iii) web pages. We propose to derive trust and popularity scores of entities in these three layers, and propagate the scores to tweets considering the inter-layer relations. Our preliminary evaluations show improvement in precision and trustworthiness over the baseline methods, with acceptable computation timings.
|
Ranking of tweets considering only relevance has been researched extensively @cite_6 @cite_1 @cite_5 . Unlike our paper, these ranking approaches do not consider trustworthiness.
|
{
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_6"
],
"mid": [
"2083205809",
"1561650654",
""
],
"abstract": [
"Ranking microblogs, such as tweets, as search results for a query is challenging, among other things because of the sheer amount of microblogs that are being generated in real time, as well as the short length of each individual microblog. In this paper, we describe several new strategies for ranking microblogs in a real-time search engine. Evaluating these ranking strategies is non-trivial due to the lack of a publicly available ground truth validation dataset. We have therefore developed a framework to obtain such validation data, as well as evaluation measures to assess the accuracy of the proposed ranking strategies. Our experiments demonstrate that it is beneficial for microblog search engines to take into account social network properties of the authors of microblogs in addition to properties of the microblog itself.",
"Twitter, as one of the most popular micro-blogging services, provides large quantities of fresh information including real-time news, comments, conversation, pointless babble and advertisements. Twitter presents tweets in chronological order. Recently, Twitter introduced a new ranking strategy that considers popularity of tweets in terms of number of retweets. This ranking method, however, has not taken into account content relevance or the twitter account. Therefore a large amount of pointless tweets inevitably flood the relevant tweets. This paper proposes a new ranking strategy which uses not only the content relevance of a tweet, but also the account authority and tweet-specific features such as whether a URL link is included in the tweet. We employ learning to rank algorithms to determine the best set of features with a series of experiments. It is demonstrated that whether a tweet contains URL or not, length of tweet and account authority are the best conjunction.",
""
]
}
|
1204.0156
|
2953315443
|
The increasing popularity of Twitter and other microblogs makes improved trustworthiness and relevance assessment of microblogs ever more important. We propose a method of ranking tweets considering trustworthiness and content-based popularity. The analysis of trustworthiness and popularity exploits the implicit relationships between the tweets. We model the microblog ecosystem as a three-layer graph consisting of: (i) users, (ii) tweets, and (iii) web pages. We propose to derive trust and popularity scores of entities in these three layers, and propagate the scores to tweets considering the inter-layer relations. Our preliminary evaluations show improvement in precision and trustworthiness over the baseline methods, with acceptable computation timings.
|
Credibility analysis of Twitter stories has been attempted by Castillo et al. @cite_9 . That work tries to classify Twitter story threads as credible or not credible. Our problem is different, since we try to assess the credibility of individual tweets. As the feature space is much smaller for an individual tweet---compared to Twitter story threads---the problem becomes harder.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2084591134"
],
"abstract": [
"We analyze the information credibility of news propagated through Twitter, a popular microblogging service. Previous research has shown that most of the messages posted on Twitter are truthful, but the service is also used to spread misinformation and false rumors, often unintentionally. On this paper we focus on automatic methods for assessing the credibility of a given set of tweets. Specifically, we analyze microblog postings related to \"trending\" topics, and classify them as credible or not credible, based on features extracted from them. We use features from users' posting and re-posting (\"re-tweeting\") behavior, from the text of the posts, and from citations to external sources. We evaluate our methods using a significant number of human assessments about the credibility of items on a recent sample of Twitter postings. Our results shows that there are measurable differences in the way messages propagate, that can be used to classify them automatically as credible or not credible, with precision and recall in the range of 70 to 80 ."
]
}
|
1204.0156
|
2953315443
|
The increasing popularity of Twitter and other microblogs makes improved trustworthiness and relevance assessment of microblogs ever more important. We propose a method of ranking tweets considering trustworthiness and content-based popularity. The analysis of trustworthiness and popularity exploits the implicit relationships between the tweets. We model the microblog ecosystem as a three-layer graph consisting of: (i) users, (ii) tweets, and (iii) web pages. We propose to derive trust and popularity scores of entities in these three layers, and propagate the scores to tweets considering the inter-layer relations. Our preliminary evaluations show improvement in precision and trustworthiness over the baseline methods, with acceptable computation timings.
|
Finding relevant and trustworthy results based on implicit and explicit network structures has been considered previously @cite_10 @cite_13 . Real-time web search considering tweet ranking has been attempted @cite_7 @cite_11 . We take the inverse approach of using web page prestige to improve the ranking of the tweets. To the best of our knowledge, ranking of tweets considering trust and content popularity has not been attempted.
|
{
"cite_N": [
"@cite_13",
"@cite_10",
"@cite_7",
"@cite_11"
],
"mid": [
"2166763895",
"1979698145",
"",
"2167355161"
],
"abstract": [
"One immediate challenge in searching the deep web databases is source selection - i.e. selecting the most relevant web databases for answering a given query. The existing database selection methods (both text and relational) assess the source quality based on the query-similarity-based relevance assessment. When applied to the deep web these methods have two deficiencies. First is that the methods are agnostic to the correctness (trustworthiness) of the sources. Secondly, the query based relevance does not consider the importance of the results. These two considerations are essential for the open collections like the deep web. Since a number of sources provide answers to any query, we conjuncture that the agreements between these answers are likely to be helpful in assessing the importance and the trustworthiness of the sources. We compute the agreement between the sources as the agreement of the answers returned. While computing the agreement, we also measure and compensate for possible collusion between the sources. This adjusted agreement is modeled as a graph with sources at the vertices. On this agreement graph, a quality score of a source that we call SourceRank, is calculated as the stationary visit probability of a random walk. We evaluate SourceRank in multiple domains, including sources in Google Base, with sizes up to 675 sources. We demonstrate that the SourceRank tracks source corruption. Further, our relevance evaluations show that SourceRank improves precision by 22-60 over the Google Base and the other baseline methods. SourceRank has been implemented in a system called Factal.",
"Different information sources publish information with different degrees of correctness and originality. False information can often result in considerable damage. Hence, trustworthinessof information is an important issue in this datadriven world economy. Reputation of different agents in a network has been studied earlier in a variety of domains like e-commerce, social sciences, sensor networks, and P2P networks. Recently there has been work in the data mining community on performing trust analysis based on the data provided by multiple information providers for different objects, and such agents and their provided information about data objects form a multi-typed heterogeneous network. The trust analysis under such a framework is considered as heterogeneous network-based trust analysis. This paper will survey heterogeneous network-based trust analysis models and their applications. We would conclude with a summary and some thoughts on future research in the area.",
"",
"Realtime web search refers to the retrieval of very fresh content which is in high demand. An effective portal web search engine must support a variety of search needs, including realtime web search. However, supporting realtime web search introduces two challenges not encountered in non-realtime web search: quickly crawling relevant content and ranking documents with impoverished link and click information. In this paper, we advocate the use of realtime micro-blogging data for addressing both of these problems. We propose a method to use the micro-blogging data stream to detect fresh URLs. We also use micro-blogging data to compute novel and effective features for ranking fresh URLs. We demonstrate these methods improve effective of the portal web search engine for realtime web search."
]
}
|
1204.0447
|
2100374051
|
Not only the free web is victim to China’s excessive censorship, but also the Tor anonymity network: the Great Firewall of China prevents thousands of potential Tor users from accessing the network ...
|
Wilde was able to narrow down the suspected cause for active scanning to the cipher list sent by the Tor client inside the TLS client hello (the TLS client hello is sent by the client after a TCP connection has been established; details can be found in the Tor design paper @cite_29 ). This cipher list appears to be unique and only used by Tor. That gives the GFC the opportunity to easily identify Tor connections. Furthermore, Wilde noticed that active scanning is done at multiples of 15 minutes. The GFC launches several scanners to connect to the bridge at the next full 15-minute multiple when a Tor cipher list is detected. An analysis of the Tor debug logs yielded that Chinese scanners initiate a TLS connection, conduct a renegotiation and start building a Tor circuit once the TLS connection is set up. After the scan succeeds, the IP address together with the associated port (we now refer to this as the "IP:port tuple") of the freshly scanned bridge is blocked, resulting in Chinese users no longer being able to use the bridge.
|
{
"cite_N": [
"@cite_29"
],
"mid": [
"1655958391"
],
"abstract": [
"We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design by adding perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for location-hidden services via rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than 30 nodes. We close with a list of open problems in anonymous communication."
]
}
|
1204.0354
|
1966006953
|
Identifying the infection sources in a network, including the index cases that introduce a contagious disease into a population network, the servers that inject a computer virus into a computer network, or the individuals who started a rumor in a social network, plays a critical role in limiting the damage caused by the infection through timely quarantine of the sources. We consider the problem of estimating the infection sources and the infection regions (subsets of nodes infected by each source) in a network, based only on knowledge of which nodes are infected and their connections, and when the number of sources is unknown a priori. We derive estimators for the infection sources and their infection regions based on approximations of the infection sequences count. We prove that if there are at most two infection sources in a geometric tree, our estimator identifies the true source or sources with probability going to one as the number of infected nodes increases. When there are more than two infection sources, and when the maximum possible number of infection sources is known, we propose an algorithm with quadratic complexity to estimate the actual number and identities of the infection sources. Simulations on various kinds of networks, including tree networks, small-world networks and real world power grid networks, and tests on two real data sets are provided to verify the performance of our estimators.
|
In many applications, there may be more than one infection source in the network. For example, an infectious disease may be brought into a country through multiple individuals. Multiple individuals may collude in spreading a rumor or malicious piece of information in a social network. In this paper, we investigate the case where there may be multiple infection sources, and when the number of infection sources is unknown a priori. We also consider the problem of estimating the infection region of each source, and show that a direct application of the algorithm in @cite_30 performs significantly worse than our proposed algorithms if there are more than one infection sources. We also note that @cite_30 provides theoretical performance measures for several classes of tree networks, which we are unable to do here except for the class of geometric trees, because of the greater complexity of our proposed algorithms. Instead, we provide simulation results to verify the performance of our algorithms.
|
{
"cite_N": [
"@cite_30"
],
"mid": [
"2111772797"
],
"abstract": [
"We provide a systematic study of the problem of finding the source of a rumor in a network. We model rumor spreading in a network with the popular susceptible-infected (SI) model and then construct an estimator for the rumor source. This estimator is based upon a novel topological quantity which we term rumor centrality. We establish that this is a maximum likelihood (ML) estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has nontrivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops of the true source across different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding rumor sources in networks which are not tree-like."
]
}
|
1203.5415
|
1674037273
|
Recommender systems require their recommendation algorithms to be accurate, scalable and should handle very sparse training data which keep changing over time. Inspired by ant colony optimization, we propose a novel collaborative filtering scheme: Ant Collaborative Filtering that enjoys those favorable characteristics above mentioned. With the mechanism of pheromone transmission between users and items, our method can pinpoint most relative users and items even in face of the sparsity problem. By virtue of the evaporation of existing pheromone, we capture the evolution of user preference over time. Meanwhile, the computation complexity is comparatively small and the incremental update can be done online. We design three experiments on three typical recommender systems, namely movie recommendation, book recommendation and music recommendation, which cover both explicit and implicit rating data. The results show that the proposed algorithm is well suited for real-world recommendation scenarios which have a high throughput and are time sensitive.
|
The best-known bipartite graph algorithm in Information Retrieval may be HITS @cite_5 , proposed in 1998. It calculates the stationary state of both groups of nodes through mutual reinforcement on the bipartite graph. It is a typical random walk method on the bipartite graph. Similar works include @cite_24 and @cite_23 . @cite_23 tries to find the most relevant users using the user-item bipartite graph upon which recommendations are generated. Interestingly, @cite_4 links random walk methods and spectral-based methods. In fact, spectral methods are often implemented using random walk iterations.
|
{
"cite_N": [
"@cite_24",
"@cite_5",
"@cite_4",
"@cite_23"
],
"mid": [
"2160555926",
"1981202432",
"2171009857",
"1844760852"
],
"abstract": [
"Search engines can record which documents were clicked for which query, and use these query-document pairs as \"soft\" relevance judgments. However, compared to the true judgments, click logs give noisy and sparse relevance information. We apply a Markov random walk model to a large click log, producing a probabilistic ranking of documents for a given query. A key advantage of the model is its ability to retrieve relevant documents that have not yet been clicked for that query and rank those effectively. We conduct experiments on click logs from image search, comparing our (\"backward\") random walk model to a different (\"forward\") random walk, varying parameters such as walk length and self-transition probability. The most effective combination is a long backward walk with high self-transition probability.",
"The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have eective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their eectiveness in a variety of contexts on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of \" information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of pages\" that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristics for link-based analysis.",
"We present a new view of image segmentation by pairwise similarities. We interpret the similarities as edge flows in a Markov random walk and study the eigenvalues and eigenvectors of the walk's transition matrix. This interpretation shows that spectral methods for clustering and segmentation have a probabilistic foundation. In particular, we prove that the Normalized Cut method arises naturally from our framework. Finally, the framework provides a principled method for learning the similarity function as a combination of features.",
"We present a novel framework for studying recommendation algorithms in terms of the ‘jumps’ that they make to connect people to artifacts. This approach emphasizes reachability via an algorithm within the implicit graph structure underlying a recommender dataset and allows us to consider questions relating algorithmic parameters to properties of the datasets. For instance, given a particular algorithm ‘jump,’ what is the average path length from a person to an artifact? Or, what choices of minimum ratings and jumps maintain a connected graph? We illustrate the approach with a common jump called the ‘hammock’ using movie recommender datasets."
]
}
|
1203.5415
|
1674037273
|
Recommender systems require their recommendation algorithms to be accurate, scalable and should handle very sparse training data which keep changing over time. Inspired by ant colony optimization, we propose a novel collaborative filtering scheme: Ant Collaborative Filtering that enjoys those favorable characteristics above mentioned. With the mechanism of pheromone transmission between users and items, our method can pinpoint most relative users and items even in face of the sparsity problem. By virtue of the evaporation of existing pheromone, we capture the evolution of user preference over time. Meanwhile, the computation complexity is comparatively small and the incremental update can be done online. We design three experiments on three typical recommender systems, namely movie recommendation, book recommendation and music recommendation, which cover both explicit and implicit rating data. The results show that the proposed algorithm is well suited for real-world recommendation scenarios which have a high throughput and are time sensitive.
|
Another important technique in graph mining is Activation Spreading (AS), which models the relationships among the nodes in the graph through iterative propagation of activation values. @cite_12 surveyed several AS methods and compared them on a CF task. Their conclusion is that the Hopfield net algorithm outperforms the others on their book recommendation data set. @cite_21 applied the AS technique to rating-based CF and proposed a novel HITS-like CF algorithm: RSM. We will compare it with our proposed ACF algorithm in the experiments. In addition, ACF can also be viewed as an AS extension in recommender systems; the major difference is that we view the bipartite user-item relationship as a dynamic network and learn the training data incrementally.
|
{
"cite_N": [
"@cite_21",
"@cite_12"
],
"mid": [
"1877299635",
"2169757306"
],
"abstract": [
"While Spread Activation has shown its effectiveness in solving the problem of cold start and sparsity in collaborative recommendation, it will suffer a decay of performance (over activation) as the dataset grows denser. In this paper, we first introduce the concepts of Rating Similarity Matrix (RSM) and Rating Similarity Aggregation (RSA), based on which we then extend the existing spreading activation scheme to deal with both the binary (transaction) and the numeric ratings. After that, an iterative algorithm is proposed to learn RSM parameters from the observed ratings, which makes it automatically adaptive to the user similarity shown through their ratings on different items. Thus the similarity calculations tend to be more reasonable and effective. Finally, we test our method on the EachMovie dataset, the most typical benchmark for collaborative recommendation and show that our method succeeds in relieving the effect of over activation and outperforms the existing algorithms on both the sparse and dense dataset.",
"Recommender systems are being widely applied in many application settings to suggest products, services, and information items to potential consumers. Collaborative filtering, the most successful recommendation approach, makes recommendations based on past transactions and feedback from consumers sharing similar interests. A major problem limiting the usefulness of collaborative filtering is the sparsity problem, which refers to a situation in which transactional or feedback data is sparse and insufficient to identify similarities in consumer interests. In this article, we propose to deal with this sparsity problem by applying an associative retrieval framework and related spreading activation algorithms to explore transitive associations among consumers through their past transactions and feedback. Such transitive associations are a valuable source of information to help infer consumer interests and can be explored to deal with the sparsity problem. To evaluate the effectiveness of our approach, we have conducted an experimental study using a data set from an online bookstore. We experimented with three spreading activation algorithms including a constrained Leaky Capacitor algorithm, a branch-and-bound serial symbolic search algorithm, and a Hopfield net parallel relaxation search algorithm. These algorithms were compared with several collaborative filtering approaches that do not consider the transitive associations: a simple graph search approach, two variations of the user-based approach, and an item-based approach. Our experimental results indicate that spreading activation-based approaches significantly outperformed the other collaborative filtering methods as measured by recommendation precision, recall, the F-measure, and the rank score. 
We also observed the over-activation effect of the spreading activation approach, that is, incorporating transitive associations with past transactional data that is not sparse may \"dilute\" the data used to infer user preferences and lead to degradation in recommendation performance."
]
}
|
1203.5415
|
1674037273
|
Recommender systems require their recommendation algorithms to be accurate, scalable and should handle very sparse training data which keep changing over time. Inspired by ant colony optimization, we propose a novel collaborative filtering scheme: Ant Collaborative Filtering that enjoys those favorable characteristics above mentioned. With the mechanism of pheromone transmission between users and items, our method can pinpoint most relative users and items even in face of the sparsity problem. By virtue of the evaporation of existing pheromone, we capture the evolution of user preference over time. Meanwhile, the computation complexity is comparatively small and the incremental update can be done online. We design three experiments on three typical recommender systems, namely movie recommendation, book recommendation and music recommendation, which cover both explicit and implicit rating data. The results show that the proposed algorithm is well suited for real-world recommendation scenarios which have a high throughput and are time sensitive.
|
Model-based recommendation algorithms include probabilistic methods and matrix factorization methods. The probabilistic methods assume there are some latent topics that both the users and the items belong to. The crux is to learn these probabilities so that similar users and similar items fall into similar topics. Probabilistic Latent Semantic Analysis (PLSA) @cite_8 is one such algorithm. Another model-based algorithm, Non-negative Matrix Factorization (NMF) @cite_1 , belongs to the matrix factorization methods, which also have many extensions and applications in the CF domain. By restricting the rank of the factorized matrices used as the user profile and item profile, these methods are themselves well-known dimension reduction techniques and are considered the state-of-the-art among CF algorithms @cite_2 .
|
{
"cite_N": [
"@cite_2",
"@cite_1",
"@cite_8"
],
"mid": [
"2080320419",
"1479822238",
"2049455633"
],
"abstract": [
"Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"We use a low-dimensional linear model to describe the user rating matrix in a recommendation system. A non-negativity constraint is enforced in the linear model to ensure that each user’s rating profile can be represented as an additive linear combination of canonical coordinates. In order to learn such a constrained linear model from an incomplete rating matrix, we introduce two variations on Non-negative Matrix Factorization (NMF): one based on the Expectation-Maximization (EM) procedure and the other a Weighted Nonnegative Matrix Factorization (WNMF). Based on our experiments, the EM procedure converges well empirically and is less susceptible to the initial starting conditions than WNMF, but the latter is much more computationally efficient. Taking into account the advantages of both algorithms, a hybrid approach is presented and shown to be effective in real data sets. Overall, the NMF-based algorithms obtain the best prediction performance compared with other popular collaborative filtering algorithms in our experiments; the resulting linear models also contain useful patterns and features corresponding to user communities.",
"Collaborative filtering aims at learning predictive models of user preferences, interests or behavior from community data, that is, a database of available user preferences. In this article, we describe a new family of model-based algorithms designed for this task. These algorithms rely on a statistical modelling technique that introduces latent class variables in a mixture model setting to discover user communities and prototypical interest profiles. We investigate several variations to deal with discrete and continuous response variables as well as with different objective functions. The main advantages of this technique over standard memory-based methods are higher accuracy, constant time prediction, and an explicit and compact model representation. The latter can also be used to mine for user communitites. The experimental evaluation shows that substantial improvements in accucracy over existing methods and published results can be obtained."
]
}
|
1203.5155
|
1561775979
|
We consider a general class of Bayesian Games where each players utility depends on his type (possibly multidimensional) and on the strategy profile and where players' types are distributed independently. We show that if their full information version for any fixed instance of the type profile is a smooth game then the Price of Anarchy bound implied by the smoothness property, carries over to the Bayes-Nash Price of Anarchy. We show how some proofs from the literature (item bidding auctions, greedy auctions) can be cast as smoothness proofs or be simplified using smoothness. For first price item bidding with fractionally subadditive bidders we actually manage to improve by much the existing result Hassidim2011a from 4 to @math . This also shows a very interesting separation between first and second price item bidding since second price item bidding has PoA at least 2 even under complete information. For a larger class of Bayesian Games where the strategy space of a player also changes with his type we are able to show that a slightly stronger definition of smoothness also implies a Bayes-Nash PoA bound. We show how weighted congestion games actually satisfy this stronger definition of smoothness. This allows us to show that the inefficiency bounds of weighted congestion games known in the literature carry over to incomplete versions where the weights of the players are private information. We also show how an incomplete version of a natural class of monotone valid utility games, called effort market games are universally @math -smooth. Hence, we show that incomplete versions of effort market games where the abilities of the players and their budgets are private information has Bayes-Nash PoA at most 2.
|
There has been a long line of research on quantifying the inefficiency of equilibria, starting from @cite_4 who introduced the notion of the price of anarchy. A recent work by Roughgarden @cite_5 managed to unify several of these results under a proof framework called smoothness and also showed that such inefficiency proofs carry over to the inefficiency of coarse correlated equilibria. Moreover, he showed that such techniques give tight results for the well-studied class of congestion games. Later, @cite_0 also showed that it produces tight results for the larger class of weighted congestion games. Another recent work by Schoppman and Roughgarden @cite_6 addresses games with continuous strategy spaces and shows how the smoothness framework should be adapted for such games to produce tighter results. They introduce the new notion of local smoothness for such games and show that if an inefficiency upper bound proof lies in this framework then it also carries over to correlated equilibria.
|
{
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_4",
"@cite_6"
],
"mid": [
"2143381018",
"2133243407",
"",
"2070394877"
],
"abstract": [
"We characterize the price of anarchy in weighted congestion games, as a function of the allowable resource cost functions. Our results provide as thorough an understanding of this quantity as is already known for nonatomic and unweighted congestion games, and take the form of universal (cost function-independent) worst-case examples. One noteworthy byproduct of our proofs is the fact that weighted congestion games are \"tight\", which implies that the worst-case price of anarchy with respect to pure Nash, mixed Nash, correlated, and coarse correlated equilibria are always equal (under mild conditions on the allowable cost functions). Another is the fact that, like nonatomic but unlike atomic (unweighted) congestion games, weighted congestion games with trivial structure already realize the worst-case POA, at least for polynomial cost functions. We also prove a new result about unweighted congestion games: the worst-case price of anarchy in symmetric games is, as the number of players goes to infinity, as large as in their more general asymmetric counterparts.",
"The price of anarchy (POA) is a worst-case measure of the inefficiency of selfish behavior, defined as the ratio of the objective function value of a worst Nash equilibrium of a game and that of an optimal outcome. This measure implicitly assumes that players successfully reach some Nash equilibrium. This drawback motivates the search for inefficiency bounds that apply more generally to weaker notions of equilibria, such as mixed Nash and correlated equilibria; or to sequences of outcomes generated by natural experimentation strategies, such as successive best responses or simultaneous regret-minimization. We prove a general and fundamental connection between the price of anarchy and its seemingly stronger relatives in classes of games with a sum objective. First, we identify a \"canonical sufficient condition\" for an upper bound of the POA for pure Nash equilibria, which we call a smoothness argument. Second, we show that every bound derived via a smoothness argument extends automatically, with no quantitative degradation in the bound, to mixed Nash equilibria, correlated equilibria, and the average objective function value of regret-minimizing players (or \"price of total anarchy\"). Smoothness arguments also have automatic implications for the inefficiency of approximate and Bayesian-Nash equilibria and, under mild additional assumptions, for bicriteria bounds and for polynomial-length best-response sequences. We also identify classes of games --- most notably, congestion games with cost functions restricted to an arbitrary fixed set --- that are tight, in the sense that smoothness arguments are guaranteed to produce an optimal worst-case upper bound on the POA, even for the smallest set of interest (pure Nash equilibria). 
Byproducts of our proof of this result include the first tight bounds on the POA in congestion games with non-polynomial cost functions, and the first structural characterization of atomic congestion games that are universal worst-case examples for the POA.",
"",
"We resolve the worst-case price of anarchy (POA) of atomic splittable congestion games. Prior to this work, no tight bounds on the POA in such games were known, even for the simplest non-trivial special case of affine cost functions. We make two distinct contributions. On the upper-bound side, we define the framework of \"local smoothness\", which refines the standard smoothness framework for games with convex strategy sets. While standard smoothness arguments cannot establish tight bounds on the POA in atomic splittable congestion games, we prove that local smoothness arguments can. Further, we prove that every POA bound derived via local smoothness applies automatically to every correlated equilibrium of the game. Unlike standard smoothness arguments, bounds proved using local smoothness do not always apply to the coarse correlated equilibria of the game. Our second contribution is a very general lower bound: for every set L that satisfies mild technical conditions, the worst-case POA of pure Nash equilibria in atomic splittable congestion games with cost functions in L is exactly the smallest upper bound provable using local smoothness arguments. In particular, the worst-case POA of pure Nash equilibria, mixed Nash equilibria, and correlated equilibria coincide in such games."
]
}
|
1203.5155
|
1561775979
|
We consider a general class of Bayesian games where each player's utility depends on his type (possibly multidimensional) and on the strategy profile, and where players' types are distributed independently. We show that if the full-information version of the game for any fixed instance of the type profile is a smooth game, then the price-of-anarchy bound implied by the smoothness property carries over to the Bayes-Nash price of anarchy. We show how some proofs from the literature (item-bidding auctions, greedy auctions) can be cast as smoothness proofs or be simplified using smoothness. For first-price item bidding with fractionally subadditive bidders we substantially improve the existing result of Hassidim et al. (2011) from 4 to @math . This also shows a very interesting separation between first- and second-price item bidding, since second-price item bidding has PoA at least 2 even under complete information. For a larger class of Bayesian games where the strategy space of a player also changes with his type, we show that a slightly stronger definition of smoothness also implies a Bayes-Nash PoA bound. We show how weighted congestion games actually satisfy this stronger definition of smoothness. This allows us to show that the inefficiency bounds for weighted congestion games known in the literature carry over to incomplete-information versions where the weights of the players are private information. We also show how incomplete versions of a natural class of monotone valid utility games, called effort market games, are universally @math -smooth. Hence, incomplete versions of effort market games where the abilities of the players and their budgets are private information have Bayes-Nash PoA at most 2.
|
There have also been several works on quantifying the inefficiency of incomplete-information games, mainly in the context of auctions. A series of papers by Paes Leme and Tardos @cite_3 , Lucier and Paes Leme @cite_11 and @cite_12 studied the inefficiency of Bayes-Nash equilibria of the generalized second-price auction. Lucier and Borodin studied Bayes-Nash equilibria of non-truthful auctions that are based on greedy allocation algorithms @cite_9 . A series of three papers, Christodoulou, Kovacs and Schapira @cite_2 , Bhawalkar and Roughgarden @cite_7 and Hassidim, Kaplan, Mansour, Nisan @cite_1 , studied the inefficiency of Bayes-Nash equilibria of non-truthful combinatorial auctions that are based on running simultaneous separate item auctions for each item. However, many of the results in this line of work were specific to the context, and a unifying framework does not exist. Lucier and Paes Leme @cite_11 introduced the concept of semi-smoothness and showed that their proof for the inefficiency of the generalized second-price auction falls into this category. However, semi-smoothness is a much more restrictive notion of smoothness than merely requiring that every complete-information instance of the game be smooth.
|
{
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_12",
"@cite_11"
],
"mid": [
"2052719152",
"2077121235",
"2060346604",
"2022890659",
"1489333206",
"1967821428",
"1990656299"
],
"abstract": [
"We analyze the price of anarchy (POA) in a simple and practical non-truthful combinatorial auction when players have subadditive valuations for goods. We study the mechanism that sells every good in parallel with separate second-price auctions. We first prove that under a standard \"no overbidding\" assumption, for every subadditive valuation profile, every pure Nash equilibrium has welfare at least 50% of optimal --- i.e., the POA is at most 2. For the incomplete information setting, we prove that the POA with respect to Bayes-Nash equilibria is strictly larger than 2 --- an unusual separation from the full-information model --- and is at most 2 ln m, where m is the number of goods.",
"We study mechanisms for utilitarian combinatorial allocation problems, where agents are not assumed to be single-minded. This class of problems includes combinatorial auctions, multi-unit auctions, unsplittable flow problems, and others. We focus on the problem of designing mechanisms that approximately optimize social welfare at every Bayes-Nash equilibrium (BNE), which is the standard notion of equilibrium in settings of incomplete information. For a broad class of greedy approximation algorithms, we give a general black-box reduction to deterministic mechanisms with almost no loss to the approximation ratio at any BNE. We also consider the special case of Nash equilibria in full-information games, where we obtain tightened results. This solution concept is closely related to the well-studied price of anarchy. Furthermore, for a rich subclass of allocation problems, pure Nash equilibria are guaranteed to exist for our mechanisms. For many problems, the approximation factors we obtain at equilibrium improve upon the best known results for deterministic truthful mechanisms. In particular, we exhibit a simple deterministic mechanism for general combinatorial auctions that obtains an O(√m) approximation at every BNE.",
"We study markets of indivisible items in which price-based (Walrasian) equilibria often do not exist due to the discrete non-convex setting. Instead we consider Nash equilibria of the market viewed as a game, where players bid for items, and where the highest bidder on an item wins it and pays his bid. We first observe that pure Nash equilibria of this game exactly correspond to price-based equilibria (and thus need not exist), but that mixed Nash equilibria always do exist, and we analyze their structure in several simple cases where no price-based equilibrium exists. We also undertake an analysis of the welfare properties of these equilibria showing that while pure equilibria are always perfectly efficient (“first welfare theorem”), mixed equilibria need not be, and we provide upper and lower bounds on their amount of inefficiency.",
"The Generalized Second Price Auction has been the main mechanism used by search companies to auction positions for advertisements on search pages. In this paper we study the social welfare of the Nash equilibria of this game in various models. In the full information setting, socially optimal Nash equilibria are known to exist (i.e., the Price of Stability is 1). This paper is the first to prove bounds on the price of anarchy, and to give any bounds in the Bayesian setting. Our main result is to show that the price of anarchy is small assuming that all bidders play undominated strategies. In the full information setting we prove a bound of 1.618 for the price of anarchy for pure Nash equilibria, and a bound of 4 for mixed Nash equilibria. We also prove a bound of 8 for the price of anarchy in the Bayesian setting, when valuations are drawn independently, and the valuation is known only to the bidder and only the distributions used are common knowledge. Our proof exhibits a combinatorial structure of Nash equilibria and uses this structure to bound the price of anarchy. While establishing the structure is simple in the case of pure and mixed Nash equilibria, the extension to the Bayesian setting requires the use of novel combinatorial techniques that can be of independent interest.",
"We study the following Bayesian setting: m items are sold to n selfish bidders in m independent second-price auctions. Each bidder has a private valuation function that expresses complex preferences over all subsets of items. Bidders only have beliefs about the valuation functions of the other bidders, in the form of probability distributions. The objective is to allocate the items to the bidders in a way that provides a good approximation to the optimal social welfare value. We show that if bidders have submodular valuation functions, then every Bayesian Nash equilibrium of the resulting game provides a 2-approximation to the optimal social welfare. Moreover, we show that in the full-information game a pure Nash equilibrium always exists and can be found in time that is polynomial in both m and n.",
"In sponsored search auctions, advertisers compete for a number of available advertisement slots of different quality. The auctioneer decides the allocation of advertisers to slots using bids provided by them. Since the advertisers may act strategically and submit their bids in order to maximize their individual objectives, such an auction naturally defines a strategic game among the advertisers. In order to quantify the efficiency of outcomes in generalized second price auctions, we study the corresponding games and present new bounds on their price of anarchy, improving the recent results of Paes Leme and Tardos [16] and Lucier and Paes Leme [13]. For the full information setting, we prove a surprisingly low upper bound of 1.282 on the price of anarchy over pure Nash equilibria. Given the existing lower bounds, this bound denotes that the number of advertisers has almost no impact on the price of anarchy. The proof exploits the equilibrium conditions developed in [16] and follows by a detailed reasoning about the structure of equilibria and a novel relation of the price of anarchy to the objective value of a compact mathematical program. For more general equilibrium classes (i.e., mixed Nash, correlated, and coarse correlated equilibria), we present an upper bound of 2.310 on the price of anarchy. We also consider the setting where advertisers have incomplete information about their competitors and prove a price of anarchy upper bound of 3.037 over Bayes-Nash equilibria. In order to obtain the last two bounds, we adapt techniques of Lucier and Paes Leme [13] and significantly extend them with new arguments.",
"The Generalized Second Price (GSP) auction is the primary method by which sponsored search advertisements are sold. We study the performance of this auction in the Bayesian setting for players with correlated types. Correlation arises very naturally in the context of sponsored search auctions, especially as a result of uncertainty inherent in the behaviour of the underlying ad allocation algorithm. We demonstrate that the Bayesian Price of Anarchy of the GSP auction is bounded by @math , even when agents have arbitrarily correlated types. Our proof highlights a connection between the GSP mechanism and the concept of smoothness in games, which may be of independent interest. For the special case of uncorrelated (i.e. independent) agent types, we improve our bound to 2(1-1/e)^{-1} ≈ 3.16, significantly improving upon previously known bounds. Using our techniques, we obtain the same bound on the performance of GSP at coarse correlated equilibria, which captures (for example) a repeated-auction setting in which agents apply regret-minimizing bidding strategies. Moreover, our analysis is robust against the presence of irrational bidders and settings of asymmetric information, and our bounds degrade gracefully when agents apply strategies that form only an approximate equilibrium."
]
}
|
1203.3953
|
1989430134
|
Motivated by applications in quantum chemistry and solid state physics, we apply general results from approximation theory and matrix analysis to the study of the decay properties of spectral projectors associated with large and sparse Hermitian matrices. Our theory leads to a rigorous proof of the exponential off-diagonal decay ("nearsightedness") for the density matrix of gapped systems at zero electronic temperature in both orthogonal and non-orthogonal representations, thus providing a firm theoretical basis for the possibility of linear scaling methods in electronic structure calculations for non-metallic systems. We further discuss the case of density matrices for metallic systems at positive electronic temperature. A few other possible applications are also discussed.
|
Contributions in the first group are typically due to researchers working in solid state and mathematical physics. These include the pioneering works of Kohn @cite_124 and des Cloizeaux @cite_50 , and the more recent papers by Nenciu @cite_91 , @cite_81 , and a group of papers by Prodan, Kohn, and collaborators @cite_6 @cite_79 @cite_118 .
|
{
"cite_N": [
"@cite_118",
"@cite_91",
"@cite_6",
"@cite_79",
"@cite_81",
"@cite_50",
"@cite_124"
],
"mid": [
"2142501550",
"2052587314",
"1988366206",
"2135775236",
"1965675553",
"",
"1991050432"
],
"abstract": [
"In an earlier paper, W. Kohn had qualitatively introduced the concept of “nearsightedness” of electrons in many-atom systems. It can be viewed as underlying such important ideas as Pauling's “chemical bond,” “transferability,” and Yang's computational principle of “divide and conquer.” It describes the fact that, for fixed chemical potential, local electronic properties, such as the density n(r), depend significantly on the effective external potential only at nearby points. Changes of that potential, no matter how large, beyond a distance R have limited effects on local electronic properties, which rapidly tend to zero as a function of R. In the present paper, the concept is first sharpened for representative models of uncharged fermions moving in external potentials, and then the effects of electron–electron interactions and of perturbing external charges are discussed.",
"A partial answer (Theorem 1 below) to a problem concerning analytic and periodic families of projections in Hilbert spaces is given. As a consequence the existence of exponentially localised Wannier functions corresponding to nondegenerated bands of arbitrary three-dimensional crystals is proved.",
"The concept of nearsightedness of electronic matter (NEM) was introduced by W. Kohn in 1996 as the physical principle underlying Yang's electronic-structure algorithm of divide and conquer. It describes the fact that, for fixed chemical potential, local electronic properties at a point @math , like the density @math , depend significantly on the external potential @math only at nearby points. Changes @math of that potential, no matter how large, beyond a distance @math , have limited effects on local electronic properties, which tend to zero as a function of @math . This remains true even if the changes in the external potential completely surround the point @math . NEM can be quantitatively characterized by the nearsightedness range, @math , defined as the smallest distance from @math beyond which any change of the external potential produces a density change, at @math , smaller than a given @math . The present paper gives a detailed analysis of NEM for periodic metals and insulators in 1D and includes sharp, explicit estimates of the nearsightedness range. Since NEM involves arbitrary changes of the external potential, strong, even qualitative, changes can occur in the system, such as the discretization of energy bands or the complete filling of the insulating gap of an insulator with continuum spectrum. In spite of such drastic changes, we show that @math has only a limited effect on the density, which can be quantified in terms of simple parameters of the unperturbed system.",
"This paper communicates recent results in the theory of complex symmetric operators and shows, through two non-trivial examples, their potential usefulness in the study of Schrodinger operators. In particular, we propose a formula for computing the norm of a compact complex symmetric operator. This observation is applied to two concrete problems related to quantum mechanical systems. First, we give sharp estimates on the exponential decay of the resolvent and the single-particle density matrix for Schrodinger operators with spectral gaps. Second, we provide new ways of evaluating the resolvent norm for Schrodinger operators appearing in the complex scaling theory of resonances.",
"The exponential localization of Wannier functions in two or three dimensions is proven for all insulators that display time-reversal symmetry, settling a long-standing conjecture. Our proof relies on the equivalence between the existence of analytic quasi-Bloch functions and the nullity of the Chern numbers (or of the Hall current) for the system under consideration. The same equivalence implies that Chern insulators cannot display exponentially localized Wannier functions. An explicit condition for the reality of the Wannier functions is identified.",
"",
"The one-dimensional Schrödinger equation with a periodic and symmetric potential is considered, under the assumption that the energy bands do not intersect. The Bloch waves, @math , and energy bands, @math , are studied as functions of the complex variable, @math . In the complex plane, they are branches of multivalued analytic and periodic functions, @math , and @math , with branch points, @math , off the real axis. A simple procedure is described for locating the branch points. Application is made to the power series and Fourier series developments of these functions. The analyticity and periodicity of @math has some consequences for the form of the Wannier functions. In particular, it is shown that for each band there exists one and only one Wannier function which is real, symmetric or antisymmetric under an appropriate reflection, and falling off exponentially with distance. The rate of falloff is determined by the distance of the branch points @math from the real axis."
]
}
|
1203.3953
|
1989430134
|
Motivated by applications in quantum chemistry and solid state physics, we apply general results from approximation theory and matrix analysis to the study of the decay properties of spectral projectors associated with large and sparse Hermitian matrices. Our theory leads to a rigorous proof of the exponential off-diagonal decay ("nearsightedness") for the density matrix of gapped systems at zero electronic temperature in both orthogonal and non-orthogonal representations, thus providing a firm theoretical basis for the possibility of linear scaling methods in electronic structure calculations for non-metallic systems. We further discuss the case of density matrices for metallic systems at positive electronic temperature. A few other possible applications are also discussed.
|
In @cite_124 , Kohn proved the rapid decay of the Wannier functions for one-dimensional, one-particle Schrödinger operators with periodic and symmetric potentials with non-intersecting energy bands. This type of Hamiltonian describes one-dimensional, centrosymmetric crystals. Kohn's main result takes the following form: where @math denotes a Wannier function (here @math is the distance from the center of symmetry) and @math is a suitable positive constant. In the same paper (page 820) Kohn also points out that for free electrons (not covered by his theory, which deals only with insulators) the decay is very slow, being like @math .
|
{
"cite_N": [
"@cite_124"
],
"mid": [
"1991050432"
],
"abstract": [
"The one-dimensional Schrödinger equation with a periodic and symmetric potential is considered, under the assumption that the energy bands do not intersect. The Bloch waves, @math , and energy bands, @math , are studied as functions of the complex variable, @math . In the complex plane, they are branches of multivalued analytic and periodic functions, @math , and @math , with branch points, @math , off the real axis. A simple procedure is described for locating the branch points. Application is made to the power series and Fourier series developments of these functions. The analyticity and periodicity of @math has some consequences for the form of the Wannier functions. In particular, it is shown that for each band there exists one and only one Wannier function which is real, symmetric or antisymmetric under an appropriate reflection, and falling off exponentially with distance. The rate of falloff is determined by the distance of the branch points @math from the real axis."
]
}
|
1203.3953
|
1989430134
|
Motivated by applications in quantum chemistry and solid state physics, we apply general results from approximation theory and matrix analysis to the study of the decay properties of spectral projectors associated with large and sparse Hermitian matrices. Our theory leads to a rigorous proof of the exponential off-diagonal decay ("nearsightedness") for the density matrix of gapped systems at zero electronic temperature in both orthogonal and non-orthogonal representations, thus providing a firm theoretical basis for the possibility of linear scaling methods in electronic structure calculations for non-metallic systems. We further discuss the case of density matrices for metallic systems at positive electronic temperature. A few other possible applications are also discussed.
|
A few observations are in order: first, the decay result is asymptotic, that is, it implies fast decay only at sufficiently large distances @math . Second, it is consistent not only with strict exponential decay, but also with decay of the form @math where @math is arbitrary (positive or negative) and @math . Hence, the actual decay could be faster, but also slower, than exponential. Since the result provides only an estimate (rather than an upper bound) for the density matrix in real space, it is not easy to use in actual calculations. To be fair, such practical aspects were not discussed by Kohn until much later (see, e.g., @cite_12 ). Also, later work showed that the asymptotic regime is achieved already at distances of the order of 1-2 lattice constants, and helped clarify the form of the power-law prefactor, as discussed below.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2072565812"
],
"abstract": [
"A widely applicable “nearsightedness” principle is first discussed as the physical basis for the existence of computational methods scaling linearly with the number of atoms. This principle applies to the one particle density matrix @math but not to individual eigenfunctions. A variational principle for @math is derived in which, by the use of a penalty functional @math , the (difficult) idempotency of @math need not be assured in advance but is automatically achieved. The method applies to both insulators and metals."
]
}
|
1203.3953
|
1989430134
|
Motivated by applications in quantum chemistry and solid state physics, we apply general results from approximation theory and matrix analysis to the study of the decay properties of spectral projectors associated with large and sparse Hermitian matrices. Our theory leads to a rigorous proof of the exponential off-diagonal decay ("nearsightedness") for the density matrix of gapped systems at zero electronic temperature in both orthogonal and non-orthogonal representations, thus providing a firm theoretical basis for the possibility of linear scaling methods in electronic structure calculations for non-metallic systems. We further discuss the case of density matrices for metallic systems at positive electronic temperature. A few other possible applications are also discussed.
|
The next breakthrough came much more recently, when @cite_81 managed to prove localization of the Wannier functions for a broad class of insulators in arbitrary dimensions. The potentials considered by these authors are sufficiently general for the results to be directly applicable to DFT, both within the LDA and the GGA frameworks. The results in @cite_81 , however, also prove that for Chern insulators (i.e., insulators for which the Chern invariants, which characterize the band structure, are non-vanishing) the Wannier functions do not decay exponentially, therefore leaving open the question of proving the decay of the density matrix in this case @cite_57 . It should be mentioned that the mathematics in @cite_81 is fairly sophisticated, and requires some knowledge of modern differential geometry and topology.
|
{
"cite_N": [
"@cite_57",
"@cite_81"
],
"mid": [
"2121813164",
"1965675553"
],
"abstract": [
"We study the behavior of several physical properties of the Haldane model as the system undergoes its transition from the normal-insulator to the Chern-insulator phase. We find that the density matrix has exponential decay in both insulating phases, while having a power-law decay, more characteristic of a metallic system, precisely at the phase boundary. The total spread of the maximally localized Wannier functions is found to diverge in the Chern-insulator phase. However, its gauge-invariant part, related to the localization length of Resta and Sorella, is finite in both insulating phases and diverges as the phase boundary is approached. We also clarify how the usual algorithms for constructing Wannier functions break down as one crosses into the Chern-insulator region of the phase diagram.",
"The exponential localization of Wannier functions in two or three dimensions is proven for all insulators that display time-reversal symmetry, settling a long-standing conjecture. Our proof relies on the equivalence between the existence of analytic quasi-Bloch functions and the nullity of the Chern numbers (or of the Hall current) for the system under consideration. The same equivalence implies that Chern insulators cannot display exponentially localized Wannier functions. An explicit condition for the reality of the Wannier functions is identified."
]
}
|
1203.3953
|
1989430134
|
Motivated by applications in quantum chemistry and solid state physics, we apply general results from approximation theory and matrix analysis to the study of the decay properties of spectral projectors associated with large and sparse Hermitian matrices. Our theory leads to a rigorous proof of the exponential off-diagonal decay ("nearsightedness") for the density matrix of gapped systems at zero electronic temperature in both orthogonal and non-orthogonal representations, thus providing a firm theoretical basis for the possibility of linear scaling methods in electronic structure calculations for non-metallic systems. We further discuss the case of density matrices for metallic systems at positive electronic temperature. A few other possible applications are also discussed.
|
Further papers of interest include the work by Prodan, Kohn, and collaborators @cite_6 @cite_79 @cite_118 . From the mathematical standpoint, the most satisfactory results are perhaps those presented in @cite_79 . In this paper, the authors use norm estimates for complex symmetric operators in Hilbert space to obtain sharp exponential decay estimates for the resolvents of rather general Hamiltonians with spectral gap. Using the contour integral representation formula, these estimates yield (for sufficiently large separations) exponential spatial decay bounds for a broad class of insulators. A lower bound on the decay rate @math (also known as the decay length or inverse correlation length) is derived, and the behavior of @math as a function of the spectral gap @math is examined.
|
{
"cite_N": [
"@cite_79",
"@cite_118",
"@cite_6"
],
"mid": [
"2135775236",
"2142501550",
"1988366206"
],
"abstract": [
"This paper communicates recent results in the theory of complex symmetric operators and shows, through two non-trivial examples, their potential usefulness in the study of Schrodinger operators. In particular, we propose a formula for computing the norm of a compact complex symmetric operator. This observation is applied to two concrete problems related to quantum mechanical systems. First, we give sharp estimates on the exponential decay of the resolvent and the single-particle density matrix for Schrodinger operators with spectral gaps. Second, we provide new ways of evaluating the resolvent norm for Schrodinger operators appearing in the complex scaling theory of resonances.",
"In an earlier paper, W. Kohn had qualitatively introduced the concept of “nearsightedness” of electrons in many-atom systems. It can be viewed as underlying such important ideas as Pauling's “chemical bond,” “transferability,” and Yang's computational principle of “divide and conquer.” It describes the fact that, for fixed chemical potential, local electronic properties, such as the density n(r), depend significantly on the effective external potential only at nearby points. Changes of that potential, no matter how large, beyond a distance R have limited effects on local electronic properties, which rapidly tend to zero as a function of R. In the present paper, the concept is first sharpened for representative models of uncharged fermions moving in external potentials, and then the effects of electron–electron interactions and of perturbing external charges are discussed.",
"The concept of nearsightedness of electronic matter (NEM) was introduced by W. Kohn in 1996 as the physical principle underlying Yang's electronic-structure algorithm of divide and conquer. It describes the fact that, for fixed chemical potential, local electronic properties at a point @math , like the density @math , depend significantly on the external potential @math only at nearby points. Changes @math of that potential, no matter how large, beyond a distance @math , have limited effects on local electronic properties, which tend to zero as a function of @math . This remains true even if the changes in the external potential completely surround the point @math . NEM can be quantitatively characterized by the nearsightedness range, @math , defined as the smallest distance from @math beyond which any change of the external potential produces a density change, at @math , smaller than a given @math . The present paper gives a detailed analysis of NEM for periodic metals and insulators in 1D and includes sharp, explicit estimates of the nearsightedness range. Since NEM involves arbitrary changes of the external potential, strong, even qualitative, changes can occur in the system, such as the discretization of energy bands or the complete filling of the insulating gap of an insulator with continuum spectrum. In spite of such drastic changes, we show that @math has only a limited effect on the density, which can be quantified in terms of simple parameters of the unperturbed system."
]
}
|
1203.3953
|
1989430134
|
Motivated by applications in quantum chemistry and solid state physics, we apply general results from approximation theory and matrix analysis to the study of the decay properties of spectral projectors associated with large and sparse Hermitian matrices. Our theory leads to a rigorous proof of the exponential off-diagonal decay ("nearsightedness") for the density matrix of gapped systems at zero electronic temperature in both orthogonal and non-orthogonal representations, thus providing a firm theoretical basis for the possibility of linear scaling methods in electronic structure calculations for non-metallic systems. We further discuss the case of density matrices for metallic systems at positive electronic temperature. A few other possible applications are also discussed.
|
Among the papers in the second group, we mention @cite_67 @cite_55 @cite_48 @cite_116 @cite_62 @cite_69 @cite_125 . These papers provide quantitative decay estimates for the density matrix, based either on fairly rigorous analyses of special cases or on not fully rigorous discussions of general situations. Large use is made of approximations, asymptotics, heuristics and physically motivated assumptions, and the results are often validated by numerical calculations. Also, it is occasionally stated that while the results were derived for simplified models, the conclusions should be valid in general. Several of these authors emphasize the difficulty of obtaining rigorous results for general systems in arbitrary dimension. In spite of not being fully rigorous from a mathematical point of view, these contributions are certainly very valuable and appear to have been broadly accepted by physicists and chemists. We note, however, that the results in these papers usually take the form of order-of-magnitude estimates for the density matrix @math in real space, valid for sufficiently large separations @math , rather than strict upper bounds. As noted above for Kohn's results, estimates of this type may be difficult to use for computational purposes.
|
{
"cite_N": [
"@cite_67",
"@cite_62",
"@cite_69",
"@cite_48",
"@cite_55",
"@cite_125",
"@cite_116"
],
"mid": [
"",
"2003800740",
"2113726570",
"2005062393",
"2100781154",
"2065748556",
"2054686930"
],
"abstract": [
"",
"The cost of Hartree−Fock and local correlation methods is strongly dependent on the locality of the one-particle density matrix and localized orbitals. In this paper the locality and sparsity of the one-particle density matrix is investigated numerically and theoretically, primarily at the Hartree−Fock level, for linear alkanes containing up to 320 carbon atoms. A method for the calculation of localized, atom-centered, occupied orbitals is presented and compared with the Boys' localization procedure. The atom-centered orbitals are ideally suited for use in local-correlation calculations. The connection between the size of optimally localized orbitals, the locality of the density matrix, and the onset of linear scaling is investigated.",
"Analytic results for the asymptotic decay of the electron density matrix in insulators have been obtained in all three dimensions ( @math ) for a tight-binding model defined on a simple cubic lattice. The anisotropic decay length is shown to be dependent on the energy parameters of the model. The existence of the power-law prefactor, @math , is demonstrated.",
"",
"The spatial decay properties of Wannier functions and related quantities have been investigated using analytical and numerical methods. We find that the form of the decay is a power law times an exponential, with a particular power-law exponent that is universal for each kind of quantity. In one dimension we find an exponent of @math for Wannier functions, @math for the density matrix and for energy matrix elements, and @math or @math for different constructions of nonorthonormal Wannier-like functions.",
"Analytical results for the asymptotic spatial decay of the density matrix ρ(r,r') in the tight-binding model of the two-dimensional metal are presented. In various dimensions D, it is found analytically and numerically that the density matrix decays with distance according to the power law ρ(r,r') ∝ ‖r-r'‖^(-(D+1)/2).",
"We provide a tight-binding model of insulator, for which we derive an exact analytic form of the one-body density matrix and its large-distance asymptotics in dimensions @math . The system is built out of a band of single-particle orbitals in a periodic potential. Breaking of the translational symmetry of the system results in two bands, separated by a direct gap whose width is proportional to the unique energy parameter of the model. The form of the decay is a power law times an exponential. We determine the power in the power law and the correlation length in the exponential, versus the lattice direction, the direct-gap width, and the lattice dimension. In particular, the obtained exact formulae imply that in the diagonal direction of the square lattice the inverse correlation length vanishes linearly with the vanishing gap, while in non-diagonal directions, the linear scaling is replaced by the square root one. Independently of direction, for sufficiently large gaps the inverse correlation length grows logarithmically with the gap width."
]
}
|
1203.3953
|
1989430134
|
Motivated by applications in quantum chemistry and solid state physics, we apply general results from approximation theory and matrix analysis to the study of the decay properties of spectral projectors associated with large and sparse Hermitian matrices. Our theory leads to a rigorous proof of the exponential off-diagonal decay ("nearsightedness") for the density matrix of gapped systems at zero electronic temperature in both orthogonal and non-orthogonal representations, thus providing a firm theoretical basis for the possibility of linear scaling methods in electronic structure calculations for non-metallic systems. We further discuss the case of density matrices for metallic systems at positive electronic temperature. A few other possible applications are also discussed.
|
Finally, as representatives of the third group of papers we select @cite_78 and @cite_32 . The authors of @cite_78 use the Fermi--Dirac approximation of the density matrix and consider its expansion in the Chebyshev basis. From an estimate of the rate of decay of the coefficients of the Chebyshev expansion of @math , they obtain estimates for the number of terms needed to satisfy a prescribed error in the approximation of the density matrix. In turn, this yields estimates for the rate of decay as a function of the extreme eigenvalues and spectral gap of the discrete Hamiltonian. Because of some ad hoc assumptions and the many approximations used, the arguments in this paper cannot be considered mathematically rigorous, and the estimates thus obtained are not always accurate. Nevertheless, the idea of using a polynomial approximation for the Fermi--Dirac function and the observation that exponential decay of the expansion coefficients implies exponential decay in the (approximate) density matrix is quite valuable and, as we show in this paper, can be made fully rigorous.
|
{
"cite_N": [
"@cite_32",
"@cite_78"
],
"mid": [
"2092769641",
"2013308529"
],
"abstract": [
"We compute the single-particle density matrix in large (500-, 512-, and 1000-atom) models of fcc aluminum and crystalline (diamond) and amorphous silicon and carbon. We use an approximate density functional Hamiltonian in the local density approximation. The density matrix for fcc aluminum is found to closely approximate the results for jellium, and the crystalline and amorphous insulators exhibit exponential decay albeit with pronounced anisotropy. We compare the computed decays to existing predictions of the fall off of the density matrix in insulators and find that the 'tight-binding' prediction of Kohn [W. Kohn, Phys. Rev. 115, 809 (1959)] provides the best overall fit to our calculations for Si and C.",
"The range and sparsity of the one-electron density matrix (DM) in density functional theory is studied for large systems using the analytical properties of its Chebyshev expansion. General estimates of the range of the DM are derived, showing that the range is inversely proportional to the square root of an insulator band gap and inversely proportional to the square root of the temperature. These findings support the 'principle of nearsightedness' introduced recently by W. Kohn [Phys. Rev. Lett. 76, 3168 (1996)]. These estimates are used to study the complexity of several linear system-size scaling electronic structure algorithms which differ in their dependence on the geometric dimensionality of the system. © 1997 The American Physical Society"
]
}
|
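The Chebyshev argument attributed above to @cite_78 is easy to check numerically: the Chebyshev coefficients of the Fermi--Dirac function f(x) = 1/(1 + e^(βx)) on [-1, 1] decay geometrically, so a truncated expansion carries an exponentially small uniform error, which in turn forces exponential off-diagonal decay of the approximate density matrix f(H). A minimal sketch (the value of β is hypothetical, chosen only for the demonstration):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

beta = 10.0  # inverse temperature, in units of the spectral half-width (hypothetical value)
fermi = lambda x: 1.0 / (1.0 + np.exp(beta * x))

# Chebyshev interpolant of the Fermi-Dirac function on [-1, 1]
coeffs = C.chebinterpolate(fermi, 60)

# f - 1/2 is odd, so the decay is carried by the odd-index coefficients;
# they shrink geometrically, at a rate set by the distance of the poles
# +/- i*pi/beta from the interval [-1, 1].
odd = np.abs(coeffs[1::2])
print(odd[0], odd[10], odd[25])
```

The observed geometric decay of `odd` is exactly the mechanism the text describes: truncating after k terms leaves an O(ρ^(-k)) error, and a degree-k polynomial of a banded H has bandwidth proportional to k.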
1203.3953
|
1989430134
|
Motivated by applications in quantum chemistry and solid state physics, we apply general results from approximation theory and matrix analysis to the study of the decay properties of spectral projectors associated with large and sparse Hermitian matrices. Our theory leads to a rigorous proof of the exponential off-diagonal decay ("nearsightedness") for the density matrix of gapped systems at zero electronic temperature in both orthogonal and non-orthogonal representations, thus providing a firm theoretical basis for the possibility of linear scaling methods in electronic structure calculations for non-metallic systems. We further discuss the case of density matrices for metallic systems at positive electronic temperature. A few other possible applications are also discussed.
|
Finally, in @cite_32 the authors present the results of numerical calculations for various insulators in order to gain some insight on the dependence of the decay length on the gap. Their experiments confirm that the decay behavior of @math can be strongly anisotropic, and that different rates of decay may occur in different directions; this is consistent with the analytical results in @cite_116 .
|
{
"cite_N": [
"@cite_116",
"@cite_32"
],
"mid": [
"2054686930",
"2092769641"
],
"abstract": [
"We provide a tight-binding model of insulator, for which we derive an exact analytic form of the one-body density matrix and its large-distance asymptotics in dimensions @math . The system is built out of a band of single-particle orbitals in a periodic potential. Breaking of the translational symmetry of the system results in two bands, separated by a direct gap whose width is proportional to the unique energy parameter of the model. The form of the decay is a power law times an exponential. We determine the power in the power law and the correlation length in the exponential, versus the lattice direction, the direct-gap width, and the lattice dimension. In particular, the obtained exact formulae imply that in the diagonal direction of the square lattice the inverse correlation length vanishes linearly with the vanishing gap, while in non-diagonal directions, the linear scaling is replaced by the square root one. Independently of direction, for sufficiently large gaps the inverse correlation length grows logarithmically with the gap width.",
"We compute the single-particle density matrix in large (500-, 512-, and 1000-atom) models of fcc aluminum and crystalline (diamond) and amorphous silicon and carbon. We use an approximate density functional Hamiltonian in the local density approximation. The density matrix for fcc aluminum is found to closely approximate the results for jellium, and the crystalline and amorphous insulators exhibit exponential decay albeit with pronounced anisotropy. We compare the computed decays to existing predictions of the fall off of the density matrix in insulators and find that the 'tight-binding' prediction of Kohn [W. Kohn, Phys. Rev. 115, 809 (1959)] provides the best overall fit to our calculations for Si and C."
]
}
|
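The kind of numerical experiment described above for @cite_32 can be reproduced in a few lines on a toy one-dimensional gapped system. The sketch below (not the computation of that paper; chain length and hopping values are hypothetical, chosen only to open a gap) diagonalizes a dimerized tight-binding chain and builds the zero-temperature density matrix P as the projector onto the occupied eigenstates; its off-diagonal entries decay exponentially with distance from the diagonal:

```python
import numpy as np

n = 100                      # number of sites (hypothetical)
t_strong, t_weak = 1.0, 0.5  # alternating hoppings; t_strong != t_weak opens a gap

# Dimerized (SSH-type) tight-binding Hamiltonian with open boundaries
hop = np.array([t_strong if i % 2 == 0 else t_weak for i in range(n - 1)])
H = np.diag(hop, 1) + np.diag(hop, -1)

# Zero-temperature density matrix at half filling:
# projector onto the n//2 lowest-energy eigenstates
w, V = np.linalg.eigh(H)
occ = V[:, : n // 2]
P = occ @ occ.T

# On this bipartite lattice, same-sublattice entries vanish by symmetry,
# so the decay is sampled at odd offsets from site 0.
print(abs(P[0, 1]), abs(P[0, 21]), abs(P[0, 41]))
```

P is an orthogonal projector of trace n//2, and |P[0, j]| falls off exponentially in j with a decay length controlled by the gap, i.e., by the ratio t_weak/t_strong, matching the qualitative picture of exponential (and, in higher dimensions, anisotropic) decay for insulators.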
1203.3953
|
1989430134
|
Motivated by applications in quantum chemistry and solid state physics, we apply general results from approximation theory and matrix analysis to the study of the decay properties of spectral projectors associated with large and sparse Hermitian matrices. Our theory leads to a rigorous proof of the exponential off-diagonal decay ("nearsightedness") for the density matrix of gapped systems at zero electronic temperature in both orthogonal and non-orthogonal representations, thus providing a firm theoretical basis for the possibility of linear scaling methods in electronic structure calculations for non-metallic systems. We further discuss the case of density matrices for metallic systems at positive electronic temperature. A few other possible applications are also discussed.
|
To put our work further into perspective, we quote from two prominent researchers in the field of electronic structure, one a mathematician, the other a physicist. In his excellent survey @cite_90 Claude Le Bris, discussing the basis for linear scaling algorithms, i.e., the assumed sparsity of the density matrix, wrote (pages 402 and 404):
|
{
"cite_N": [
"@cite_90"
],
"mid": [
"2046434053"
],
"abstract": [
"We present the field of computational chemistry from the standpoint of numerical analysis. We introduce the most commonly used models and comment on their applicability. We briefly outline the results of mathematical analysis and then mostly concentrate on the main issues raised by numerical simulations. A special emphasis is laid on recent results in numerical analysis, recent developments of new methods and challenging open issues."
]
}
|